# Open-UST: An Open-Source Ultrasound Tomography Transducer Array System

**arXiv:** [2302.10114](http://arxiv.org/abs/2302.10114v1) | **Authors:** Morgan Roberts, Eleanor Martin, Michael D. Brown, Ben T. Cox, Bradley E. Treeby | **Published:** 2023-02-20
###### Abstract
Fast imaging methods are needed to promote widespread clinical adoption of Ultrasound Tomography (UST), and more widely available UST hardware could support the experimental validation of new measurement configurations. In this work, an open-source 256-element transducer ring array was developed (morganjroberts.github.io/open-UST) and manufactured using rapid prototyping, for only £2k. Novel manufacturing techniques were used, resulting in a 1.17\({}^{\circ}\) mean beam axis skew angle, a 104 \(\mu\)m mean element position error, and a \(\pm\)13.6 \(\mu\)m deviation in matching layer thickness. The nominal acoustic performance was measured using hydrophone scans and watershot data, and the 61.2 dB SNR, 55.4\({}^{\circ}\) opening angle, 16.3 mm beamwidth and 54% transmit-receive bandwidth (-12 dB), were found to be similar to existing systems, and compatible with full waveform inversion reconstruction methods. The inter-element variation in acoustic performance was typically \(<\)10% without using normalisation, meaning that the elements can be modelled identically during image reconstruction, removing the need for individual source definitions based on hydrophone measurements. Finally, data from a phantom experiment was successfully reconstructed. These results demonstrate that the open-UST system is accessible for users, and suitable for UST imaging research.
## 1 Introduction
Breast cancer screening reduces mortality, but mammograms have lower sensitivity for people with high breast density, and over-diagnosis causes harm in healthy people [1]. Ultrasound Tomography (UST) is a method for measuring the 3D acoustic property distributions in the breast, using a transducer array to transmit ultrasound waves into the breast from different angles and measure the transmitted and scattered field. Clinical performance has been promising [2, 3], and advanced methods are being developed [4, 5], but improvement is still needed to achieve clinically useful image reconstruction times. UST hardware allows new measurement configurations to be investigated, but there is a high barrier to entry since UST systems are not available off the shelf, and custom arrays are expensive. Therefore, more widely available and re-configurable UST hardware could accelerate progress towards fast, accurate imaging methods and promote widespread clinical adoption of UST.
Rapid prototyping technologies such as 3D-printing can be used to manufacture ultrasound hardware in-house, without expensive specialist equipment [6, 7]. Rapid prototyped ultrasound hardware is low cost, has a short lead time, and can be easily modified, but the user has to design components from scratch. However, rapid prototyped hardware can also be
easily open-sourced, a concept that has already promoted collaboration in the UST community [8; 9], which reduces the upfront design time, and could allow users without transducer manufacture experience to build a UST system in-house.
Open-source designs have been released for a microbubble characterisation chamber [10] and an acoustic levitation system [11], and the files were accompanied by manufacturing instructions, which are essential for users to access the project. However, an open-source instructional guide for ultrasound transducer array manufacture does not exist. This paper presents the design, manufacture and evaluation of open-UST: a low cost UST transducer array system optimised for in-house manufacture using rapid prototyping (Figure 1) [12]. The hardware distribution includes CAD models, PCB and 3D-printing files, a bill of materials, assembly videos, and full manufacture documentation. End users are expected to be UST researchers comfortable with manual assembly processes, for example soldering and polymer casting. The goals of the open-UST project, explained below, are that:
1. The manufacture is accessible to users,
2. The design parameters and nominal acoustic behaviour are suitable for UST imaging research,
3. The inter-element variation (IEV) in acoustic behaviour is low.
### Accessibility
For users to access open-UST, the cost and lead time of the system should be low, meaning that the material cost should be minimised, and that only essential features should be included to accelerate assembly. Also, the manufacturing processes should be simple, and it is assumed that users do not have access to specialist transducer manufacture equipment, only a vacuum chamber, a 3D-printer capable of printing polylactic acid (PLA) and polyvinyl alcohol (PVA) filament, and standard workshop hand tools.
### Design Parameters and Nominal Acoustic Behaviour
The bandwidth and signal-to-noise-ratio (SNR) of the open-UST transducers should provide useful data at frequencies above 350 kHz [2], to be compatible with full waveform inversion (FWI) reconstruction methods. The finite bandwidth and size of physical transducer elements affect the way that they emit and respond to ultrasound waves, and modelling their angle dependent frequency response (ADR) during image reconstruction can help to better match simulated and observed data, leading to increased reconstruction accuracy [13]. The open-UST transducers should have a smooth ADR, since this makes the UST data easier to interpret, and allows the response to be easily incorporated into the
Figure 1: **Top Left:** Transducer during manufacture. **Top Right:** Finished transducer. **Bottom:** open-UST transducer array.
reconstruction forward model, for example by representing the transducers as ideal pistons with an effective element size chosen to best represent the watershot data.
### Interelement Variation (IEV) in Acoustic Behaviour
For transducer arrays, manufacturing tolerances can cause the individual elements to have slightly different ADRs, small variations in position and beam axis skews. These can be characterised to improve reconstruction accuracy [13], but this requires additional time, hydrophone measurements, and computational complexity, presenting another barrier to entry to users. Although reconstruction methods exist that have achieved excellent results without modelling the ADR of the transducer elements [5], their implicit assumption is still that the transducers behave identically. Therefore, the IEV in acoustic behaviour of the open-UST system must be low so that the transducers can be modelled identically.
For high performance arrays, low IEV is achieved using high precision manufacturing equipment, for example dicing saws, spin coaters and lapping machines, and so a tradeoff between cost and IEV is expected for rapid prototyped transducer arrays. Previously, the IEV in electrical impedance was measured for a 3D-printed histotripsy array [6]. In this paper, the IEV in electrical impedance, transmit impulse response, beam axis skew, beamwidth, opening angle, signal-to-noise-ratio, receive crosstalk and transmit-receive directional response are assessed for the open-UST system.
Previously, prototype transducer modules were evaluated for open-UST [14], and low cost techniques for matching layer deposition were developed [15]. In this paper, the design and manufacture of the open-UST system is explained, and then the experimental evaluation of IEV in acoustic performance is described. Finally, results from a phantom imaging experiment are shown as a proof of principle of the open-UST system.
## 2 Design and Manufacture
### Array Design
The open-UST aperture configuration and acoustic performance should support typical UST imaging use cases, and facilitate experimentation with new arrangements. Two single-element or clinical array transducers could be purchased and mounted to a rotation stage to sample a virtual array [16; 17], but this configuration has a large data acquisition time and so multi-element transducer arrays that fully surround the object are typically used instead. Either bowl [18] or rotating planar [19] configurations are used in 3D, but the most common design is a vertically translated 2D ring array [20; 17; 21], since these allow data to be collected and reconstructed in 2D slices, which is computationally efficient. The standard open-UST configuration is a 2D ring array, but its modularity also allows reconfiguration into 3D geometries. To simplify manufacture, each module is a linear array, with a total of 16 modules forming a hexadecagon approximation to a ring.
Current UST systems have between 40 [21] and 2304 [18] transducer elements. Systems with many elements have denser sampling and higher image quality, but are complex to manufacture and require a data acquisition system (DAQ) with an equivalent channel count, or a multiplexer. Rotating the array can increase the sampling density, but adds complexity and cost, and increases the data acquisition time. The standard open-UST configuration is a 256-element ring array, since this is a typical number of channels available from open ultrasound DAQ platforms [22]. Excellent reconstructions of in vivo data from 256-element ring arrays have been demonstrated using full waveform inversion (FWI) reconstruction methods [23].
The open-UST array diameter is 220 mm, which is larger than the pendant breast diameter for an entire study population of American women [24]. The diameter and number of elements constrains the intra-module element pitch, which was chosen to be 2.54 mm to align with common PCB connector sizes. An overview of the array design is shown in Table 1.
### PZT Element Selection
The open-UST system uses individual PZT plates for the transducer elements, since users are unlikely to have access to a dicing saw, and custom diced PZT slabs are expensive. PZT 850 was selected as a piezoelectric material, which is ideal for sensing applications, with an acoustic impedance of \(Z_{p}\) = 31.5 MRayl, and frequency constants of \(N_{\mathrm{T}}\) = 2040 m/s and \(N_{\mathrm{L}}\) = 1500 m/s in the thickness and lateral directions respectively [25]. The dimensions of the PZT elements affect their resonance spectra and beam patterns. For breast UST, the centre frequency is typically between 0.9 MHz [19] and 3 MHz [20], and for 2D ring arrays a wide lateral opening angle and thin elevation beamwidth are required to confine the waves to a slice through the entire breast.
Decreasing the lateral width of PZT elements increases their lateral opening angle, but decreases their sensitivity and presents manufacturing challenges. The minimum PZT plate width widely available off-the-shelf is 1 mm. Figure 2
shows opening angle predictions for lateral widths from 0.7 mm to 1.5 mm between 800 kHz and 2 MHz, simulated using the acoustic field propagator [26] from the k-Wave toolbox. For each simulation, the -6 dB opening angle was extracted from the far field directional response in the lateral plane. A width of 1 mm provides a minimum opening angle of 54\({}^{\circ}\) at 2 MHz, which is comparable to the 43\({}^{\circ}\) -10 dB opening angle at 2.6 MHz achieved by other UST systems [27], indicating that this is a suitable beam pattern for UST imaging.
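As a rough cross-check of these simulated values, the far-field directivity of an ideal uniform line source, \(D(\theta)=\mathrm{sinc}(w\sin\theta/\lambda)\), reproduces the quoted opening angle; the sketch below is a simplified stand-in for the k-Wave simulations and assumes a water sound speed of 1500 m/s.

```python
import numpy as np

def opening_angle_deg(width_m, freq_hz, c=1500.0, threshold_db=-6.0):
    """Full opening angle at threshold_db for an ideal uniform line source.

    Far-field directivity: D(theta) = sinc(w*sin(theta)/lambda),
    using numpy's normalised sinc, sinc(x) = sin(pi*x)/(pi*x).
    """
    wavelength = c / freq_hz
    theta = np.linspace(0.0, np.pi / 2, 20001)
    level_db = 20.0 * np.log10(
        np.abs(np.sinc(width_m * np.sin(theta) / wavelength)) + 1e-12)
    half_angle = theta[np.argmax(level_db < threshold_db)]  # first crossing
    return np.degrees(2.0 * half_angle)

# A 1 mm wide element at 2 MHz gives ~54 degrees, consistent with Figure 2A.
print(opening_angle_deg(1.0e-3, 2.0e6))
```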
A 1 mm PZT thickness was selected due to its availability. A width to thickness aspect ratio \(w/t\approx\) 1 can cause complex behaviour due to the interaction of lateral and thickness vibration modes [28], meaning the exact plate resonances could not be calculated from the width and thickness. However, approximate thickness and lateral resonances of 2 MHz and 1.5 MHz were predicted, which are within the required range for UST.
Lenses can provide a thin and uniform elevational beamwidth [29], but these add complexity to the manufacturing procedure, and instead the elevation height of the elements was optimised to provide weak focusing. Figure 2 shows -6 dB elevational beamwidth predictions, averaged over all axial positions from the source to the array radius, for elevational heights from 7 mm to 15 mm, simulated using the acoustic field propagator. The elevation height was selected as 10 mm due to its off the shelf availability, which is the optimal value from 1.34 MHz to 1.58 MHz. Following the dimension selection (summarised in Table 1), 1 \(\times\) 1 \(\times\) 10 mm PZT elements (Item 689, APC International Ltd, PA, USA) were purchased.
Figure 3 shows the in-air electrical input impedance spectrum of a single PZT plate, measured using a vector impedance analyser (4193A, Hewlett Packard). There are multiple resonances in the phase spectrum from 1.22 MHz to 2.86 MHz, which is a wider range than originally predicted, highlighting the difficulty in estimating resonances from the element
| Parameter | Value |
|---|---|
| Number of elements | 256 |
| Number of transducer modules | 16 |
| Elements per module | 16 |
| PZT element thickness | 1 mm |
| PZT element width | 1 mm |
| PZT element length | 10 mm |
| PZT pitch | 2.54 mm |
| Material costs | £2k |
| Assembly time | 4 months |

Table 1: Key parameters of the open-UST transducer array.
Figure 2: **A:** Predicted opening angle as a function of source width and frequency. **B:** Simulated average beam width as a function of source height and frequency. White crosses indicate the optimal elevation height with minimum beamwidth at each frequency.
dimensions alone. This spectrum suggested that the acoustic centre frequency would be close to 1.22 MHz, due to the strong series resonance caused by the lateral vibration mode coupled into the thickness mode. With a damping backing layer, the acoustic response was expected to tail off smoothly towards 2.86 MHz, providing a suitable bandwidth for UST imaging. Although the 10 mm elevation height is optimal for 1.34 MHz - 1.58 MHz, at the 1.22 MHz resonance, the average beamwidth was 10.24 mm, which is only slightly larger than the optimal value of 10.15 mm.
### Acoustic Stack Design and Manufacture
The acoustic stack for each element, shown in Figure 4O, is a PZT plate with a backing layer, quarter wavelength matching layer, and a thin polyurethane waterproof coating. Although formulations exist that account for complex resonant behaviour, acoustic stack design tools such as the KLM model were not used, since these rely heavily on accurate material properties, which are usually found by fitting to experimental data [30], and this was not available. The transducer module manufacturing procedure is documented on the open-UST website [12], and is summarised in Figure 4.
Matching and backing layer composites can be made by mixing filler powder with castable polymers. Tungsten powder was chosen because relatively low volume fractions can produce high impedance composites, which results in a low enough viscosity for the composite to be properly hand mixed. Araldite Standard epoxy (Huntsman Advanced Materials, Cambridge, UK) was chosen for the polymer since it is widely available, low cost, and has a high enough viscosity to prevent particle settling during curing.
Matching layers are typically tuned to the existing PZT resonance to achieve an even larger response. However, prototyping showed that the transmit pressure at 1.22 MHz was sufficient without a matching layer (Figure 5). Instead, the matching layer resonance frequency was selected to be 2 MHz, which is higher than the main resonance and in the middle of the 1.22 MHz - 2.86 MHz range where an acoustic response was expected after damping. That was done because boosting the high frequency content is useful for improving resolution during image reconstruction. Figure 5 shows that this design worked as intended. For a tungsten-epoxy composite used to match PZT with impedance \(Z_{p}\) = 31.5 MRayl to water with impedance \(Z_{w}\) = 1.5 MRayl, the target impedance of the matching layer should be \(Z_{l}=\sqrt{Z_{p}Z_{w}}\) = 6.87 MRayl. Preliminary testing showed that a tungsten weight fraction of 86.7% provides a sound speed of 1317 m/s, an acoustic impedance of 6.67 MRayl [31], and requires a quarter-wavelength matching layer thickness of 165 \(\mu\)m. For manufacture, a low cost deposition method was developed, first using blade coating (Figure 4B-D), and then compression between glass plates [15], producing a thickness distribution of 174 \(\mu\)m \(\pm\) 13.6 \(\mu\)m (N = 128).
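The design values quoted here follow from two standard relations, the geometric-mean impedance rule and the quarter-wavelength condition; a minimal sketch of the arithmetic (the 1317 m/s composite sound speed is the measured value cited above):

```python
import math

Z_pzt, Z_water = 31.5e6, 1.5e6    # acoustic impedances [Rayl]
c_layer = 1317.0                  # composite sound speed [m/s], from [31]
f_match = 2.0e6                   # matching layer resonance frequency [Hz]

Z_target = math.sqrt(Z_pzt * Z_water)   # ideal single layer: ~6.87 MRayl
t_quarter = c_layer / (4.0 * f_match)   # quarter-wavelength: ~165 um

print(f"{Z_target / 1e6:.2f} MRayl, {t_quarter * 1e6:.0f} um")
```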
A backing layer was added to the rear face of each PZT element to increase damping and widen the bandwidth. For tungsten-polymer composites, increasing the tungsten weight fraction increases acoustic impedance but decreases attenuation [32]. Previous prototyping showed that a tungsten weight ratio of 80.8% has a high enough impedance to provide damping, and a high enough absorption to attenuate internal backing layer reverberation to below the noise floor [14]. A common backing layer was cast onto the rear electrodes of the PZT elements with a 24 mm thickness, and its rear face was given a scattering structure to further attenuate backing layer reverberation (Figure 4H-I).
Figure 3: Electrical impedance magnitude (**A**) and phase (**B**) spectra of 128 transducer elements measured in water, after backing layer casting. The mean and entire measured range of the data are shown. Red lines show the in-air electrical input impedance spectra for a single PZT plate before manufacture.
Figure 4: **A-N**: Summary of the transducer module manufacture. **O**: Cross section through the acoustic stack.
A 400 \(\mu\)m layer of Aptflex F7 polyurethane (Precision Acoustics, Dorchester, UK) was added to the front face of the transducer, to provide electrical insulation. This material has an acoustic impedance of 1.5 MRayl, making the transmission coefficient at the coating-water boundary approximately 1.
The total material cost of the manufacture was £2k, comprising £906 for the PZT elements, £140 for 3D-printing filament, and £954 for off-the-shelf components, adhesives, and consumable materials. The total manufacture duration was 4 months of one person working full time, including the manufacture of custom tooling. Due to the large number of transducer modules and elements, most manufacture processes were typically performed in 8 separate batches, meaning that the time taken for 3D-printing parts was not a limiting factor, since this took place in parallel to the manual assembly steps.
## 3 Nominal Acoustic Performance and IEV
After the 16 array modules were manufactured, their nominal acoustic performance and IEV were characterised. All of the results are shown without normalisation, and are summarised in Table 2.
### Electrical Input Impedance
Figure 3 shows the electrical input impedance of the transducer elements immersed in deionised water, measured after backing layer casting. The phase spectrum has a peak at 1.22 MHz matching the series resonance of the in-air PZT plate, with a small phase angle indicating relatively weak damping. This could be because the PZT vibration is dominated by the coupling of lateral and thickness modes, which may not be effectively damped by the backing layer. Other UST transducers with a similar backing material had a larger resonance phase angle of -62.5\({}^{\circ}\), and were damped laterally by epoxy [33], but in this work the lateral damping was very weak, since the kerfs were filled with water during the impedance measurement (polyurethane for the final transducers), creating a large reflection coefficient at the PZT-kerf boundary.
The IEV in impedance magnitude and phase was low, with no defective channels. The \(\pm\)4.96 \({}^{\circ}\) standard deviation in peak phase angle is very similar to the \(\pm\)5.7 \({}^{\circ}\) standard deviation reported for 144 UST transducer elements manufactured using advanced equipment [34], with a smaller overall range. This demonstrates the reliability of the conductive-epoxy technique used to connect the PZT element electrodes to the PCB, and also demonstrates that the matching layers, PZT plates, and backing layers have uniform acoustic properties and dimensions. The ability to measure the electrical input impedance of the acoustic stack during manufacture is a useful interface, since it allows users to discard defective transducer modules as soon as possible or to collect data when making modifications, such as changing the PZT element dimensions.
### Transmit Impulse Response
Figure 5 shows the mean transmit impulse response for 64 elements, measured using a calibrated 200 \(\mu\)m polyvinylidene fluoride needle hydrophone (Precision Acoustics, Dorchester, UK) after driving each element with an 80 ns pulse, which excited harmonics up to 6.8 MHz. The higher harmonics are not typically used for UST, so the impulse response was low pass filtered (cutoff 5 MHz) so that the waveform shape and IEV could be more easily visualised in the frequency range of interest.
The -6 dB and -12 dB fractional bandwidths are 53 % and 175 %, with pass-bands at 967 kHz - 1.67 MHz and 833 kHz - 3.23 MHz respectively. Figure 5 shows weak resonance features from the 1.22 MHz acoustic centre frequency up to 2.86 MHz, which correspond to the in-air PZT resonances shown in Figure 3. The IEV in impulse response was low with no outliers, and only a small amplitude deviation at resonance of \(\pm\)6.9%, which again demonstrates the consistency in the acoustic properties and dimensions of the matching layers, PZT plates, and backing layers.
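For reference, a fractional bandwidth of this kind can be extracted from an amplitude spectrum as sketched below; taking the passband centre as the midpoint of the two crossings is one common convention, and is an assumption here since the paper does not state its exact definition.

```python
import numpy as np

def fractional_bandwidth(freqs, amplitude, level_db=-6.0):
    """Return (f_low, f_high, FBW) for the passband level_db below the peak.

    freqs, amplitude: 1D arrays of frequency [Hz] and linear spectral amplitude.
    """
    threshold = np.max(amplitude) * 10.0 ** (level_db / 20.0)
    above = np.where(amplitude >= threshold)[0]
    f_low, f_high = freqs[above[0]], freqs[above[-1]]
    return f_low, f_high, (f_high - f_low) / (0.5 * (f_low + f_high))
```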
Figure 5 also shows the mean amplitude spectrum of a prototype 16-element module manufactured identically to the final transducers, but without matching layers. Comparing the two spectra shows that tuning the matching layer resonance to 2 MHz to boost the high frequency response was successful, since the -6 dB and -12 dB bandwidths increased from 39 % and 70 % to 53 % and 175 % respectively. At the 1.22 MHz centre frequency, the mean amplitude was 4.7 % lower for the 16 elements measured without matching layers.
Users could tune the matching layer resonance by choosing a different thickness during manufacture, or the matching layers could be omitted altogether, which could reduce manufacture time by 2 months. Figure 5 shows that this may decrease image resolution due to the lower SNR above 1.4 MHz, but that the low frequency data required for FWI methods would be unaffected.
### Field Scans
Figure 6 shows the peak positive pressure field of a single element (channel 8). To reduce acquisition time, field scans were performed for 3 modules with all 16 elements driven simultaneously with a 1-cycle 1.4045 MHz 80 V tri-state pulse, which matches the driving conditions used for UST data acquisition. Hydrophone voltage signals were acquired over a 100.1 mm \(\times\) 20.3 mm plane (0.35 mm step, 30.45 mm axial offset), within a time window including the entire pulse. The frequency dependent sensitivity of the hydrophone was deconvolved to obtain the pressure, and the measured field was backprojected to the source plane using the angular spectrum method [35]. A mask was used to isolate the source field of each element, which was then re-projected forwards to 5 planes from z = 70 mm to z = 110 mm (for channel 8, the entire peak pressure field was projected for visualisation). The pressure amplitude field of each element \(F(x,y,z,f)\) was calculated using a Fast Fourier Transform, and the beam axis was located at each axial \(z\) position by calculating the weighted centroid \((x_{c},y_{c},f_{c})\) of a cross section through the field, ignoring values below -6dB (see Figure 6).
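The angular spectrum method [35] used for these projections propagates a monochromatic field plane-to-plane in the spatial-frequency domain; below is a minimal sketch, assuming a uniform grid and water sound speed, with evanescent components always decayed (never amplified) so that backprojection stays stable.

```python
import numpy as np

def angular_spectrum_propagate(p0, dx, freq, dz, c=1500.0):
    """Propagate a monochromatic 2D field p0(x, y) by a signed distance dz [m].

    Negative dz corresponds to backprojection towards the source plane.
    """
    k = 2.0 * np.pi * freq / c
    kx = 2.0 * np.pi * np.fft.fftfreq(p0.shape[0], d=dx)
    ky = 2.0 * np.pi * np.fft.fftfreq(p0.shape[1], d=dx)
    KX, KY = np.meshgrid(kx, ky, indexing="ij")
    kz2 = k**2 - KX**2 - KY**2
    kz = np.sqrt(kz2.astype(complex))
    H = np.exp(1j * kz * dz)                    # propagating components
    evanescent = kz2 < 0
    H[evanescent] = np.exp(-np.abs(kz[evanescent]) * abs(dz))  # always decay
    return np.fft.ifft2(np.fft.fft2(p0) * H)
```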
### Beam Axis Skew Angles
Figure 7 shows the distribution of elevational and lateral beam axis skew angles for 48 elements, calculated in the far field using linear fitting to the beam axis intersection points \((x_{c},y_{c},z)\) defined above. Transducer body misalignment relative
Figure 5: **A:** Transmit impulse response hydrophone signals, aligned in time. **B:** Amplitude spectra, shown before and after low pass filtering (LPF). The red line shows the mean amplitude spectra of N = 16 transducer elements without matching layers (displayed with the same dB reference).
Figure 6: Peak pressure field for element 8, normalised and log compressed. **A:** Lateral plane and directional response profile calculated using 2D interpolation. **B:** Elevational plane and elevational response profile. The projected planes and the beam axis intersection points are also shown.
to the hydrophone scan axes was estimated to be 0.115\({}^{\circ}\) and 0.337\({}^{\circ}\) in the elevation and lateral planes respectively, based on surface height data acquired from the transducer body, and by inspecting phase differences at the source plane. During UST data acquisition the transducer modules are mounted in the same fashion, and so the misalignment is also expected to be very small.
The small skew angles show that the PZT elements are well aligned relative to the transducer module, and that beam skew caused by non-uniformity in matching layer thickness is negligible. The lateral skew is larger than the elevational skew because the small element width makes alignment in this plane more sensitive to manufacture error. The beam axis skew angles are so small that they could be ignored during image reconstruction, which simplifies the transducer modelling.
### Beamwidth, Opening Angle and Angle Dependent Frequency Response
Figure 8 shows the mean elevational response (amplitude as a function of elevation position \(y\) and frequency \(f\) at the centre of the ring array \(z_{c}\) = 110 mm) and mean far field directional response of 48 elements, which were derived from the amplitude fields calculated previously (see Figure 6).
The ADR is smooth in both planes, suggesting that it could be easily incorporated into the forward model during image reconstruction. The standard deviation in the elevational and directional responses of the elements is low (maximum 9.5 % and 11.5 %), which is further evidence of the very small beam axis skew angles and the uniformity in the effective
| Parameter | Mean Value | Standard Deviation |
|---|---|---|
| **Electrical input impedance (N = 128)** | | |
| Resonance frequency | 1.23 MHz | 18 kHz |
| Phase at resonance | -8.58° | 4.96° |
| Magnitude at resonance | 1114 Ω | 121 Ω |
| **Transmit impulse response (N = 64)** | | |
| Resonance frequency | 1.22 MHz | 26 kHz |
| Amplitude deviation at resonance | – | 6.9% |
| -6 dB FBW (967 kHz - 1.67 MHz) | 53% | 12% |
| -12 dB FBW (833 kHz - 3.23 MHz) | 175% | 32% |
| **Transmit impulse response, no matching layers (N = 16)** | | |
| Resonance frequency | 1.25 MHz | 26 kHz |
| -6 dB FBW (967 kHz - 1.43 MHz) | 39% | 3.2% |
| -12 dB FBW (867 kHz - 1.73 MHz) | 70% | 2.4% |
| **Beam pattern (N = 48)** | | |
| Elevational skew | 0.457° | 0.207° |
| Lateral skew | 1.169° | 0.834° |
| Elevational -6 dB beamwidth | 16.3 mm | 0.456 mm |
| Lateral -6 dB opening angle | 55.4° | 2.96° |
| **On-axis transmit-receive response (N = 256)** | | |
| Resonance frequency | 1.21 MHz | 7.1 kHz |
| Amplitude deviation at resonance | – | 7.9% |
| -6 dB FBW (1.08 MHz - 1.41 MHz) | 29% | 6.1% |
| -12 dB FBW (924 kHz - 1.58 MHz) | 54% | 4.9% |
| -40 dB FBW (528 kHz - 2.61 MHz) | 170% | 3.0% |
| Signal to noise ratio | 61.2 dB | 1.2 dB |
| Arrival time | 150.26 \(\mu\)s | 0.070 \(\mu\)s |
| **Off-axis transmit-receive response (N = 256)** | | |
| Receive cross talk | -37.1 dB | 6.0 dB |

Table 2: Summary statistics for the open-UST transducer ring array.
sizes of the elements. Figure 7C shows the distribution of -6 dB elevation beamwidth (at the ring array centre) and the -6 dB opening angle, extracted at the centroid frequency \(f_{c}\) from the elevational and directional responses respectively.
The nominal beamwidth of 16.3 mm at the ring array centre is relatively large, and may generate out-of-plane scattering artefacts if the images are reconstructed in 2D. However, in vivo data from a ring array with a 12 mm beamwidth has been successfully reconstructed using 2D FWI [23], and for the same system a 3D forward model including the finite elevation beam has been shown to reduce out-of-plane artefacts [29]. Therefore, the larger beamwidth of the open-UST system does not prevent its use as a research tool. The nominal beamwidth closely matches the predicted value of 16.4 mm at 1.22 MHz from the simulations in Section 2.2, demonstrating that the mean effective radiating length of the
Figure 8: **A:** Mean elevational response. **B**: Standard deviation, relative to maximum of mean. **C**: Mean far field directional response. **D**: Standard deviation, relative to maximum of mean.
Figure 7: Histograms showing the distribution of elevational skew angle (**A**), lateral skew angle (**B**), elevational beamwidth (**C**), lateral opening angle (**D**), on-axis SNR (**E**) and receive cross talk (**F**).
elements closely matches the ideal value of 10 mm. The nominal opening angle of 55.4\({}^{\circ}\) is smaller than the predicted value of 95.1\({}^{\circ}\) at 1.22 MHz from Figure 2A, which could be due to the strong lateral resonance modifying the radiating pressure, producing an effective source width larger than the physical extent of the elements [36]. However, this opening angle is suitable for imaging, since it is similar to the 43\({}^{\circ}\) (-10 dB at 2.6 MHz) [27] opening angle reported for another UST system.
The low IEV in ADR is summarised by the small standard deviations in elevational beamwidth and lateral opening angle of 0.456 mm and 2.96\({}^{\circ}\). This demonstrates that the radiating source pressure distribution was consistent between elements, meaning that the variation in matching layer geometry and acoustic properties was small.
### On-axis Transmit-Receive Response
Figure 9 shows the transmit-receive response for 256 on-axis transmit-receive element pairs, measured in a ring array configuration in deionised water. To acquire the watershot, each transmitter was driven with a 1-cycle 1.4045 MHz 80 V tri-state pulse, receiver data was measured on all other elements, and this was repeated for all transmitters. The transmit-receive bandwidth was 54 % at -12 dB, and 170 % at -40 dB, with a centre frequency of 1.21 MHz. For FWI reconstruction, energy is required at low frequencies to generate a starting model. Excellent reconstructions have been achieved for data with a 766 kHz -40 dB cutoff frequency, starting in the 50 kHz - 500 kHz range [5]. The open-UST -40 dB cutoff frequency is even lower at 528 kHz, and is therefore compatible with FWI methods.
The IEV in on-axis transmit-receive response of the transducer elements was low, with only a small amplitude deviation at resonance of 7.9 %, which captures the combined uniformity in transmit pressure, receive sensitivity and beam axis alignment. Table 2 shows a 0.07 \(\mu\)s standard deviation in arrival time for the on-axis signals, corresponding to a 104 \(\mu\)m deviation in acoustic path length or 8.4 % of a cycle at 1.21 MHz. For comparison, the position errors for a UST bowl array manufactured with a 10 \(\mu\)m tolerance were between 300 \(\mu\)m and 1 mm [37]. This shows that the low cost techniques used for PZT element alignment are accurate.
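As a consistency check on these numbers (assuming a water sound speed of roughly 1480 m/s), the quoted figures follow directly, to within rounding:

\[\Delta d=c_{w}\,\Delta t\approx 1480\ \mathrm{m/s}\times 0.070\ \mu\mathrm{s}\approx 104\ \mu\mathrm{m},\qquad\Delta t\,f_{c}\approx 0.070\ \mu\mathrm{s}\times 1.21\ \mathrm{MHz}\approx 8.5\,\%\ \mathrm{of\ a\ cycle}\]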
The distribution of on-axis SNR is shown in Figure 7E, calculated using the first 7.9 \(\mu\)s of each received signal and 3.9 \(\mu\)s of noise, with a nominal value of 61.2 dB. This does not include the insertion loss due to breast tissue, which can be as high as 37 dB at 3.2 MHz [38], or 12 dB at 1 MHz, assuming a linear frequency dependence. This would decrease the SNR to 49.2 dB, but this is still high and averaging could be used to improve SNR further.
### Directional Transmit-Receive Response
Figure 10 shows the off-axis transmit-receive response, defined as the peak value of the amplitude spectrum for the watershot dataset above, with the rays grouped into 5\({}^{\circ}\) bins based on their emission and incidence angle.
Although data is not available for all bins, Figure 10 shows that the transmit-receive directional response is smooth, suggesting that the transducer elements could be modelled using an ideal rectangular source during image reconstruction. The SNR is also reduced by up to 14.9 dB when the emission and incidence angles are greater than 45\({}^{\circ}\).
Figure 9: On-axis transmit-receive response. **A:** Measured voltages, aligned in time. **B:** Amplitude spectra. The mean and the entire measured range of the data are shown.
The IEV in off-axis response was low, with a maximum standard deviation of 8.3%, again showing that the beam axis skew angles are small, and that the effective radiating dimensions of the sources are uniform.
### Receive Crosstalk
Figure 11 shows an example of receive cross talk between channels in the watershot data due to capacitive coupling in the bundled ribbon cable. The receive cross talk distribution is shown in Figure 7F, defined as a power ratio between the cross talk and acoustic signal for each receive waveform. The mean cross talk was -37.1 dB, which did not affect the accuracy of the time of flight picking during the imaging experiment in Section 4. The coupling could be reduced using microcoaxial cables or individually shielded twisted pairs, but these are expensive, less widely available and would increase manufacture time. Further work is required to assess the effect of the receive cross talk on FWI reconstructions.
## 4 Ultrasound Tomography Imaging Experiment
A phantom UST experiment was performed to demonstrate suitability of the open-UST system for imaging research. The phantom was constructed to mimic the coronal plane of the breast, with a constant elevational cross section to reduce the out-of-plane errors arising from the finite elevation beamwidth of the transducer elements. Figure 12B-F shows the phantom manufacture, the tissue mimicking liquids and their sound speeds measured using through-transmission on homogeneous samples. Phantom and watershot UST datasets were acquired using the method described in Section 3.6.
Figure 12A shows the reconstructed sound speed, calculated using Kaczmarz's method of projections [39] to invert the relative time of flight data (this code has been made available on GitHub [40]). The adipose and fibroglandular regions, and all four of the 9 mm inclusions were resolved, but the three 5 mm inclusions and left fibroglandular boundary are distorted because the straight ray model does not capture refraction or diffraction. There is also a streaking artefact due to the relatively small number of elements. Nevertheless, this is a good proof of principle that the open-UST system is suitable for imaging.
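Kaczmarz's method [39] solves the linearised straight-ray system \(\mathbf{L}\vec{s}=\vec{t}\) (path-length matrix \(\mathbf{L}\), slowness perturbation \(\vec{s}\), relative time-of-flight data \(\vec{t}\)) by cyclically projecting the estimate onto each row's hyperplane. The sketch below is a generic illustration of the update rule, not the released reconstruction code [40]:

```python
import numpy as np

def kaczmarz(L, t, sweeps=50, relax=1.0):
    """Iteratively solve L @ s = t for the slowness perturbation s.

    L: (n_rays, n_pixels) straight-ray path-length matrix.
    t: (n_rays,) relative time-of-flight data.
    """
    s = np.zeros(L.shape[1])
    row_norms = np.einsum("ij,ij->i", L, L)   # squared L2 norm of each row
    for _ in range(sweeps):
        for i in range(L.shape[0]):
            if row_norms[i] > 0.0:
                # project s onto the hyperplane {x : L[i] @ x = t[i]}
                s += relax * (t[i] - L[i] @ s) / row_norms[i] * L[i]
    return s
```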
Figure 11: Example of receive crosstalk. A single aggressor channel is shown, but the crosstalk on the receptor channel is the superposition of the coupling to all other channels in the transducer module.
Figure 10: Transmit-receive directional response. **A:** Mean amplitude. **B:** Standard deviation, relative to maximum of mean.
## 5 Discussion and Summary
The total material cost of the 256-element transducer array including cables was £2k, which is very low. Users without access to a 3D-printer and vacuum chamber could purchase this equipment for \(<\)£5k. The cost could be reduced further by using a thin backing layer with a phase cancelling structure on the rear face [41] to reduce the required volumes of tungsten and epoxy. Also, the electromagnetic shielding could be omitted to reduce time and cost, and receive averaging could be used instead to reduce noise. The 4 month manufacture of the open-UST system is a short lead time for a transducer ring array, but adds staffing costs since the majority of the manufacture requires manual assembly. This could be addressed by omitting the matching layers, as discussed in Section 3.2. For this work, a commercial 256-channel DAQ was used, but lower cost alternatives are available [22], or a multiplexer [42] could be built to sequentially switch a pulser between each transmit channel and an oscilloscope between each receive channel, since this equipment is widely available.
The open-UST system has a similar cost and lead time to purchasing a pair of single element or clinical array transducers, and using a rotation stage to create a virtual array. However, these configurations have a significantly higher data acquisition time and mechanical complexity. Also, the open-UST system can be modified during the design phase, for example changing the ADR of the elements by adjusting their dimensions, which is not possible with off-the-shelf clinical probes, and would instead require expensive custom commercial arrays.
Due to its open-source design, the functionality of the open-UST system could be extended by adding temperature measurement, on-board multiplexing, or electrical impedance matching to the interconnect PCB. The impulse response could be modified by adjusting the thickness and acoustic properties of the matching and backing layers. However, further work is needed to create a publicly available database of the acoustic properties of various metal filler/polymer composites, to reduce the upfront time spent on tuning the compositions to achieve the desired properties. The PZT element size could also be modified, but Section 2.3 demonstrated that it is not straightforward to predict the resonance behaviour of small PZT elements from their dimensions alone, without using finite element analysis. The open-UST system could also be a useful starting point for the rapid prototyping of low cost transducer arrays for applications outside of breast UST, for example in ultrasound therapy, rewarming, or industrial non destructive testing.
The open-UST manufacture was designed to be accessible, without using specialist equipment. Tight manufacturing tolerances were achieved, but these depended heavily on calibrated offsets added to CAD models to compensate for systematic 3D-printing errors. Further work is needed to assess whether end users could replicate these results, without the experience gained during the prototyping phase.
Figure 12: **A:** Reconstructed sound speed. **B:** True phantom sound speed map and tissue mimicking liquids. **C-F:** Phantom manufacture using PET bottles and straws.
The nominal bandwidth, beam pattern and SNR are similar to other UST arrays and are compatible with FWI methods, and so the open-UST system is suitable for UST imaging. The smooth ADR of the transducer elements could be modelled by representing the elements as ideal rectangular sources [43] during FWI reconstruction, with dimensions chosen to best match simulated and measured watershot datasets. This removes the need for an individual source definition based on additional hydrophone measurements, simplifying the calibration for the user. The electro-mechanical impulse response could also be derived from Figure 9 using de-autoconvolution [44].
Section 3 showed that the IEV in ADR was low, that the on-axis position errors were small enough to be calibrated using simple time of flight methods [37], and that the beam axis skew angles were negligible. Therefore, users could model the transducers identically during image reconstruction, avoiding the need to characterise individual elements using hydrophone scans, which would add complexity. Only the ADR amplitude information was assessed in this paper, but since the image reconstruction was successful the IEV in the phase is also expected to be low. Further work is required to assess the reconstruction accuracy using FWI methods, in the case where the transducers are assumed to be identical.
The IEV in transmit-receive response was characterised for all of the elements in the array, and was similar to the IEV in the other characteristics. Therefore, the summary statistics in Table 2 calculated for an array subset are likely to reflect the acoustic performance distribution of the entire array. The IEV in electrical impedance, fractional bandwidth, opening angle and element position were similar to other UST systems manufactured using advanced equipment. This demonstrates that the low cost techniques used in the open-UST manufacturing framework also achieved high precision and low variation.
This paper presented open-UST: a manufacturing framework for a low cost transducer ring array. The acoustic performance and inter-element variation were evaluated, and a phantom experiment was carried out demonstrating the suitability of open-UST for imaging research. A manufacture guide has been made available online [12].
|
2301.09115 | Dressed bound states at chiral exceptional points | Atom-photon dressed states are a basic concept of quantum optics. Here, we
demonstrate that the non-Hermiticity of open cavity can be harnessed to form
the dressed bound states (DBS) and identify two types of DBS, the vacancy-like
DBS and Friedrich-Wintgen DBS, in a microring resonator operating at a chiral
exceptional point. With the analytical DBS conditions, we show that the
vacancy-like DBS occurs when an atom couples to the standing wave mode that is
a node of photonic wave function, and thus is immune to the cavity dissipation
and characterized by the null spectral density at cavity resonance. While the
Friedrich-Wintgen DBS can be accessed by continuously tuning the system
parameters, such as the atom-photon detuning, and evidenced by a vanishing Rabi
peak in emission spectrum, an unusual feature in the strong-coupling
anticrossing. We also demonstrate the quantum-optics applications of the
proposed DBS. Our work exhibits the quantum states control through
non-Hermiticity of open quantum system and presents a clear physical picture on
DBS at chiral exceptional points, which holds great potential in building
high-performance quantum devices for sensing, photon storage, and nonclassical
light generation. | Yuwei Lu, Haishu Tan, Zeyang Liao | 2023-01-22T12:51:34Z | http://arxiv.org/abs/2301.09115v1 | # Dressed bound states at chiral exceptional points
###### Abstract
Atom-photon dressed states are a basic concept of quantum optics. Here, we demonstrate that the non-Hermiticity of open cavity can be harnessed to form the dressed bound states (DBS) and identify two types of DBS, the vacancy-like DBS and Friedrich-Wintgen DBS, in a microring resonator operating at a chiral exceptional point. With the analytical DBS conditions, we show that the vacancy-like DBS occurs when an atom couples to the standing wave mode that is a node of photonic wave function, and thus is immune to the cavity dissipation and characterized by the null spectral density at cavity resonance. While the Friedrich-Wintgen DBS can be accessed by continuously tuning the system parameters, such as the atom-photon detuning, and evidenced by a vanishing Rabi peak in emission spectrum, an unusual feature in the strong-coupling anticrossing. We also demonstrate the quantum-optics applications of the proposed DBS. Our work exhibits the quantum states control through non-Hermiticity of open quantum system and presents a clear physical picture on DBS at chiral exceptional points, which holds great potential in building high-performance quantum devices for sensing, photon storage, and nonclassical light generation.
† Corresponding Author: [email protected]
## I Introduction
Dressed states are a hallmark of strong atom-photon interaction [1], which provide a basis for coherent control of quantum states and give rise to a rich variety of important technologies and applications, such as quantum sensing [2], entanglement transport [3; 4], photon blockade for quantum light generation [5; 6; 7], and many-body interaction for scalable quantum computing and quantum information processing [8; 9]. Dressed states with slow decay, i.e., narrow linewidth, are appealing in practical applications. Though the linewidth of dressed states is the average of the atomic and photonic components, it is often limited by the latter, since the linewidth of a quantum emitter (QE) is much smaller than that of the cavity in a cryogenic environment. Therefore, a natural approach to reduce the linewidth of dressed states is by means of a high-\(Q\) cavity, which, however, often comes at the price of large mode volume [10; 11] or requires elaborate design [12; 13; 14]. Furthermore, light trapping and release are time reversal processes in linear time-invariant systems, thus a cavity with high \(Q\) in general leads to low excitation efficiency, which is undesirable in practical applications. These disadvantages stimulate the exploration of alternative schemes to suppress the decay of dressed states.
Although leakage is inevitable for optical resonators, it also opens up new avenues for manipulating light-matter interaction by exploiting non-Hermitian degeneracies [15; 16], known as exceptional points. The presence of exceptional points renders exotic features to the system dynamics due to the reduced dimensionality of the underlying state space at exceptional points [17; 18; 19]. Particularly, previous studies have shown that the coalescence of counterclockwise (CCW) and clockwise (CW) modes in a whispering-gallery-mode (WGM) microcavity gives rise to a special type of exceptional point, called a chiral exceptional point (CEP) [20; 21], which exhibits an unprecedented degree of freedom in state control, such as quantum and optical states with chirality [17; 20; 22] and spontaneous emission enhancement associated with a squared Lorentzian response [23; 24; 18; 25].
In this work, we propose and identify the formation of dressed bound states (DBS) in an open microring resonator with a CEP, which we call a CEP cavity hereafter. A theoretical framework is established to unveil the origin and derive the analytical conditions of DBS. We show that DBS in a CEP cavity can be classified into two types, the vacancy-like DBS [26] and the Friedrich-Wintgen DBS [6; 27; 28; 29]. The vacancy-like DBS has the unique feature that its condition is independent of the atom-photon coupling strength, since the cavity mode the atom couples to is a node of the photonic wavefunction. By contrast, the DBS of Friedrich-Wintgen origin depends on the system parameters, such as the frequency detuning and coupling strength between different system components, which are required to fulfill the condition of destructive interference between two coupling pathways. We also discuss the characteristics of the spontaneous emission (SE) spectrum
and dynamics associated with DBS and demonstrate the corresponding quantum-optics applications.
## II Results and discussion
### Model and Theory
The CEP cavity we study is depicted in Fig. 1(a), where a WGM microring resonator is coupled to a semi-infinite waveguide with a perfect mirror (i.e., unity reflectivity) at the end. The mirror results in chiral coupling from the CCW mode to the CW mode and creates a CEP [23]. A linearly polarized QE couples to the CEP cavity with coupling strength \(g\). We assume that the QE is embedded inside the cavity, thus its coupling to free space via modes other than cavity modes is suppressed. The quantum dynamics of the cavity QED system is described by the extended cascaded quantum master equation (see Refs. [30; 31] and also Appendix A for detailed derivation)
\[\begin{split}\frac{d}{dt}\rho=&-i[H,\rho]+\kappa\mathcal{L}\left[c_{ccw}\right]\rho+\kappa\mathcal{L}\left[c_{cw}\right]\rho\\ &+\kappa\left(e^{i\phi}\left[c_{ccw}\rho,c_{cw}^{\dagger}\right]+e^{-i\phi}\left[c_{cw},\rho\,c_{ccw}^{\dagger}\right]\right)\end{split} \tag{1}\]
where \(\mathcal{L}[O]\rho=O\rho O^{\dagger}-\left\{O^{\dagger}O,\rho\right\}/2\) is the Liouvillian superoperator for dissipation of operator \(O\). The Hamiltonian is given by \(H=H_{0}+H_{I}\), where the free Hamiltonian \(H_{0}\) and the interaction Hamiltonian \(H_{I}\) read
\[H_{0}=\omega_{0}\sigma_{+}\sigma_{-}+\omega_{c}c_{ccw}^{\dagger}c_{ccw}+\omega_{c}c_{cw}^{\dagger}c_{cw} \tag{2}\]

\[H_{I}=g\left(c_{ccw}^{\dagger}\sigma_{-}+\sigma_{+}c_{ccw}\right)+g\left(c_{cw}^{\dagger}\sigma_{-}+\sigma_{+}c_{cw}\right) \tag{3}\]
where \(\sigma_{-}\) is the lowering operator of the QE, while \(c_{ccw}/c_{cw}\) is the bosonic annihilation operator for the CCW/CW mode. \(\omega_{0}\) and \(\omega_{c}\) are the transition frequency of the QE and the resonance frequency of the cavity modes, respectively. Considering the high-\(Q\) feature of WGM modes, the intrinsic decay of the cavity is omitted, thus its dissipation is determined by the evanescent coupling \(\kappa\) to the guided mode of the waveguide. The second line of Eq. (1) describes the chiral coupling whereby the CW mode is driven by the output field from the CCW mode, where \(\phi=2\beta L\) is the accumulated phase factor of light propagation, with \(\beta\) and \(L\) being the propagation constant of the waveguide and the distance between the waveguide-resonator junction and the mirror, respectively.
We consider the SE process that there is at most one photon in the system and the resonant QE-cavity cou
Figure 1: (a) Schematic of the CEP cavity where a WGM microring coupled to a QE and a waveguide with a mirror at the right end. (b) Illustration of the origin of vacancy-like DBS: the standing wave mode that the QE couples to is a node of wavefunction. (c) Illustration of the formation of Friedrich-Wintgen DBS via the destructive interference between the coupling pathways mediated by the QE and the waveguide. The CW mode is flipped to another CCW mode via mirror symmetry. Accordingly, the linearly polarized QE becomes circularly polarized.
pling (\(\omega_{0}=\omega_{c}\)). The equations of motion in the single-excitation subspace can be obtained from Eq. (1)
\[\frac{d}{dt}\vec{p}=-i\mathbf{M}_{c}\vec{p} \tag{4}\]
with \(\vec{p}=[\left\langle\sigma_{-}\right\rangle,\left\langle c_{ccw}\right\rangle,\left\langle c_{cw}\right\rangle]^{T}\) and the matrix \(\mathbf{M}_{c}\)
\[\mathbf{M}_{c}=\left[\begin{array}{ccc}\omega_{c}&g&g\\ g&\omega_{c}-i\frac{\kappa}{2}&0\\ g&-i\kappa e^{i\phi}&\omega_{c}-i\frac{\kappa}{2}\end{array}\right] \tag{5}\]
The emission spectrum is experimentally relevant and also critical to understand the quantum dynamics of a QE. Therefore, we investigate the spectrum properties of DBS via the SE spectrum of QE, which can be measured via fluorescence of QE and is defined as \(S(\omega)=\lim_{t\rightarrow\infty}\text{Re}\left[\int_{0}^{\infty}d\tau \left\langle\sigma_{+}(t+\tau)\sigma_{-}(t)\right\rangle e^{i\omega\tau}\right]\)[1, 32], where \(\left\langle\sigma_{+}(t+\tau)\sigma_{-}(t)\right\rangle\) can be calculated from the equations of single-time averages (Eqs. (4)-(5)) using the quantum regression theorem [1]
\[\frac{d}{d\tau}\left[\begin{array}{c}\left\langle\sigma_{+}(\tau)\sigma_{-}( 0)\right\rangle\\ \left\langle\sigma_{+}(\tau)c_{ccw}(0)\right\rangle\\ \left\langle\sigma_{+}(\tau)c_{cw}(0)\right\rangle\end{array}\right]=-i \mathbf{M}_{c}\left[\begin{array}{c}\left\langle\sigma_{+}(\tau)\sigma_{-}( 0)\right\rangle\\ \left\langle\sigma_{+}(\tau)c_{ccw}(0)\right\rangle\\ \left\langle\sigma_{+}(\tau)c_{cw}(0)\right\rangle\end{array}\right] \tag{6}\]
The above equations can be solved via the Laplace transform with the initial conditions \(\left\langle\sigma_{+}(0)\sigma_{-}(0)\right\rangle=1\), \(\left\langle\sigma_{+}(0)c_{ccw}(0)\right\rangle=0\), and \(\left\langle\sigma_{+}(0)c_{cw}(0)\right\rangle=0\). The SE spectrum of the QE is expressed as (see Appendix B for detailed derivation)
\[S(\omega)=\frac{1}{\pi}\frac{\Gamma(\omega)}{\left[\omega-\omega_{c}-\Delta( \omega)\right]^{2}+\left[\frac{\Gamma(\omega)}{2}\right]^{2}} \tag{7}\]
where \(\Gamma(\omega)=-2g^{2}\,\text{Im}[\chi(\omega)]\) is the local coupling strength and \(\Delta(\omega)=g^{2}\,\text{Re}[\chi(\omega)]\) denotes the photonic Lamb shift, with \(\chi(\omega)\) being the response function of CEP cavity
\[\chi(\omega)=\frac{2}{(\omega-\omega_{c})+i\frac{\kappa}{2}}-\frac{i\kappa e^ {i\phi}}{\left[(\omega-\omega_{c})+i\frac{\kappa}{2}\right]^{2}} \tag{8}\]
The SE dynamics of QE can be retrieved from \(\mathcal{F}[S(\omega)]\), the Fourier transform of SE spectrum.
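For concreteness, Eqs. (7)-(8) can be evaluated directly; the short sketch below uses illustrative parameters in a frame where \(\omega_{c}=0\) (the frequency grid deliberately avoids the removable \(0/0\) point exactly at \(\omega=\omega_{c}\)):

```python
import numpy as np

def se_spectrum(w, wc=0.0, g=0.5, kappa=1.0, phi=2.0 * np.pi):
    """Spontaneous-emission spectrum, Eq. (7), with the CEP response of Eq. (8)."""
    d = (w - wc) + 0.5j * kappa
    chi = 2.0 / d - 1j * kappa * np.exp(1j * phi) / d**2   # Eq. (8)
    gamma = -2.0 * g**2 * np.imag(chi)                     # local coupling strength
    delta = g**2 * np.real(chi)                            # photonic Lamb shift
    return gamma / np.pi / ((w - wc - delta)**2 + (gamma / 2.0)**2)

w = np.linspace(-4.0, 4.0, 2000)   # even count keeps w = wc off the grid
S = se_spectrum(w)   # at phi = 2n*pi: a symmetric doublet with no central peak
```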
Eqs. (1)-(8) constitute the basic theoretical framework for studying the cavity quantum electrodynamics in CEP cavity. In the following subsections, we derive the conditions of single-photon DBS in CEP cavity based on Eqs. (4)-(5).
### Vacancy-like dressed bound state
The coupled cavity is the simplest model that supports the vacancy-like DBS, where the QE interacts with one of two cavities [26]. Although at first glance our model differs from the coupled cavity proposed in Ref. [26], the connection becomes clear by changing the basis of cavity modes. To find its condition in the CEP cavity, we rewrite \(c_{ccw}\) and \(c_{cw}\) in terms of the operators that represent the standing wave modes \(c_{1}\) and \(c_{2}\) [33]
\[c_{cw}=\frac{1}{\sqrt{2}}\left(c_{1}+c_{2}\right),\quad c_{ccw}=\frac{1}{\sqrt{2}}\left(c_{1}-c_{2}\right) \tag{9}\]
Substituting Eq. (9) into Eq. (5), we obtain \(d\vec{s}/dt=-i\mathbf{M}_{s}\vec{s}\) with \(\vec{s}=\left[\left\langle\sigma_{-}\right\rangle,\left\langle c_{1}\right\rangle,\left\langle c_{2}\right\rangle\right]^{T}\). The matrix \(\mathbf{M}_{s}\) takes the form
\[\mathbf{M}_{s}=\left[\begin{array}{ccc}\omega_{c}&\sqrt{2}g&0\\ \sqrt{2}g&\omega_{c}-i\frac{\kappa(1+e^{i\phi})}{2}&i\frac{\kappa}{2}e^{i\phi }\\ 0&-i\frac{\kappa}{2}e^{i\phi}&\omega_{c}-i\frac{\kappa(1-e^{i\phi})}{2}\end{array}\right] \tag{10}\]
It shows that the QE is decoupled from the standing wave mode \(c_{2}\). The vacancy-like DBS forms when the decay of \(c_{2}\) is vanishing, i.e., \(\phi=2n\pi\) (\(n\) is an integer). In this case, the eigenstate is
\[\left|\psi_{VL}\right\rangle=\left(\frac{-i\kappa}{\sqrt{8g^{2}+\kappa^{2}}},0,\frac{2\sqrt{2}g}{\sqrt{8g^{2}+\kappa^{2}}}\right)^{T} \tag{11}\]
with energy \(\omega_{VL}=\omega_{c}\), the same as bare QE. It indicates that the photon cannot be found at \(c_{1}\) since its wavefunction is zero. As a consequence, the DBS can exist despite
Figure 2: Spectral density of a realistic CEP cavity with parameters: Outer radius \(R=5\mu\)m, width \(w=0.25\mu\)m, refractive index \(n_{c}=3.47\), edge-to-edge separation to the waveguide \(d=0.2\mu\)m. The width of waveguide is \(d\) and the mirror is made of 100-nm thick silver. The refractive index of background medium is \(n_{b}=1.44\). The blue circles plot the numerical result of Lorentz cavity (CEP cavity without the mirror at the end), while the blue solid line shows the fitting result with Lorentz spectral function. The pink solid line and circles represent the analytical and numerical results of CEP cavity, respectively. The insets show the electric field distribution of vacancy-like DBS.
the presence of cavity dissipation, a feature not reported in the previous work [26]. Accordingly, the standing wave mode \(c_{1}\) is called the vacancy cavity. Fig. 1(b) illustrates the concept of the vacancy-like DBS in our model.
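A short check (our own, with arbitrary parameter values) confirms that Eq. (11) is indeed an eigenvector of the matrix in Eq. (10) with the purely real eigenvalue \(\omega_{c}\) when \(\phi=2n\pi\):

```python
import numpy as np

g, kappa, wc, phi = 0.7, 1.0, 2.0, 0.0   # phi = 2*n*pi
Ms = np.array([
    [wc,           np.sqrt(2)*g,                               0],
    [np.sqrt(2)*g, wc - 0.5j*kappa*(1 + np.exp(1j*phi)), 0.5j*kappa*np.exp(1j*phi)],
    [0,           -0.5j*kappa*np.exp(1j*phi),            wc - 0.5j*kappa*(1 - np.exp(1j*phi))],
])                                                        # Eq. (10)
psi = np.array([-1j*kappa, 0, 2*np.sqrt(2)*g]) / np.sqrt(8*g**2 + kappa**2)  # Eq. (11)
print(np.allclose(Ms @ psi, wc * psi))                    # True: lossless eigenvalue wc
```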
The existence of the vacancy-like DBS can be confirmed by inspecting the spectral density of the CEP cavity, which is given by \(J(\omega)=\text{Re}\int_{-\infty}^{+\infty}d\tau e^{i\omega\tau}2g^{2}\left<c_{1 }^{\dagger}(\tau)c_{1}(0)\right>\) for \(\phi=2n\pi\)[34; 35], where the two-time correlation \(\left<c_{1}^{\dagger}(\tau)c_{1}(0)\right>\) can be calculated in a similar fashion to \(\left<\sigma_{+}(\tau)\sigma_{-}(0)\right>\) using the quantum regression theorem. With the initial conditions \(\left<c_{1}^{\dagger}(0)c_{1}(0)\right>=1\) and \(\left<c_{1}^{\dagger}(0)c_{2}(0)\right>=0\), the spectral density can be obtained analytically
\[J(\omega)=\frac{2g^{2}\kappa}{\pi}\left[\frac{\omega-\omega_{c}}{\left(\omega -\omega_{c}\right)^{2}+\left(\frac{\kappa}{2}\right)^{2}}\right]^{2} \tag{12}\]
It indicates that on resonance (\(\omega=\omega_{c}\)) the spectral density is zero, implying a null electric field amplitude at the QE location. Physically, this means that there is no available channel for the QE to decay, consistent with the nature of the vacancy-like DBS. Fig. 2 compares the analytical spectral density of a realistic CEP cavity (pink solid line) with the numerical results obtained from electromagnetic simulations (pink circles), where good agreement can be seen. The insets of Fig. 2 show the electric field distribution at \(J(\omega)=0\): the QE is located at a node of the cavity modes, and thus decoupled from \(c_{1}\), in contrast to the conventional Lorentz cavity (blue line and circles), i.e., the CEP cavity without the mirror, where the QE location is exactly the antinode of the standing wave mode. We thus understand that in the CEP cavity, the physical origin of the vacancy-like DBS can be interpreted as a result of the destructive interference between the cavity field of the CCW mode and the reflected field of the CW mode.
Fig. 3(a) shows that the SE spectrum is a triplet when detuned from the DBS, with a Fano-type lineshape around the cavity resonance. As the QE energy approaches the cavity resonance, the central peak in the SE spectrum becomes sharper and higher; on resonance (\(\omega_{0}=\omega_{c}\)) the central peak disappears, implying the formation of the vacancy-like DBS. In this case, the SE spectrum exhibits a symmetrical Rabi splitting with a width of approximately \(\sqrt{2}g\) (blue line).
Fig. 3(b) plots the time evolution of the population of the excited QE. It can be seen that the population of the QE can be fractionally trapped for various \(g/\kappa\). As the eigenstate \(\left|\psi_{VL}\right>\) indicates, the steady-state population remains finite but declines as \(g\) increases due to the stronger population transfer from the QE to the cavity. By contrast, the population of \(c_{1}\) is depleted at the steady state (blue dashed line), as expected.
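The trapping can be reproduced from the single-excitation amplitude equations (the same non-Hermitian \(3\times 3\) system used for the correlations in Appendix B). The following SciPy sketch is our own illustration with arbitrary parameters; the printed comparison value, \(\left[\kappa^{2}/(8g^{2}+\kappa^{2})\right]^{2}\), is the long-time QE population for this resonant initial condition, which the integration confirms.

```python
import numpy as np
from scipy.integrate import solve_ivp

g, kappa = 0.5, 1.0                      # units of kappa; resonant QE (w0 = wc = 0)
phi = 0.0                                # vacancy-like DBS condition
M = np.array([[0.0, g, g],
              [g, -0.5j*kappa, 0],
              [g, -1j*kappa*np.exp(1j*phi), -0.5j*kappa]])

sol = solve_ivp(lambda t, psi: -1j*(M @ psi), (0, 30/kappa),
                np.array([1, 0, 0], dtype=complex),
                t_eval=np.linspace(0, 30/kappa, 600))
pop_qe = np.abs(sol.y[0])**2             # excited-QE population vs time
print(pop_qe[-1], (kappa**2/(8*g**2 + kappa**2))**2)  # plateau ~ 0.111 for g = kappa/2
```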
Since the vacancy-like DBS occurs in the case of resonant QE-cavity coupling, it is beneficial for numerous quantum-optics applications, especially those involving cavity-mediated energy transfer, such as spontaneous entanglement generation (SEG) between qubits [36; 4; 37]. It is straightforward to extend our model to the multi-QE case by replacing \(H\) in Eq. (1) with the multi-QE Hamiltonian \(H^{M}=H_{0}^{M}+H_{I}^{M}\), where \(H_{0}^{M}=\omega_{0}\sum_{i}\sigma_{+}^{(i)}\sigma_{-}^{(i)}+\omega_{c}c_{ccw}^{\dagger}c_{ccw}+\omega_{c}c_{cw}^{\dagger}c_{cw}\) and \(H_{I}^{M}=g\sum_{i}\left(c_{ccw}^{\dagger}\sigma_{-}^{(i)}+\sigma_{+}^{(i)}c_{ccw}\right)+g\sum_{i}\left(c_{cw}^{\dagger}\sigma_{-}^{(i)}+\sigma_{+}^{(i)}c_{cw}\right)\). With an initially excited qubit, the generated entanglement between two qubits is quantified by the concurrence \(C(t)=2\left|C_{eg}(t)C_{ge}^{*}(t)\right|\)[36; 38], where \(C_{eg}(t)\) and \(C_{ge}(t)\) are the probability amplitudes of the two single-excitation states with one qubit in the excited state and the other in the ground state (a detailed derivation is given in Appendix C). The inset of Fig. 3(c) illustrates SEG mediated by the CEP cavity, where long-distance entanglement can be generated between two qubits. As shown in Fig. 3(c), higher and faster steady-state entanglement is achieved as \(g\) increases, reaching the maximum of 0.5 at \(g/\kappa=1\). Since the \(Q\) factor of a WGM cavity is
Figure 3: (a) SE spectrum versus the QE-cavity detuning \(\Delta\omega_{0c}=\omega_{0}-\omega_{c}\) for \(g=\kappa\). The white circle in the inset indicates the vacancy-like DBS. (b) and (c) SE dynamics and dynamical concurrence of vacancy-like DBS with an excited QE for various \(g/\kappa\), respectively. The blue dashed line in (b) shows the population of \(c_{1}\) for \(g/\kappa=1/2\). The inset in (c) illustrates the configuration of SEG.
typically \(10^{5}\) at near-infrared wavelengths [20], the vacancy-like DBS allows for fast and near-perfect entanglement without requiring a demanding coupling strength between the qubits and the cavity. In addition, Figs. 3(b) and (c) present opposite trends for large \(g\). This indicates that the strong population transfer from the QE to the cavity is unfavourable for population trapping of a single QE, but leads to efficient cavity-mediated QE-QE interaction, and is thus beneficial for achieving SEG with long-lived entanglement.
### Friedrich-Wintgen dressed bound state
Different from the vacancy-like DBS, the Friedrich-Wintgen DBS originates from the destructive interference of two coupling pathways, one mediated by the QE and the other by the waveguide, as Fig. 1(c) depicts. To derive the condition for the Friedrich-Wintgen DBS, we recast \(\mathbf{M}_{c}\) in the following form [39]
\[\mathbf{M}_{c}=H_{B}-i\Gamma \tag{13}\]
with the Hermitian part giving rise to real energy for DBS
\[H_{B}=\left[\begin{array}{ccc}\omega_{c}&g&g\\ g&\omega_{c}&i\frac{\kappa}{2}e^{-i\phi}\\ g&-i\frac{\kappa}{2}e^{i\phi}&\omega_{c}\end{array}\right] \tag{14}\]
and the dissipative operator governing the imaginary part of eigenenergies
\[\Gamma=D^{\dagger}D=\left[\begin{array}{ccc}0&0&0\\ 0&\frac{\kappa}{2}&\frac{\kappa}{2}e^{-i\phi}\\ 0&\frac{\kappa}{2}e^{i\phi}&\frac{\kappa}{2}\end{array}\right] \tag{15}\]
Subsequently, we can determine the coupling matrix \(D=\left(0,\sqrt{\kappa/2},\sqrt{\kappa/2}e^{-i\phi}\right)\) and introduce an unnormalized null vector of \(D\), \(\ket{\psi_{0}}=\left(\alpha,-e^{-i\phi},1\right)^{T}\), satisfying \(D\ket{\psi_{0}}=0\), where \(\alpha\) is an undetermined coefficient. The Friedrich-Wintgen DBS appears when \(\ket{\psi_{0}}\) fulfills \(H_{B}\ket{\psi_{0}}=\omega_{FW}\ket{\psi_{0}}\). The solutions yield the energy and condition of Friedrich-Wintgen DBS
\[\omega_{FW}=\omega_{c}\pm\frac{\sqrt{8g^{2}-\kappa^{2}}}{2} \tag{16}\]
\[\phi_{FW}=-i\ln\left(-\frac{\left(4g^{2}-\kappa^{2}\right)\pm i\kappa\sqrt{8 g^{2}-\kappa^{2}}}{4g^{2}}\right) \tag{17}\]
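A quick numerical sanity check of Eqs. (16)-(17) (our own sketch; the coefficient \(\alpha\) follows from the first row of the eigenvalue problem):

```python
import numpy as np

g, kappa, wc = 1.0, 1.0, 0.0
disc = np.sqrt(8*g**2 - kappa**2 + 0j)
w_fw = wc + disc.real/2                                            # Eq. (16), "+" branch
phi = (-1j*np.log(-((4*g**2 - kappa**2) + 1j*kappa*disc)/(4*g**2))).real  # Eq. (17)

HB = np.array([[wc, g, g],
               [g, wc, 0.5j*kappa*np.exp(-1j*phi)],
               [g, -0.5j*kappa*np.exp(1j*phi), wc]])               # Eq. (14)
alpha = g*(np.exp(-1j*phi) - 1)/(wc - w_fw)
psi0 = np.array([alpha, -np.exp(-1j*phi), 1])
D = np.array([0, np.sqrt(kappa/2), np.sqrt(kappa/2)*np.exp(-1j*phi)])
print(np.allclose(D @ psi0, 0), np.allclose(HB @ psi0, w_fw*psi0))  # True True
```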
Fig. 4(a) plots \(\phi_{FW}\) versus \(g/\kappa\), showing that \(\phi_{FW}\) tends to \(\pi\) for large \(g\). With \(\phi_{FW}\), only two peaks are seen in the SE spectrum; the Rabi peak corresponding to the Friedrich-Wintgen DBS is invisible due to its vanishing linewidth, as the inset of Fig. 4(a)
Figure 4: (a) Condition of Friedrich-Wintgen DBS versus \(g/\kappa\). The red star indicates the parameters for (b). The inset shows the corresponding SE spectrum, where the white dashed lines track the real eigenenergies. (b) Logarithmic plot of SE spectrum versus the QE-cavity detuning \(\Delta\omega_{0c}\). SE spectra for \(\Delta\omega_{0c}=-0.4g\), \(0\), and \(0.4g\) are shown in (c). The black arrow indicates the energy of vanishing Rabi peak. (d) and (e) The real and imaginary parts of eigenenergies versus \(g/\kappa\), respectively, for CEP cavity with Friedrich-Wintgen DBS (solid lines) and Lorentz cavity (circles). Note that the blue circles are overlapped with the pink in (e). (f) SE dynamics with Friedrich-Wintgen DBS for various \(g/\kappa\).
shows. On the other hand, by continuously varying the QE-cavity detuning, we observe an unusual strong-coupling anticrossing behavior, shown in Figs. 4(b) and (c), where the linewidth of one of the bands is narrower and the peak disappears at a specific frequency (on resonance here; see the green circle in Fig. 4(b) and the pink line in Fig. 4(c)), a signature of Friedrich-Wintgen-type bound states [27; 40]. The real and imaginary parts of the eigenenergies versus \(g/\kappa\) are plotted in Figs. 4(d) and (e), respectively, showing that the real energies of the CEP cavity are nearly the same as those of the Lorentz cavity, while the imaginary parts are dissimilar. It is worth noting that the linewidth of the remaining Rabi peak narrows significantly for \(g>\kappa\) compared to the Lorentz cavity, and approaches zero as \(g\) gradually increases; see the pink solid line in Fig. 4(e). The corresponding linewidth is found to be \(\sim\left(1+\cos\left(\phi_{FW}\right)\right)/2\) for \(g\gg\kappa\). The linewidth narrowing of the dressed states is accompanied by the suppression of the decay of the Rabi oscillation in the time domain; see the SE dynamics for various \(g/\kappa\) shown in Fig. 4(f). Therefore, for the Friedrich-Wintgen DBS, a large \(g\) is beneficial for achieving a long decoherence time.
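The vanishing decay rate can also be traced numerically through the eigenvalues of \(\mathbf{M}_{c}\) evaluated at \(\phi_{FW}\); in this short sketch (ours, resonant QE, arbitrary units), one imaginary part is exactly zero at the Friedrich-Wintgen DBS while the others remain finite:

```python
import numpy as np

kappa, wc = 1.0, 0.0
for g in (0.5, 1.0, 2.0, 4.0):
    disc = np.sqrt(8*g**2 - kappa**2 + 0j)
    phi = (-1j*np.log(-((4*g**2 - kappa**2) + 1j*kappa*disc)/(4*g**2))).real
    Mc = np.array([[wc, g, g],
                   [g, wc - 0.5j*kappa, 0],
                   [g, -1j*kappa*np.exp(1j*phi), wc - 0.5j*kappa]])
    print(g, np.round(np.linalg.eigvals(Mc).imag, 6))  # one entry is 0: the DBS
```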
Though both are DBS in the same cavity QED system, there are two major differences between the vacancy-like DBS and the Friedrich-Wintgen DBS. One difference is that the steady-state population of the former depends on the coupling strength \(g\) (see Fig. 3(b)) while that of the latter does not, as Fig. 4(f) shows. We find that half the energy can be trapped in the system via the Friedrich-Wintgen DBS and the steady-state population of the QE is \(1/4\) irrespective of \(g\). Another difference lies in the energy of the DBS. The energy of the vacancy-like DBS is equal to that of the bare QE for any \(g\), while the Friedrich-Wintgen DBS occurs at one of the anharmonic energy levels, whose energy spacing is proportional to \(g\). This feature offers the Friedrich-Wintgen DBS unique potential for single-photon generation utilizing the photon blockade effect [42; 43; 5]. Fig. 5 compares the performance of single-photon blockade of the CEP cavity with that of the Lorentz cavity. It shows that the best performance is achieved at the Friedrich-Wintgen DBS (vertical dashed line), where both the single-photon efficiency \(I_{c}=\left\langle c_{cw}^{\dagger}c_{cw}\right\rangle\) and the photon correlation \(g^{(2)}(0)=\left\langle c_{cw}^{\dagger}c_{cw}^{\dagger}c_{cw}c_{cw}\right\rangle/I_{c}^{2}\) manifest a remarkable enhancement by over two orders of magnitude compared to the dressed states in a conventional Lorentz cavity (dashed lines).
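For readers who want a Fig. 5-style calculation, the sketch below assembles the extended cascaded QME of Appendix A in QuTiP, adds the drive and QE dissipation stated in the caption of Fig. 5, and evaluates \(I_{c}\) and \(g^{(2)}(0)\) at steady state. The Fock truncation, variable names, and detuning grid are our own choices, and we write the cascaded term with the standard QuTiP spre/spost superoperators; this is an illustrative sketch, not the exact script behind Fig. 5.

```python
import numpy as np
from qutip import (destroy, qeye, tensor, liouvillian, spre, spost,
                   steadystate, expect)

N = 4                                          # Fock truncation per mode (assumption)
sm = tensor(destroy(2), qeye(N), qeye(N))      # QE lowering operator
a = tensor(qeye(2), destroy(N), qeye(N))       # CCW mode
b = tensor(qeye(2), qeye(N), destroy(N))       # mirrored CW mode

kappa = 1.0
g, gamma = kappa/2, kappa/20
Omega = 1e-2*gamma
disc = np.sqrt(8*g**2 - kappa**2 + 0j)
phi = (-1j*np.log(-((4*g**2 - kappa**2) + 1j*kappa*disc)/(4*g**2))).real  # Eq. (17)

def blockade(delta_cL):
    # Hermitian rotating-frame Hamiltonian with the drive (resonant QE assumed)
    H = (delta_cL*(sm.dag()*sm + a.dag()*a + b.dag()*b)
         + g*(sm.dag()*a + a.dag()*sm) + g*(sm.dag()*b + b.dag()*sm)
         + Omega*(sm + sm.dag()))
    L = liouvillian(H, [np.sqrt(kappa)*a, np.sqrt(kappa)*b, np.sqrt(gamma)*sm])
    # cascaded CCW -> CW coupling, last line of the extended cascaded QME
    L += kappa*(np.exp(-1j*phi)*(spre(b)*spost(a.dag()) - spost(a.dag()*b))
                - np.exp(1j*phi)*(spre(b.dag()*a) - spre(a)*spost(b.dag())))
    rho = steadystate(L)
    Ic = expect(b.dag()*b, rho)
    return Ic, expect(b.dag()*b.dag()*b*b, rho)/Ic**2   # (I_c, g2(0))

for d in np.linspace(-1.0, 1.0, 9):                      # scan laser detuning
    print(round(d, 2), blockade(d))
```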
## III Conclusion
In conclusion, we demonstrate and unveil the origin of DBS in a prototypical microring resonator operating at a CEP, classified into two types: the vacancy-like and the Friedrich-Wintgen-type bound states. The DBS studied in this work exists in the single-photon manifold, while the principles can be applied to higher-excitation manifolds for exploring multi-photon DBS. Besides the SEG and single-photon generation demonstrated here, we envision prominent advantages of DBS in diverse applications, such as quantum logic gate operation and quantum sensing, due to the long decoherence time and extremely sharp lineshape of DBS. We believe our work not only deepens the understanding of DBS at CEP, but also paves the way for harnessing non-Hermitian physics to manipulate quantum states in a novel way.
###### Acknowledgements.
Y. Lu acknowledges the support of the National Natural Science Foundation of China (Grant No. 62205061) and the Postdoctor Startup Project of Foshan (Grant No. BKS205043). Z. Liao is supported by the National Key R&D Program of China (Grant No. 2021YFA1400800) and the Natural Science Foundations of Guangdong (Grant No. 2021A1515010039).
## Appendix A Derivation of the extended cascaded quantum master equation
The extended cascaded quantum master equation (QME) in Eq. (1) can be derived by tracing out the waveguide modes based on the model depicted in Fig. 1(c). The system Hamiltonian including the waveguide modes is written as (\(\hbar=1\))
\[H_{S}=H+H_{B}+H_{SB} \tag{1}\]
Figure 5: Comparison of single-photon blockade in the CEP cavity (circles for numerical results and solid lines for analytical results) with the Lorentz cavity (dashed lines). The results are obtained by implementing a driving Hamiltonian \(H_{d}=\Omega\left(e^{-i\omega_{L}t}\sigma_{+}+e^{i\omega_{L}t}\sigma_{-}\right)\) and a Liouvillian superoperator for QE dissipation \(\gamma\mathcal{L}\left[\sigma_{-}\right]\rho\) in Eq. (1). The numerical results are obtained using QuTiP [41]. The parameters used in the simulations are \(g=\kappa/2\), \(\gamma=\kappa/20\), and \(\Omega=10^{-2}\gamma\). The analytical expressions of \(I_{c}\) and \(g^{(2)}(0)\) are derived in Appendix D. The vertical dashed line indicates \(\omega_{FW}\) given by Eq. (16).
where \(H=H_{0}+H_{I}\) is given in Eq. (1). \(H_{B}\) is the free Hamiltonian of waveguide
\[H_{B}=\int d\omega\omega b_{R}^{\dagger}b_{R} \tag{10}\]
and \(H_{SB}\) describes the Hamiltonian of cavity-waveguide interaction
\[H_{SB}=i\sum_{j=ccw,cw}\int d\omega\sqrt{\frac{\kappa}{2\pi}}b_{R}^{\dagger}e^{ -ikx_{j}}c_{j}+H.c. \tag{11}\]
where \(b_{R}\) is the bosonic annihilation operator of the right-propagating waveguide mode with frequency \(\omega\) and wave vector \(k=\omega_{c}/v\), with \(v\) being the group velocity. \(x_{ccw}\) and \(x_{cw}\) are the locations of the CCW mode and the mirrored CW mode. Applying the transformation \(\widetilde{H}=UHU^{\dagger}-idU/dtU^{\dagger}\) with \(U=\exp\left[i\left(\omega_{c}\sum_{j=ccw,cw}c_{j}^{\dagger}c_{j}+\int d\omega \omega b_{R}^{\dagger}b_{R}\right)t\right]\), we have
\[\widetilde{H}_{SB}(t)=i\sum_{j=ccw,cw}\int d\omega\sqrt{\frac{\kappa}{2\pi}}b_ {R}^{\dagger}e^{i(\omega-\omega_{c})t}e^{-i\omega x_{j}/v}c_{j}+H.c. \tag{12}\]
The equation of motion of \(b_{R}\) can be obtained from the Heisenberg equation
\[\frac{d}{dt}b_{R}(t)=\sum_{j=ccw,cw}\sqrt{\frac{\kappa}{2\pi}}c_{j}e^{i(\omega -\omega_{c})t}e^{-i\omega x_{j}/v} \tag{13}\]
The above equation can be formally integrated to obtain
\[b_{R}(t)=\sum_{j=ccw,cw}\int_{0}^{t}d\tau\sqrt{\frac{\kappa}{2\pi}}c_{j}e^{i( \omega-\omega_{c})\tau}e^{-i\omega x_{j}/v} \tag{14}\]
where we have taken \(b_{R}(0)=0\) since the waveguide is initially in the vacuum state. On the other hand, the equation of motion of arbitrary operator \(O\) is given by
\[\frac{d}{dt}O(t)=\sum_{j=ccw,cw}\int d\omega\sqrt{\frac{\kappa}{2\pi}}\left\{b _{R}^{\dagger}(t)e^{i(\omega-\omega_{c})t}e^{-i\omega x_{j}/v}\left[O(t),c_{j} (t)\right]-\left[O(t),c_{j}^{\dagger}(t)\right]b_{R}(t)e^{-i(\omega-\omega_{c })t}e^{i\omega x_{j}/v}\right\} \tag{15}\]
Substituting \(b_{R}(t)\) into the above equation, we have
\[\begin{split}\frac{d}{dt}O(t)=\frac{\kappa}{2\pi}\sum_{j,l=ccw,cw }&\int_{0}^{t}d\tau\int d\omega\left\{e^{i(\omega-\omega_{c})(t- \tau)}e^{-i\omega x_{j}/v}c_{l}^{\dagger}(\tau)\left[O(t),c_{j}(t)\right]\right. \\ &\left.-\left[O(t),c_{j}^{\dagger}(t)\right]c_{l}(\tau)e^{-i( \omega-\omega_{c})(t-\tau)}e^{i\omega x_{jl}/v}\right\}\end{split} \tag{16}\]
where \(x_{jl}=x_{j}-x_{l}\). We apply the Markov approximation by assuming the time delay \(x_{jl}/v\) between the CCW mode and the mirrored CW mode can be neglected. Therefore,
\[\begin{split}\frac{\kappa}{2\pi}\sum_{l=ccw,cw}\int_{0}^{t}d\tau \int d\omega e^{i(\omega-\omega_{c})(t-\tau)}e^{-i\omega x_{jl}/v}c_{l}^{ \dagger}(\tau)=\kappa\sum_{l=ccw,cw}\int_{0}^{t}d\tau\delta\left(t-\frac{x_{jl }}{v}-\tau\right)e^{-ikx_{jl}}c_{l}^{\dagger}(\tau)\\ \approx\frac{\kappa}{2}c_{j}^{\dagger}(t)+\kappa\sum_{l=ccw,cw} \Theta\left(t-\frac{x_{jl}}{v}\right)e^{-ikx_{jl}}c_{l}^{\dagger}(t)\end{split} \tag{17}\]
where \(x_{jl}>0\) and \(\Theta(t)\) is the step function. With Eq. (17) and taking the averages of Eq. (16), we have
\[\begin{split}\frac{d}{dt}\langle O(t)\rangle=\frac{\kappa}{2}& \sum_{j=ccw,cw}\left\{\left\langle c_{j}^{\dagger}(t)\left[O(t),c_{j}(t) \right]\right\rangle-\left\langle\left[O(t),c_{j}^{\dagger}(t)\right]c_{j}(t) \right\rangle\right\}\\ &+\kappa\sum_{j,l=ccw,cw,j\neq l}\left\{e^{-ikx_{jl}}\left\langle c _{l}^{\dagger}(t)\left[O(t),c_{j}(t)\right]\right\rangle-e^{ikx_{jl}}\left\langle \left[O(t),c_{j}^{\dagger}(t)\right]c_{l}(t)\right\rangle\right\}\end{split} \tag{18}\]
Since \(\langle O(t)\rangle=\text{Tr}[O(t)\rho(0)]=\text{Tr}[O\rho(t)]\), we can simplify the averages of operators in the above equation by using the cyclic property of trace. For example,
\[\left\langle\left[O(t),c_{j}^{\dagger}(t)\right]c_{j}(t)\right\rangle=\text{Tr} \left[Oc_{j}^{\dagger}c_{j}\rho(t)-c_{j}^{\dagger}Oc_{j}\rho(t)\right]=\text{ Tr}\left[Oc_{j}^{\dagger}c_{j}\rho(t)-Oc_{j}\rho(t)c_{j}^{\dagger}\right]= \text{Tr}\left\{O\left[c_{j}^{\dagger},c_{j}\rho(t)\right]\right\} \tag{11}\]
Therefore, we can obtain a QME in the following form
\[\begin{split}&\frac{d}{dt}\rho(t)=-i[H,\rho(t)]+\frac{\kappa}{2} \sum_{j=ccw,cw}\left\{\left[c_{j},\rho(t)c_{j}^{\dagger}\right]-\left[c_{j}^{ \dagger},c_{j}\rho(t)\right]\right\}\\ &\quad+\kappa\sum_{j,l=ccw,cw,j\neq l}\left\{e^{-ikx_{jl}}\left[ c_{j},\rho(t)c_{l}^{\dagger}\right]-e^{ikx_{jl}}\left[c_{j}^{\dagger},c_{l} \rho(t)\right]\right\}\end{split} \tag{12}\]
Note that \(kx_{jl}=\phi\), and thus \(j=cw\) and \(l=ccw\) in the third term on the right-hand side. In addition, the second term on the right-hand side can be expanded and rewritten using the Liouvillian superoperator. We thus arrive at the extended cascaded QME in Eq. (1).
## Appendix B Derivation of the spontaneous emission spectrum
The spontaneous emission (SE) spectrum, also called the polarization spectrum, reflects the local dynamics of a quantum emitter (QE). The SE spectrum is given by \(S(\omega)=\lim_{t\rightarrow\infty}2\,\text{Re}\left[\int_{0}^{\infty}d\tau \left\langle\sigma_{+}(t+\tau)\sigma_{-}(t)\right\rangle e^{i\omega\tau}\right]\), where the correlation \(\left\langle\sigma_{+}(t+\tau)\sigma_{-}(t)\right\rangle\) can be solved from Eqs. (4)-(5) using the quantum regression theorem, which yields the following equations of motion
\[\frac{d}{d\tau}\left[\begin{array}{c}\left\langle\sigma_{+}(\tau)\sigma_{-}(0)\right\rangle\\ \left\langle\sigma_{+}(\tau)c_{ccw}(0)\right\rangle\\ \left\langle\sigma_{+}(\tau)c_{cw}(0)\right\rangle\end{array}\right]=-i\left[\begin{array}{ccc}\omega_{0}&g&g\\ g&\omega_{c}-i\frac{\kappa}{2}&0\\ g&-i\kappa e^{i\phi}&\omega_{c}-i\frac{\kappa}{2}\end{array}\right]\left[\begin{array}{c}\left\langle\sigma_{+}(\tau)\sigma_{-}(0)\right\rangle\\ \left\langle\sigma_{+}(\tau)c_{ccw}(0)\right\rangle\\ \left\langle\sigma_{+}(\tau)c_{cw}(0)\right\rangle\end{array}\right] \tag{13}\]
Using the initial conditions \(\left\langle\sigma_{+}(0)\sigma_{-}(0)\right\rangle=1\), \(\left\langle\sigma_{+}(0)c_{ccw}(0)\right\rangle=0\), and \(\left\langle\sigma_{+}(0)c_{cw}(0)\right\rangle=0\), the above correlations can be easily obtained by taking the Laplace transform \(\left\langle O(\tau)\right\rangle\rightarrow\left\langle O(s)\right\rangle\)
\[s\left[\begin{array}{c}\left\langle\sigma_{+}\sigma_{-}(s)\right\rangle\\ \left\langle\sigma_{+}c_{ccw}(s)\right\rangle\\ \left\langle\sigma_{+}c_{cw}(s)\right\rangle\end{array}\right]=-i\left[\begin{array}{ccc}\omega_{0}&g&g\\ g&\omega_{c}-i\frac{\kappa}{2}&0\\ g&-i\kappa e^{i\phi}&\omega_{c}-i\frac{\kappa}{2}\end{array}\right]\left[\begin{array}{c}\left\langle\sigma_{+}\sigma_{-}(s)\right\rangle\\ \left\langle\sigma_{+}c_{ccw}(s)\right\rangle\\ \left\langle\sigma_{+}c_{cw}(s)\right\rangle\end{array}\right]+\left[\begin{array}{c}1\\ 0\\ 0\end{array}\right] \tag{14}\]
The solutions are given by
\[\left\langle\sigma_{+}\sigma_{-}(s)\right\rangle=\frac{1}{s+i\omega_{0}+\frac{ g^{2}}{s+i\left(\omega_{c}-i\frac{\kappa}{2}\right)}\left[2-\frac{\kappa e^{i\phi}}{s+i \left(\omega_{c}-i\frac{\kappa}{2}\right)}\right]} \tag{15}\]
Transforming into the frequency domain by replacing \(s=-i\omega\), we have
\[\left(-i\left(\omega-\omega_{0}\right)+g^{2}\left\{\frac{2}{-i\left(\omega-\omega_{c}\right)+\frac{\kappa}{2}}-\frac{\kappa e^{i\phi}}{\left[-i\left(\omega-\omega_{c}\right)+\frac{\kappa}{2}\right]^{2}}\right\}\right)\left\langle\sigma_{+}\sigma_{-}(\omega)\right\rangle=1 \tag{16}\]
Therefore,
\[\left\langle\sigma_{+}\sigma_{-}(\omega)\right\rangle=\frac{i}{\left(\omega- \omega_{0}\right)-g^{2}\left\{\frac{2}{\left(\omega-\omega_{c}\right)+i\frac{ \kappa}{2}}-\frac{i\kappa e^{i\phi}}{\left[\left(\omega-\omega_{c}\right)+i \frac{\kappa}{2}\right]^{2}}\right\}} \tag{17}\]
We identify the response function of CEP cavity as
\[\chi(\omega)=\frac{2}{\left(\omega-\omega_{c}\right)+i\frac{\kappa}{2}}-\frac {i\kappa e^{i\phi}}{\left[\left(\omega-\omega_{c}\right)+i\frac{\kappa}{2} \right]^{2}} \tag{18}\]
where the first term on the right-hand side denotes the usual Lorentz response, with a factor of 2 representing the coupling of the QE to the two cavity modes. The second term on the right-hand side exhibits the characteristic of a squared Lorentz response and is thus contributed by the CEP. The solution for \(\langle\sigma_{+}\sigma_{-}(\omega)\rangle\) can then be rewritten as
\[\langle\sigma_{+}\sigma_{-}(\omega)\rangle=\frac{i}{\omega-\omega_{0}-\Delta( \omega)+i\frac{\Gamma(\omega)}{2}} \tag{12}\]
Therefore, the SE spectrum is expressed as
\[S(\omega)=\frac{2}{\pi}\operatorname{Re}[\langle\sigma_{+}\sigma_{-}(\omega) \rangle]=\frac{1}{\pi}\frac{\Gamma(\omega)}{\left[\omega-\omega_{0}-\Delta( \omega)\right]^{2}+\left[\frac{\Gamma(\omega)}{2}\right]^{2}} \tag{13}\]
with the photon induced Lamb shift
\[\Delta(\omega)=g^{2}\operatorname{Re}[\chi(\omega)]=g^{2}\,\frac{\left[\left(\omega-\omega_{c}\right)^{2}-\left(\frac{\kappa}{2}\right)^{2}\right]\left[2\left(\omega-\omega_{c}\right)+\kappa\sin(\phi)\right]+\kappa^{2}\left(\omega-\omega_{c}\right)\left[1-\cos(\phi)\right]}{\left[\left(\omega-\omega_{c}\right)^{2}+\left(\frac{\kappa}{2}\right)^{2}\right]^{2}} \tag{14}\]
and the local coupling strength
\[\Gamma(\omega)=-2g^{2}\operatorname{Im}[\chi(\omega)]=-2g^{2}\,\frac{\left[\left(\omega-\omega_{c}\right)^{2}-\left(\frac{\kappa}{2}\right)^{2}\right]\kappa[1-\cos(\phi)]-\kappa\left(\omega-\omega_{c}\right)\left[2\left(\omega-\omega_{c}\right)+\kappa\sin(\phi)\right]}{\left[\left(\omega-\omega_{c}\right)^{2}+\left(\frac{\kappa}{2}\right)^{2}\right]^{2}} \tag{15}\]
For vacancy-like bound state (\(\phi=2n\pi\)), the local coupling strength is
\[\Gamma(\omega)=4g^{2}\kappa\left[\frac{\omega-\omega_{c}}{\left(\omega-\omega _{c}\right)^{2}+\left(\frac{\kappa}{2}\right)^{2}}\right]^{2}=2\pi J(\omega) \tag{16}\]
where \(J(\omega)\) is given in Eq. (12).
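A short numerical cross-check (ours, arbitrary parameters) of the identity \(\Gamma(\omega)=2\pi J(\omega)\) for \(\phi=0\):

```python
import numpy as np

g, kappa, wc = 0.8, 1.0, 0.0
w = np.linspace(-3.0, 3.0, 13)
z = (w - wc) + 0.5j*kappa
Gamma = -2*g**2*(2/z - 1j*kappa/z**2).imag                  # Eq. (8) with phi = 0
J = (2*g**2*kappa/np.pi)*((w - wc)/((w - wc)**2 + (kappa/2)**2))**2
print(np.allclose(Gamma, 2*np.pi*J))                        # True
```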
## Appendix C Spontaneous entanglement generation at vacancy-like bound state
The system Hamiltonian for spontaneous entanglement generation (SEG) is written as
\[H^{M}=H_{0}^{M}+H_{I}^{M} \tag{17}\]
where \(H_{0}^{M}\) and \(H_{I}^{M}\) are given by
\[H_{0}^{M}=\omega_{c}\sum_{j=1,2}\sigma_{+}^{(j)}\sigma_{-}^{(j)}+\omega_{c}c_{ccw}^{\dagger}c_{ccw}+\omega_{c}c_{cw}^{\dagger}c_{cw} \tag{18}\]
\[H_{I}^{M}=\sum_{j=1,2}g\left(\sigma_{-}^{(j)}c_{ccw}^{\dagger}+c_{ccw}\sigma_{+ }^{(j)}\right)+g\left(\sigma_{-}^{(j)}c_{cw}^{\dagger}+c_{cw}\sigma_{+}^{(j)}\right) \tag{19}\]
With the extended cascaded QME (Eq. (1)), we can obtain the effective Hamiltonian in the single-excitation subspace
\[\begin{split} H_{\text{eff}}=\omega_{c}\sum_{j=1,2}&\sigma_{+}^{(j)}\sigma_{-}^{(j)}+\left(\omega_{c}-i\frac{\kappa}{2}\right)c_{ccw}^{\dagger}c_{ccw}+\left(\omega_{c}-i\frac{\kappa}{2}\right)c_{cw}^{\dagger}c_{cw}\\ &+\sum_{j=1,2}g\left(\sigma_{-}^{(j)}c_{ccw}^{\dagger}+c_{ccw}\sigma_{+}^{(j)}\right)+g\left(\sigma_{-}^{(j)}c_{cw}^{\dagger}+c_{cw}\sigma_{+}^{(j)}\right)-i\kappa e^{i\phi}c_{ccw}c_{cw}^{\dagger}\end{split} \tag{20}\]
The corresponding state vector is given by
\[|\Psi(t)\rangle=C_{gg}(t)|gg00\rangle+C_{eg}(t)|eg00\rangle+C_{ge}(t)|ge00\rangle+C_{10}(t)|gg10\rangle+C_{01}(t)|gg01\rangle \tag{21}\]
where \(\left|n_{1}n_{2}mp\right\rangle=\left|n_{1}\right\rangle\otimes\left|n_{2}\right\rangle\otimes\left|m\right\rangle\otimes\left|p\right\rangle\), with \(\left|n_{1}\right\rangle\) and \(\left|n_{2}\right\rangle\) representing that each QE is either in the excited state \(\left(\left|n_{1}\right\rangle,\left|n_{2}\right\rangle=\left|e\right\rangle\right)\) or in the ground state \(\left(\left|n_{1}\right\rangle,\left|n_{2}\right\rangle=\left|g\right\rangle\right)\), and \(\left|m\right\rangle\) and \(\left|p\right\rangle\) denoting that there are \(m\) photons in the CCW mode and \(p\) photons in the mirrored CW mode, respectively. With the Schrodinger equation \(id|\Psi(t)\rangle/dt=H_{\mathrm{eff}}\left|\Psi(t)\right\rangle\), we can obtain the equations for the coefficients
\[i\frac{d}{dt}C_{eg}(t)=\omega_{c}C_{eg}(t)+gC_{10}(t)+gC_{01}(t) \tag{10}\]
\[i\frac{d}{dt}C_{ge}(t)=\omega_{c}C_{ge}(t)+gC_{10}(t)+gC_{01}(t) \tag{11}\]
\[i\frac{d}{dt}C_{10}(t)=\left(\omega_{c}-i\frac{\kappa}{2}\right)C_{10}(t)+gC_ {eg}(t)+gC_{ge}(t) \tag{12}\]
\[i\frac{d}{dt}C_{01}(t)=\left(\omega_{c}-i\frac{\kappa}{2}\right)C_{01}(t)+gC_ {eg}(t)+gC_{ge}(t)-i\kappa e^{i\phi}C_{10}(t) \tag{13}\]
For vacancy-like bound state (\(\phi=2n\pi\)), the equations can be easily solved through the Laplace transform
\[C_{eg}(t)=\frac{8g^{2}+\kappa^{2}+2ge^{-\frac{\kappa}{2}t}[4g\cos(2gt)+\kappa\sin(2gt)]}{16g^{2}+\kappa^{2}} \tag{14}\]
\[C_{ge}(t)=\frac{2ge^{-\frac{\kappa}{2}t}\left[-4ge^{\frac{\kappa}{2}t}+4g\cos(2gt)+\kappa\sin(2gt)\right]}{16g^{2}+\kappa^{2}} \tag{15}\]
Then the dynamical concurrence can be obtained as \(C(t)=2\left|C_{eg}(t)C_{ge}^{*}(t)\right|\).
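A short numerical sketch (ours, with \(\kappa=1\)) evaluating the concurrence from the two closed-form amplitudes above; the steady-state value approaches its maximum of 0.5 as \(g\) grows, consistent with Fig. 3(c):

```python
import numpy as np

kappa = 1.0
t = np.linspace(0.0, 30.0, 1500)

def concurrence(g):
    n = 16*g**2 + kappa**2
    osc = 4*g*np.cos(2*g*t) + kappa*np.sin(2*g*t)
    Ceg = (8*g**2 + kappa**2 + 2*g*np.exp(-kappa*t/2)*osc)/n
    Cge = 2*g*np.exp(-kappa*t/2)*(-4*g*np.exp(kappa*t/2) + osc)/n
    return 2*np.abs(Ceg*Cge)        # C(t) = 2|C_eg C_ge*|; amplitudes are real here

for g in (0.25, 0.5, 1.0):
    print(g, concurrence(g)[-1])    # steady-state concurrence grows towards 0.5
```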
## Appendix D Single-photon generation at Friedrich-Wintgen bound state
The single-photon generation through photon blockade requires weak coherent pumping. In this section, we present a derivation of the analytical expressions for the averaged photon number and the zero-time-delay second-order correlation function of the CW mode using perturbation theory. A driving Hamiltonian is implemented in the extended cascaded QME for the QE-driven case, which is
\[H_{\mathrm{driving}}=\Omega\left(e^{-i\omega_{L}t}\sigma_{+}+\sigma_{-}e^{i \omega_{L}t}\right) \tag{16}\]
where \(\omega_{L}\) is the frequency of laser field and \(\Omega\) is the driving strength. Applying the unitary transformation \(U=\exp\left[-i\omega_{L}\left(c_{ccw}^{\dagger}c_{ccw}+c_{cw}^{\dagger}c_{cw}+ \sigma_{+}\sigma_{-}\right)t\right]\), we can obtain the effective Hamiltonian
\[H_{\mathrm{eff}}^{t}=H^{t}+EV \tag{17}\]
with
\[\begin{gathered} H^{t}=\Delta_{0}\sigma_{+}\sigma_{-}+\Delta_{c}c _{ccw}^{\dagger}c_{ccw}+\Delta_{c}c_{cw}^{\dagger}c_{cw}+g\left(\sigma_{-}c_{ ccw}^{\dagger}+c_{ccw}\sigma_{+}\right)+g\left(\sigma_{-}c_{cw}^{\dagger}+c_{ cw}\sigma_{+}\right)\\ -i\kappa e^{i\phi}c_{ccw}c_{cw}^{\dagger}\end{gathered} \tag{18}\]
and
\[V=\Omega\left(\sigma_{+}+\sigma_{-}\right) \tag{19}\]
where \(\Delta_{0}=\Delta_{cL}-i\gamma/2\) and \(\Delta_{c}=\Delta_{cL}-i\kappa/2\) with \(\Delta_{cL}=\omega_{c}-\omega_{L}\) being the frequency detuning between the system and the laser field. \(E\) is a perturbative parameter of the laser intensity. Since the evaluation of \(g^{(2)}(0)=\left\langle c_{cw}^{\dagger}c_{cw}^{\dagger}c_{cw}c_{cw}\right\rangle/I_{c}^{2}\) requires calculating the second-order correlation function of the cavity operator, we expand the time-dependent wave function \(\left|\Psi(t)\right\rangle\) in terms of \(E\) as \(\left|\Psi(t)\right\rangle=\sum_{l=0}^{2}E^{l}\left|\psi_{l}(t)\right\rangle\), where we have truncated the state space at the two-excitation manifold and, as a result, \(\left|\psi_{l}(t)\right\rangle\) is expressed as
\[\left|\psi_{l}(t)\right\rangle=\sum_{n+m+p\leq 2,n=0,1}C_{nmp}^{l}|n\rangle_{e}|m \rangle_{ccw}|p\rangle_{cw} \tag{20}\]
where \(C_{nmp}^{l}\) is the coefficient of quantum state \(|n\rangle_{e}|m\rangle_{ccw}|p\rangle_{cw}\) in \(l\)-order expansion, where there are \(m\) photons in CCW mode and \(p\) photons in CW mode, while the QE is either excited (\(n=1\)) or unexcited (\(n=0\)). For \(l=1\) and 2, the state vector is given by
\[|\psi_{1}(t)\rangle=C_{100}^{1}|100\rangle+C_{010}^{1}|010\rangle+C_{001}^{1}|0 01\rangle \tag{10}\]
\[|\psi_{2}(t)\rangle=C_{011}^{2}|011\rangle+C_{110}^{2}|110\rangle+C_{101}^{2}|101\rangle+C_{020}^{2}|020\rangle+C_{002}^{2}|002\rangle \tag{11}\]
From the Schrodinger equation \(id|\Psi(t)\rangle/dt=H_{\text{eff}}^{t}|\Psi(t)\rangle\), we have
\[i\frac{d}{dt}\left|\psi_{0}(t)\right\rangle=H^{t}\left|\psi_{0}(t)\right\rangle \tag{12}\]
\[i\frac{d}{dt}\left|\psi_{l}(t)\right\rangle=H^{t}\left|\psi_{l}(t)\right\rangle +V\left|\psi_{l-1}(t)\right\rangle \tag{13}\]
Substituting \(H^{t}\) and \(V\) into Eqs. (12) and (13), we can obtain the following equations of motion for the coefficients
\[i\frac{d}{dt}C_{100}^{1}=\Delta_{0}C_{100}^{1}+gC_{010}^{1}+gC_{001}^{1}+\Omega \tag{14}\]
\[i\frac{d}{dt}C_{010}^{1}=\Delta_{c}C_{010}^{1}+gC_{100}^{1} \tag{15}\]
\[i\frac{d}{dt}C_{001}^{1}=\Delta_{c}C_{001}^{1}+gC_{100}^{1}-i\kappa e^{i\phi} C_{010}^{1} \tag{16}\]
and \(C_{000}^{0}\approx 1\) due to the assumption of weak pump. The above equations yield
\[C_{001}^{1}=\Omega g\frac{\Delta_{c}+i\kappa e^{i\phi}}{D_{1}} \tag{17}\]
with
\[D_{1}=\left|\begin{array}{ccc}\Delta_{0}&g&g\\ g&\Delta_{c}&0\\ g&-i\kappa e^{i\phi}&\Delta_{c}\end{array}\right| \tag{18}\]
Therefore, the averaged photon number of CW mode is given by
\[I_{c}=\left\langle\Psi(0)\left|c_{cw}^{\dagger}c_{cw}\right|\Psi(0)\right\rangle\approx\left|C_{001}^{1}\right|^{2}=\left|\Omega g\frac{\Delta_{c}+i\kappa e^{i\phi}}{D_{1}}\right|^{2} \tag{19}\]
We can see that \(D_{1}\) vanishes when \(\omega_{L}\) coincides with an eigenvalue of the matrix \(\mathbf{M}_{c}\) in Eq. (13), and thus the cavity photon number \(I_{c}\) diverges at the Friedrich-Wintgen DBS due to its zero decay; perfect single-photon purity can then be achieved since \(g^{(2)}(0)\propto I_{c}^{-2}\). This unphysical result comes from the truncation of the state space at at most one excitation: \(I_{c}\) remains finite when the higher-order manifolds are taken into account. However, the analytical expression for \(I_{c}\) predicts that the formation of a bound state in the single-excitation subspace can produce a prominent enhancement of both the efficiency and the single-photon purity of the photon blockade.
From Eq. (13), we can also obtain the equations of two-excitation subspace
\[i\frac{d}{dt}C_{011}^{2}=2\Delta_{c}C_{011}^{2}+gC_{101}^{2}+gC_{110}^{2}-i \sqrt{2}\kappa e^{i\phi}C_{020}^{2} \tag{20}\]
\[i\frac{d}{dt}C_{110}^{2}=\left(\Delta_{0}+\Delta_{c}\right)C_{110}^{2}+\sqrt{ 2}gC_{020}^{2}+gC_{011}^{2}+\Omega C_{010}^{1} \tag{21}\]
\[i\frac{d}{dt}C_{101}^{2}=\left(\Delta_{0}+\Delta_{c}\right)C_{101}^{2}+gC_{011}^{2 }+\sqrt{2}gC_{002}^{2}-ike^{i\phi}C_{110}^{2}+\Omega C_{001}^{1} \tag{47}\]
\[i\frac{d}{dt}C_{020}^{2}=2\Delta_{c}C_{020}^{2}+\sqrt{2}gC_{110}^{2} \tag{48}\]
\[i\frac{d}{dt}C_{002}^{2}=2\Delta_{c}C_{002}^{2}+\sqrt{2}gC_{101}^{2}-i\sqrt{2} \kappa e^{i\phi}C_{011}^{2} \tag{49}\]
We thus can obtain
\[\begin{split} C_{002}^{2}=2\sqrt{2}gD_{2}^{-1}\left\{C_{001}^{1} \left\{\Delta_{c}\left[2\Delta_{c}\left(\Delta_{c}+\Delta_{0}\right)-3g^{2} \right]+i\kappa e^{i\phi}\left[\Delta_{c}\left(\Delta_{c}+\Delta_{0}\right)-2 g^{2}\right]\right\}\right.\\ \left.+C_{010}^{1}\left\{\Delta_{c}g^{2}+i\kappa e^{i\phi}\left[ \Delta_{c}\left(3\Delta_{c}+\Delta_{0}\right)+g^{2}\right]-\kappa^{2}e^{2i \phi}\left(2\Delta_{c}+\Delta_{0}\right)\right\}\right\}\end{split} \tag{50}\]
with
\[D_{2}=\left|\begin{array}{cccc}2\Delta_{c}&g&g&-i\sqrt{2}\kappa e^{i\phi}& 0\\ g&\Delta_{0}+\Delta_{c}&0&\sqrt{2}g&0\\ g&-i\kappa e^{i\phi}&\Delta_{0}+\Delta_{c}&0&\sqrt{2}g\\ 0&\sqrt{2}g&0&2\Delta_{c}&0\\ -i\sqrt{2}\kappa e^{i\phi}&0&\sqrt{2}g&0&2\Delta_{c}\end{array}\right|=4D_{1} \left[-2g^{2}+\Delta_{c}\left(3\Delta_{c}+2\Delta_{0}\right)\right]+4\Delta_{c }^{4}\left(2\Delta_{c}+\Delta_{0}\right) \tag{51}\]
Then the zero-time-delayed second-order correlation function is evaluated as
\[g^{(2)}(0)=\left\langle\Psi(0)\left|c_{cw}^{\dagger}c_{cw}^{\dagger}c_{cw}c_{ cw}\right|\Psi(0)\right\rangle/I_{c}^{2}\approx\left|C_{002}^{2}\right|^{2}/I_{c}^{2} \tag{52}\]
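To visualize the predicted enhancement, the truncated single-excitation expression for \(I_{c}\) can be evaluated directly; the sketch below (ours, using the Fig. 5 parameters) shows \(I_{c}\) peaking sharply as the laser detuning approaches the Friedrich-Wintgen DBS energy, with the peak diverging in the limit \(\gamma\to 0\):

```python
import numpy as np

kappa = 1.0
g, gamma = kappa/2, kappa/20
Omega = 1e-2*gamma
disc = np.sqrt(8*g**2 - kappa**2 + 0j)
phi = (-1j*np.log(-((4*g**2 - kappa**2) + 1j*kappa*disc)/(4*g**2))).real

def Ic(delta_cL):
    d0 = delta_cL - 0.5j*gamma                       # Delta_0
    dc = delta_cL - 0.5j*kappa                       # Delta_c
    D1 = np.linalg.det(np.array([[d0, g, g],
                                 [g, dc, 0],
                                 [g, -1j*kappa*np.exp(1j*phi), dc]]))
    return abs(Omega*g*(dc + 1j*kappa*np.exp(1j*phi))/D1)**2

for d in np.linspace(-disc.real/2 - 0.2, -disc.real/2 + 0.2, 5):
    print(round(d, 3), Ic(d))                        # maximum at the FW DBS
```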
|
2301.06226 | Deep Learning based Novel Cascaded Approach for Skin Lesion Analysis | Automatic lesion analysis is critical in skin cancer diagnosis and ensures
effective treatment. The computer aided diagnosis of such skin cancer in
dermoscopic images can significantly reduce the clinicians workload and help
improve diagnostic accuracy. Although researchers are working extensively to
address this problem, early detection and accurate identification of skin
lesions remain challenging. This research focuses on a two step framework for
skin lesion segmentation followed by classification for lesion analysis. We
explored the effectiveness of deep convolutional neural network based
architectures by designing an encoder-decoder architecture for skin lesion
segmentation and CNN based classification network. The proposed approaches are
evaluated quantitatively in terms of the Accuracy, mean Intersection over Union
and Dice Similarity Coefficient. Our cascaded end to end deep learning based
approach is the first of its kind, where the classification accuracy of the
lesion is significantly improved because of prior segmentation. | Shubham Innani, Prasad Dutande, Bhakti Baheti, Ujjwal Baid, Sanjay Talbar | 2023-01-16T01:08:32Z | http://arxiv.org/abs/2301.06226v1 | # Deep Learning based Novel Cascaded Approach for Skin Lesion Analysis
###### Abstract
Patients diagnosed with skin cancer like melanoma are prone to a high mortality rate. Automatic lesion analysis is critical in skin cancer diagnosis and ensures effective treatment. The computer-aided diagnosis of such skin cancer in dermoscopic images can significantly reduce the clinicians' workload and help improve diagnostic accuracy. Although researchers are working extensively to address this problem, early detection and accurate identification of skin lesions remain challenging. This research focuses on a two-step framework for skin lesion segmentation followed by classification for lesion analysis. We explored the effectiveness of deep convolutional neural network (CNN) based architectures by designing an encoder-decoder architecture for skin lesion segmentation and CNN based classification network. The proposed approaches are evaluated quantitatively in terms of the Accuracy, mean Intersection over Union(mIoU) and Dice Similarity Coefficient. Our cascaded end-to-end deep learning-based approach is the first of its kind, where the classification accuracy of the lesion is significantly improved because of prior segmentation. The code is available at [https://www.github.com/shubhaminnani/skin/lesion](https://www.github.com/shubhaminnani/skin/lesion)
Keywords: Skin Lesion, Deep Learning, Classification, Segmentation
## 1 Introduction and Related Work
Skin cancer is one of the most fatal illnesses in today's world. Even though it is the least common, the disease is responsible for around 91,000 deaths every year [2]. Regular monitoring and early detection play a vital role in reducing the mortality rate of skin cancer and can help in precise treatment planning and improving quality of life. Survival rates decrease significantly if skin cancer is left untreated until an advanced stage of the disease [23]. A dermatologist examines skin images with dermoscopy, a non-invasive diagnostic tool. This enables
dermatologists to visualize delicate clinical patterns of skin lesions and subsurface skin structures that are generally not visible to the unaided eye. These images of the skin are studied under a microscope to point out skin abnormalities and classify them into various types of skin cancers [1]. The enhanced dermoscopic images are free from any skin surface reflections, which helps the dermatologist diagnose skin cancer accurately.
Skin cancer detection, i.e., lesion segmentation, is one of the essential and primary steps in accurate and precise treatment planning for various skin diseases. Automatic skin cancer lesion segmentation is very challenging because of the significant variations in lesion characteristics, such as size, location, shape, color, texture, and skin type. The segmentation becomes more arduous due to fuzzy lesion boundaries, poor color contrast with the surrounding skin, huge intra-class variation, and the presence of artifacts such as veins and hairs. Various types of skin cancer images are shown in Fig. 1. Skin lesion segmentation has drawn researchers for over a decade because of its increased clinical applicability and demanding nature. Several image processing and supervised machine learning-based approaches have been presented for accurate lesion segmentation, each with pros and cons. Most studies have aimed at creating computer-aided diagnosis frameworks for skin lesions that would recognize anomalies or skin illnesses. The methods in the literature usually follow a common analysis pipeline [28]: the first step is to delineate the lesion area in the image from healthy skin, followed by automated feature extraction computed over the region of interest; the final step is to predict the type of skin lesion (classification task). Several conventional methods are available in the literature to handle the segmentation of skin lesions. Comprehensive reviews of different lesion segmentation algorithms are available in [9][8][27][20][22].
In recent times, deep learning techniques have outperformed existing state-of-the-art approaches in various computer vision tasks like segmentation [7][18], detection [3], and classification [6] [5][4]. The availability of computing resources and large annotated datasets has enabled researchers to develop supervised Deep Neural Network models to address these tasks. With the evolution of DCNNs and the various skin lesion challenges held in recent years [11], multiple effective computational approaches have appeared to solve particular problems in this field. Most current successful strategies are based on CNNs [31], [14], [32], [13]. Along with lesion segmentation, deep learning approaches have improved classification performance, leading to better diagnosis of diseases in medical
Figure 1: Classwise sample images from the HAM10000 classification dataset
imaging. These methodologies are used to predict the presence of illness and recognize its classes. Recent studies demonstrated remarkable performance in classifying skin cancer using deep learning algorithms for binary classification [13] but failed to achieve comparable performance in multi-class classification.
This research aims to introduce a two-step automated system that first segments the skin lesion and then classifies the disease. After a thorough literature survey, we believe the proposed end-to-end deep learning approach is the first of its kind for skin lesion segmentation followed by classification across seven types of lesions. No adequate single dataset provides both segmentation masks and classification labels for seven different types of lesions. To address this, for the segmentation task we work with the International Skin Imaging Collaboration (ISIC) 2018 dataset [12], where images with segmentation labels are available, and for classification the HAM10000 dataset [30], which consists of seven different skin lesion classes. In our two-step proposed approach, the segmentation model is first trained on the ISIC 2018 dataset. With the trained segmentation model, the HAM10000 dataset, for which only classification labels are available, is segmented. The Region of Interest (ROI) extracted from the segmented HAM10000 images is fed as input to the classification framework. The two-step framework is shown in Fig. 2.
The rest of the article is arranged as follows: the database description is given in Section 2; the presented methods for segmentation and classification of lesions are described in Section 3; Section 4 comprises evaluation metrics, experimental results, and performance analysis; the article is concluded in Section 5.
## 2 Dataset
The International Skin Imaging Collaboration (ISIC) 2018 dataset has 2594 images with corresponding ground truth labels for lesion segmentation. The images have different sizes, ranging from a few hundred to a few thousand pixels, and varying width-to-height ratios. The lesions have distinct appearances and are located in different parts of the skin. The HAM10000 [30] dataset is used for the classification task, consisting of seven types of lesion disease in dermoscopy images. Fig. 1 provides a few
Figure 2: Proposed two-stage deep learning-based framework for lesion segmentation and classification
sample images from the dataset. Standard pre-processing, such as scaling pixel values to [0,1] or [-1,1], is applied to the entire dataset.
The classification dataset consists of around 10015 lesion images across seven classes: Actinic keratosis / Bowen's disease (intraepithelial carcinoma) (AKIEC), Basal cell carcinoma (BCC), Benign keratosis (solar lentigo / seborrheic keratosis / lichen planus-like keratosis) (BKL), Dermatofibroma (DF), Melanoma (MEL), Melanocytic nevus (NV), and Vascular lesion (VASC). The data distribution is presented in Table 1; the dataset exhibits a challenging class imbalance, being highly skewed towards certain classes. As a result, images are sparse for specific groups like DF, VASC, and AKIEC.
## 3 Proposed Methodology
We propose a two-step framework to handle the task of segmentation and classification in skin lesions. In the first step, the images with skin lesions are segmented to generate coarse-level masks. These segmented masks are multiplied
\begin{table}
\begin{tabular}{l c c} \hline \hline
**Class** & **Number of Images** & **Class Percentage** \\ \hline
AKIEC & 327 & 3.27 \\
BCC & 514 & 5.13 \\
BKL & 1099 & 10.97 \\
DF & 115 & 1.15 \\
MEL & 1113 & 11.11 \\
NV & 6705 & 66.95 \\
VASC & 142 & 1.42 \\ \hline \hline
\end{tabular}
\end{table}
Table 1: Class distribution in the HAM10000 dataset
Figure 3: Proposed encoder-decoder architecture for skin lesion segmentation
with the corresponding image to extract the coarse-level lesion region in the original image, as shown in Fig. 2, which removes redundant information in the image; these ROI images are then input to the classification network that predicts the type of lesion.
### Segmentation Approach
Encoder-decoder architectures are widely used in computer vision for image segmentation tasks [19][26]. Ronneberger et al. [24] presented U-Net, a breakthrough CNN-based architecture for medical image segmentation. Generally, the encoder module is a feature learning block that captures spatial features of the input. It progressively downsamples the input image and decreases feature dimensions to capture high-level patterns. A decoder block consists of layers that upsample the feature map obtained from the encoder output using the extracted spatial features. The encoder-decoder module used in this article is graphically presented in Fig. 3. In our approach, we designed three different encoder-decoder networks by replacing the encoder block in U-Net with popular CNN architectures such as ResNet [15], InceptionResNetV2 [33], and EfficientNet [29]. Our encoder-decoder architecture consists of contraction and expansion paths. The encoder consists of convolutional and max-pooling blocks, which downsample the image to extract high-level features. The CNN output contains denser, high-level feature maps. After every block, the number of feature maps doubles to learn complex features accurately.
In the encoder, dense features are extracted with an output stride of 16 in each variant, where the output stride is the ratio of the input image size to the output feature map size. These extracted features work well in the classification task, but performance suffers when rebuilding a fine segmentation map. Hence, it is challenging to rebuild a segmentation map at the original input image dimensions from the encoder feature map. To overcome this problem, the decoder module follows the U-Net decoder design: the encoder output is expanded in the decoder, which consists of convolutional and bilinear upsampling blocks. Low-
Figure 4: General architecture for skin cancer classifier where Global Average Pooling is abbreviated as GAP.
level feature maps from the encoder are concatenated with the corresponding decoder blocks of matching size to recover spatial detail and generate the segmented output more precisely.
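A minimal Keras sketch of this encoder-decoder design is shown below. It is our own simplified illustration, not the exact trained configuration: we keep the backbone's default stride-32 bottleneck, pick three commonly used EfficientNet skip layers by name, and choose arbitrary decoder channel widths.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def conv_block(x, filters):
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return layers.Conv2D(filters, 3, padding="same", activation="relu")(x)

def build_segmenter(input_shape=(512, 512, 3)):
    encoder = tf.keras.applications.EfficientNetB4(
        include_top=False, weights="imagenet", input_shape=input_shape)
    skip_names = ("block6a_expand_activation",   # stride 16
                  "block4a_expand_activation",   # stride 8
                  "block3a_expand_activation")   # stride 4
    x = encoder.output                           # bottleneck features
    for name, filters in zip(skip_names, (256, 128, 64)):
        x = layers.UpSampling2D(2, interpolation="bilinear")(x)
        x = layers.Concatenate()([x, encoder.get_layer(name).output])
        x = conv_block(x, filters)               # asymmetric decoder block
    x = layers.UpSampling2D(4, interpolation="bilinear")(x)
    out = layers.Conv2D(1, 1, activation="sigmoid")(x)   # binary lesion mask
    return Model(encoder.input, out)

model = build_segmenter()
```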
### Classification Approach
Convolutional Neural Networks have shown tremendous progress in the task of image classification. With advancements in computational resources and fine-tuning methods, CNNs fulfill the demand for performance in terms of accuracy. As shown in Fig. 4, a conventional CNN architecture consists of combined blocks of convolutional and downsampling layers, followed by a fully connected (FC) layer and the output class. For accurate predictions, the CNN automatically extracts patterns, known as features, from the input image and carries this information to the output block. In the classification step, the dermoscopy images of the HAM10000 dataset are to be classified into one of seven classes [30]. We use various classification architectures, which also serve as segmentation encoders, like ResNet, Xception, MobileNets, and EfficientNets, with an output stride of 32.
**ResNets**[15]: Deep neural networks suffer degraded performance due to the problem of vanishing gradients. To overcome this problem, He et al. proposed the idea of skip connections, or residual networks, as shown in Fig. 5(a). These residual networks, known as ResNets, achieved improved performance. ResNet has different variants formed by increasing the number of residual blocks, namely ResNet18, ResNet50, and so on. ResNet consists of \(3\times 3\) convolutional layers stacked with residual (skip) connections to form the residual block. For denser prediction in deeper models, feature maps are periodically doubled. The output of the final layer is 32 times smaller than the input shape: for an image with input shape \(224\times 224\), the output is of shape \(7\times 7\).
**Xception**[10]: F. Chollet et al. presented the Xception network as having superior performance. This architecture is inspired by Inception [33]. In Xception,
Figure 5: Basic building blocks of the various CNN architectures used for classification of skin lesion classes.
the Inception module of the Inception network is replaced by depthwise separable convolution (DSC). The Xception architecture extracts features with 36 convolutional layers grouped into 14 blocks. All the blocks except the first and last have skip connections from the previous block. Xception has DSC with a residual (skip) connection as the primary building layer, as in Fig. 5(d). The output stride of the final layer is 32.
**MobileNet**[16]: The MobileNet architecture is a lightweight architecture with depthwise separable convolution as its core building layer, as shown in Fig. 5(c). DSC is a factorized convolution consisting of depthwise and pointwise (\(1\times 1\)) convolutions. In MobileNet, a single filter is applied to each input channel (depthwise convolution), and a pointwise convolution, fed with the output of the depthwise convolution, then stacks the results together. A standard convolution filters and combines the input into the output in a single step, whereas DSC is a two-step process in which filtering is carried out in one layer and combining in another. This factorization significantly reduces model size and computation.
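The factorization is easy to see in code; this short Keras fragment (ours) contrasts the two operations:

```python
from tensorflow.keras import layers

# standard convolution: filters and combines channels in a single step
standard = layers.Conv2D(64, 3, padding="same")

# depthwise separable convolution: one 3x3 filter per input channel,
# then a 1x1 pointwise convolution that combines the channels
depthwise = layers.DepthwiseConv2D(3, padding="same")
pointwise = layers.Conv2D(64, 1, padding="same")

# for 64 input and 64 output channels: 9*64*64 weights for the standard
# kernel versus 9*64 + 64*64 for the separable pair (roughly 8x fewer)
```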
**EfficientNet**[29]: CNNs are typically developed under a fixed resource budget and then scaled up for better performance when more resources become available; e.g., ResNet-18 [15] can be scaled to ResNet-101 by adding layers. The traditional procedure for scaling a network is to increase the CNN depth or width, or to feed it a higher-resolution input image. These methods have proven to improve performance but require tedious manual tuning. In [29], the authors proposed a novel scaling approach using a compound coefficient that is highly effective for structural scaling of CNNs. Rather than arbitrarily increasing network dimensions such as resolution, depth, and width, EfficientNet scales all three uniformly with a fixed set of scaling factors through the compound coefficient. This network is built with mobile inverted bottleneck convolution (MBConv) [25] and squeeze-and-excitation optimization [17], as shown in Fig. 5(b).
## 4 Result and Discussion
We randomly divided the ISIC training dataset into an 80% training cohort and a 20% testing cohort. The dataset comprises images of varying sizes, which are rescaled to \(512\times 512\times 3\) for the segmentation task. The segmentation network is trained with a batch size of 8 using the sum of cross-entropy and Dice loss as the loss function, for 15 epochs with early stopping on the loss. The learning rate was maintained at 0.001 with the ADAM optimizer. In the classification task, we fed the network an input of size \(224\times 224\) with categorical cross-entropy as the loss function. The model is trained with a batch size of 8 for 30 epochs, again with early stopping on the loss. During training, we initialized the learning rate to 0.001 with the ADAM [21] optimizer. We augmented the data with popular augmentation techniques like rotation, shearing, zooming, brightness variation, and flipping of the original images for both the segmentation and classification tasks. The frameworks are designed with the TensorFlow 2.0 and Keras open-source libraries, and the models are trained on an NVIDIA P100 GPU with 16 GB memory.
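A sketch of the combined segmentation loss described above, reusing `model` from the earlier segmentation sketch; the soft-Dice form and smoothing constant are our own assumptions about how the stated cross-entropy-plus-Dice sum can be implemented:

```python
import tensorflow as tf

def dice_loss(y_true, y_pred, smooth=1.0):
    # soft Dice over the flattened predicted and ground-truth masks
    y_true = tf.reshape(y_true, [-1])
    y_pred = tf.reshape(y_pred, [-1])
    inter = tf.reduce_sum(y_true * y_pred)
    return 1.0 - (2.0*inter + smooth)/(tf.reduce_sum(y_true)
                                       + tf.reduce_sum(y_pred) + smooth)

def combined_loss(y_true, y_pred):
    bce = tf.reduce_mean(tf.keras.losses.binary_crossentropy(y_true, y_pred))
    return bce + dice_loss(y_true, y_pred)

model.compile(optimizer=tf.keras.optimizers.Adam(1e-3), loss=combined_loss)
```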
For the segmentation task, since the original U-Net encoder is just a stack of convolutional layers, the original U-Net underperforms. To address this, we increase the depth of the network with various encoders like ResNet, MobileNet, and EfficientNet. We also design an asymmetric decoder, as seen in Fig. 3: concatenating low-level features at selected intervals, rather than joining every block from the encoder as proposed by Ronneberger et al. in U-Net, improves performance. The proposed deep encoder with the asymmetric decoder thus improves overall performance.
After extracting the ROI by segmenting with the EfficientNet-based encoder, it is fed as input to various state-of-the-art classification networks like ResNet, MobileNet, EfficientNet, and Xception. As seen in Table 3, there is a significant performance gain when the ROI-extracted skin lesion is used. For an in-depth comparison, classification is performed with different CNNs with and without the ROI obtained from segmentation. The efficacy of the proposed approaches is evaluated in terms of various popular quantitative evaluation parameters: the performance of the segmentation approaches is assessed with the Dice Similarity Coefficient (DSC) and Mean Intersection over Union (mIoU), and that of the classification approach with accuracy. The performance of various encoder backbones on the segmentation task, in terms of DSC and mIoU, is given in Table 2. It can be observed that EfficientNetB4 outperformed the other encoders quantitatively.
\begin{table}
\begin{tabular}{l c c} \hline \hline
**Encoder backbone** & **Dice Score** & **mIoU** \\ \hline
Original U-Net & 71.53 & 60.58 \\
ResNet50 & 84.46 & 73.76 \\
ResNet101 & 86.30 & 76.77 \\
MobileNet & 83.90 & 71.32 \\
InceptionResNetV2 & 87.20 & 78.03 \\
EfficientNetB4 & 89.56 & 81.42 \\ \hline \hline
\end{tabular}
\end{table}
Table 2: Performance evaluation of the segmentation task on the test dataset in terms of Dice Score and Mean Intersection over Union
Figure 6: Sample ROI results extracted after segmentation for the classification task.
Segmentation outputs predicted by the model for five different images are presented in Fig. 7.
From Figs. 7(a) and 7(b), it can be observed that the proposed approach performs well even when non-skin objects are present in the image. The architecture could segment lesions even under severe occlusion by hairs. These segmentation results are then multiplied with the original image to extract the skin lesion, as shown in Fig. 6. It can be observed that, besides the skin lesion, various surrounding patterns may hamper classifier learning; the ROIs from Figs. 6(b) and (e) clearly justify the need for lesion segmentation before classification. The performance evaluation for the classification task with and without the ROI is given in Table 3. The architectures trained on images containing only the lesion ROI performed better in terms of accuracy, as shown in Table 3.
## 5 Conclusion
Skin lesion segmentation and classification are the primary steps in designing a Computer-Aided Diagnostic (CAD) tool and are essential for precise treatment planning. This study proposed a two-step approach with two distinct databases for skin lesion segmentation and classification. It was observed that, besides the lesion itself, various surrounding patterns may hamper the classifier's learning. To address this, we proposed a two-step approach where, in the first step, skin lesions are segmented and, in the second step, ROIs are extracted and given as input to the classification architecture. Experimental results showed that classification accuracy with the ROI as input outperformed that on lesion images with surrounding patterns, with an improvement of 5%. We currently report the performance of the proposed approach on the publicly available datasets.
|
2308.12758 | Quasi-invariance of Gaussian measures for the $3d$ energy critical
nonlinear Schrödinger equation | We consider the $3d$ energy critical
data distributed according to the Gaussian measure with covariance operator
$(1-\Delta)^{-s}$, where $\Delta$ is the Laplace operator and $s$ is
sufficiently large. We prove that the flow sends full measure sets to full
measure sets. We also discuss some simple applications. This extends a previous
result by Planchon-Visciglia and the second author from $1d$ to higher
dimensions. | Chenmin Sun, Nikolay Tzvetkov | 2023-08-24T13:04:00Z | http://arxiv.org/abs/2308.12758v2 | # Quasi-invariance of Gaussian measures for the \(3d\) energy critical nonlinear Schrodinger equation
###### Abstract.
We consider the \(3d\) energy critical nonlinear Schrodinger equation with data distributed according to the Gaussian measure with covariance operator \((1-\Delta)^{-s}\), where \(\Delta\) is the Laplace operator and \(s\) is sufficiently large. We prove that the flow sends full measure sets to full measure sets. We also discuss some simple applications. This extends a previous result by Planchon-Visciglia and the second author from \(1d\) to higher dimensions.
## 1. Introduction
### Motivation
The seminal paper [17] initiated the study of Hamiltonian PDE's with initial data distributed according to the Gibbs measure which is constructed from the Hamiltonian functional. The Gibbs measure construction is strongly inspired by earlier developments in quantum field theory (see e.g. [20, 37]). These Gibbs measures are absolutely continuous with respect to suitable Gaussian measures (or shifts of such Gaussian measures). They are at least formally invariant under the corresponding Hamiltonian flow and therefore the underlying Gaussian measure (or its shift) is quasi-invariant under the flow.
In dimensions \(\geq 2\), in order to consider initial data distributed according to the Gibbs measure, a renormalization of the equation under consideration is required, see e.g. [3, 6, 10, 12, 30, 33]. Such renormalizations have strong motivations from Physics but they also make the results not so natural from a classical PDE perspective. A notable exception is the cubic nonlinear Schrodinger equation for which a gauge transform links the (truncated) equation and its renormalized version.
One may also observe that full Gibbs measure sets cover a very tiny part of the phase space of a Hamiltonian PDE, and that the Gibbs measure plays no role in the dynamics of most initial distributions of the initial data. Observe that this is in sharp contrast with Langevin-type dynamics, where the (same) Gibbs measure plays a truly distinguished role because it attracts all initial distributions.
Motivated by the above observations, in recent years there has been an activity aiming to show that a more general class of Gaussian measures is quasi-invariant under Hamiltonian PDE's, see [7, 11, 14, 15, 16, 18, 19, 21, 25, 26, 31, 32, 35, 34, 36, 38, 39]. Such results allow one to give a statistical description of the Hamiltonian flow for a larger class of initial distributions of the initial data. In particular, one obtains results for data of arbitrary Sobolev regularity while
the Gibbs measures live in low regularity Sobolev spaces. Moreover, no renormalization of the equation is required (even if renormalized energies may be used in the proof, see [36, 21, 39]). It is also worth observing that the question of quasi-invariance of Gaussian measures for Hamiltonian PDE's does not seem to have an analogue in the context of dissipative PDE's.
Most of the results quoted in the previous paragraph deal with \(1d\) models. The only results in dimensions \(\geq 2\) are [36, 21, 39], and they deal with nonlinear wave equations. The approach used in these works, based on renormalized energies, does not apply to the nonlinear Schrodinger equation (NLS) because of the lack of explicit smoothing in the equation. Our goal here is to resolve this issue and prove the quasi-invariance of Gaussian measures supported by sufficiently regular functions under the NLS flow in higher dimensions. Our approach is based on normal form reductions as in [31, 32, 35], combined with a soft analysis initiated in [25]. The main idea in this paper is the identification of a remarkable cancellation of the worst pairing when estimating the divergence of the Hamiltonian vector field with respect to a weighted Gaussian measure (see Section 7 below). The weight is naturally produced by the normal form reduction and is therefore related to the nature of the resonant set (while in the wave equation case the weight is related to the potential energy). This remarkable cancellation is certainly related to the Hamiltonian structure and hopefully may be used in other contexts. Our result only gives qualitative quasi-invariance for sufficiently regular initial distributions. Therefore several challenging issues remain open (see the remarks after the statement of the main result).
### Main result
In this work we study the most challenging model for which we have succeeded in making our approach work. Namely, we consider the defocusing energy-critical NLS
\[i\partial_{t}u+\Delta u=|u|^{4}u,\quad(t,x)\in\mathbb{R}\times \mathbb{T}^{3}, \tag{1.1}\]
where \(\mathbb{T}^{3}:=\mathbb{R}^{3}/(2\pi\mathbb{Z})^{3}\). Equation (1.1) is a Hamiltonian system with the conserved mass and energy:
\[M[u]:=\int_{\mathbb{T}^{3}}|u|^{2}\mathrm{d}x,\quad H[u]:=\frac{1} {2}\int_{\mathbb{T}^{3}}|\nabla u|^{2}\mathrm{d}x+\frac{1}{6}\int_{\mathbb{T}^ {3}}|u|^{6}\mathrm{d}x.\]
These conservation laws allow one to construct, relatively easily, global weak solutions of (1.1) in the Sobolev space \(H^{1}(\mathbb{T}^{3})\) via basic compactness arguments. Unfortunately such techniques are not suitable to prove uniqueness and propagation of higher Sobolev regularities. Thanks to the remarkable work by Ionescu-Pausader [23] (based on the previous contributions [2, 4, 9, 22, 24]) we know that (1.1) is globally well-posed in \(H^{s}(\mathbb{T}^{3})\), \(s\geq 1\). Namely, for every \(u_{0}\in H^{s}(\mathbb{T}^{3})\), \(s\geq 1\), there exists a unique global solution of (1.1) in \(C(\mathbb{R};H^{s}(\mathbb{T}^{3}))\) such that \(u(0,x)=u_{0}(x)\). Let us denote by \(\Phi(t)\) the Ionescu-Pausader flow of (1.1).
When studying the statistical properties of (1.1), we assume that the initial data are distributed according to the Gaussian probability measure \(\mu_{s}\), formally defined as "\(\frac{1}{\mathcal{Z}}e^{-\frac{1}{2}\|u\|_{H^{s}}^{2}}du\)", induced
by the random Fourier series
\[\phi^{\omega}(x):=\sum_{k\in\mathbb{Z}^{3}}\frac{g_{k}(\omega)}{\sqrt{1+|k|^{2s}}} \mathrm{e}^{ik\cdot x}, \tag{1.2}\]
where \((g_{k}(\omega))_{k\in\mathbb{Z}^{3}}\) is a sequence of independent standard complex Gaussian random variables.
Thanks to the Kakutani theorem we know that, at least for \(s\geq 10\), the measure \(\mu_{s}\) is absolutely continuous with respect to the Gaussian measure with covariance operator \((1-\Delta)^{-s}\) (see e.g. [36] for such an application of the Kakutani theorem). It is well-known that
\[\mathrm{supp}(\mu_{s})=H^{(s-\frac{3}{2})-}(\mathbb{T}^{3}):=\bigcap_{\sigma< s-\frac{3}{2}}H^{\sigma}(\mathbb{T}^{3})\]
and \(\mu_{s}(H^{s-\frac{3}{2}}(\mathbb{T}^{3}))=0\). Therefore, the larger \(s\) is, the more regular typical functions are with respect to \(\mu_{s}\). Thanks to [23], when \(s>\frac{5}{2}\), the flow \(\Phi(t)\) of (1.1) exists globally on \(H^{\sigma}(\mathbb{T}^{3})\) for any \(1\leq\sigma<s-\frac{3}{2}\). In particular, a unique global solution exists for any initial data on \(\mathrm{supp}(\mu_{s})\), \(s>\frac{5}{2}\).
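For the reader's convenience, the threshold \(s-\frac{3}{2}\) can be read off from a standard mean-square computation: by the independence of the \(g_{k}\),
\[\mathbb{E}\big{[}\|\phi^{\omega}\|_{H^{\sigma}(\mathbb{T}^{3})}^{2}\big{]}\sim\sum_{k\in\mathbb{Z}^{3}}\frac{1+|k|^{2\sigma}}{1+|k|^{2s}}<\infty\quad\Longleftrightarrow\quad 2s-2\sigma>3\quad\Longleftrightarrow\quad\sigma<s-\frac{3}{2},\]
while the sum diverges at the endpoint \(\sigma=s-\frac{3}{2}\). Our main result reads as follows.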
**Theorem 1.1**.: _Assume that \(s\geq 10\). Then \(\mu_{s}\) is quasi-invariant under \(\Phi(t)\). More precisely, for every \(t\in\mathbb{R}\), \((\Phi(t))_{*}\mu_{s}\ll\mu_{s}\ll(\Phi(t))_{*}\mu_{s}\), where \((\Phi(t))_{*}\mu_{s}\) is the push forward of \(\mu_{s}\) by \(\Phi(t)\)._
In the statement above, the notation \(\mu\ll\nu\) for two measures \(\mu,\nu\) temporarily means that \(\mu\) is absolutely continuous with respect to \(\nu\).
In the proof of Theorem 1.1 below, we refrain from using arithmetic arguments such as the divisor bound. Therefore the result of Theorem 1.1 remains valid for irrational tori with essentially the same proof.
In view of [1], it seems hopeless to construct a Gibbs measure for (1.1) (and any other energy critical problem). This gives a further motivation for studying quasi-invariant Gaussian measures for (1.1) or any other model for which the Gibbs measure construction fails.
The result of Theorem 1.1 remains true (with a simpler proof) for the cubic \(3d\) NLS
\[i\partial_{t}u+\Delta u=|u|^{2}u,\quad(t,x)\in\mathbb{R}\times\mathbb{T}^{3},\]
and also for the \(2d\) NLS with an arbitrary polynomial defocusing nonlinearity.
As already mentioned, Theorem 1.1 only gives qualitative quasi-invariance. It would be interesting to obtain quantitative bounds on the resulting Radon-Nikodym derivatives. Such quantitative bounds were obtained in some previous works on the subject, the most notable being the paper by Forlano-Tolomeo [16], where such quantitative information on the Radon-Nikodym derivative is used in order to perform the Bourgain globalization argument, i.e. quasi-invariance is used in order to construct the flow. The Forlano-Tolomeo argument is performed for a \(1d\) model and it would be very interesting to extend it to higher dimensions. In particular, it would be interesting to decide whether Theorem 1.1 holds in the supercritical regime, i.e. for some \(s<\frac{5}{2}\) (in this regime the existence of the flow should rely on a probabilistic
well-posedness in the spirit of [28]). But at this stage it is not even clear to us how to prove Theorem 1.1 in the natural subcritical range \(s>\frac{5}{2}\). By using the dispersive effects we can relax slightly the assumption \(s\geq 10\), but we would still be far from the natural subcritical assumption \(s>\frac{5}{2}\). In summary, much more remains to be understood concerning the transport of \(\mu_{s}\) under the NLS flow and its connection with the probabilistic well-posedness theory.
### Applications
In this section we present two simple corollaries of Theorem 1.1. Recall that the random field (1.2) is a stationary Gaussian process on \(\mathbb{T}^{3}\). More precisely, for each fixed \(x\in\mathbb{T}^{3}\), \(\phi^{\omega}(x)\) is a complex Gaussian random variable with the law \(\mathcal{N}_{\mathbb{C}}(0,\sigma^{2})\), where \(\sigma^{2}=\sum_{k\in\mathbb{Z}^{3}}\frac{1}{1+|k|^{2s}}\). Consequently, the probability density of \(\phi^{\omega}(x)\) is \(\frac{1}{\pi\sigma^{2}}\mathrm{e}^{-\frac{|y|^{2}}{\sigma^{2}}}dy\) on \(\mathbb{C}=\mathbb{R}^{2}\). In particular, the law of \(\phi^{\omega}(x)\) is absolutely continuous with respect to the Lebesgue measure. A natural question is to study the regularity of the law of the random variable \(u(t,x)\), evolved by (1.1) with the initial data \(\phi^{\omega}\). This type of problem has been intensively studied in the field of Stochastic analysis. For many classes of stochastic (partial) differential equations, the regularity of laws of solutions can be obtained via the Malliavin Calculus (see the book of Nualart [29] and references therein). The Malliavin Calculus was originally developed by P. Malliavin [27] to bring a new proof of Hormander's theorem for hypoelliptic operators. We do not intend to include any element of the Malliavin Calculus in this article, but rather to give a simple application of the quasi-invariance property to obtain the absolute continuity of the law of solutions of NLS with random initial data, which can be viewed as a pointwise version of the quasi-invariance property of the NLS equation displayed by Theorem 1.1.
**Corollary 1.2**.: _Assume that \(s\geq 10\) and fix \((t_{0},x_{0})\in\mathbb{R}\times\mathbb{T}^{3}\). Let \(u(t,x,\omega)\) be the solution of (1.1) with data (1.2). Then the law of the complex random variable \(\omega\mapsto u(t_{0},x_{0},\omega)\) has a density with respect to the Lebesgue measure on \(\mathbb{C}\)._
In order to prove Corollary 1.2, we observe that we need to study the composition of \(\Phi(t)\) and the evaluation map \(u\mapsto u(t_{0},x_{0})\). Then it suffices to apply Theorem 1.1 for \(\Phi(t)\) and the observation before the statement of Corollary 1.2 for the evaluation map. It is likely that the Malliavin Calculus can be useful to get regularity properties of the densities appearing in the statement of Corollary 1.2. In Corollary 1.2, one may replace the evaluation map by other finite dimensional projections. For instance, one may show that for every \(k\in\mathbb{Z}^{3}\), the law of the Fourier coefficient \(\widehat{u}(t,k,\omega)\) has a density with respect to the Lebesgue measure on \(\mathbb{C}\).
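To spell out the composition argument sketched above: if \(A\subset\mathbb{C}\) is a Borel set of zero Lebesgue measure, set \(E_{A}:=\{v\in H^{\sigma}(\mathbb{T}^{3}):v(x_{0})\in A\}\). Since \(u(t_{0})=\Phi(t_{0})\phi^{\omega}\) and \(\phi^{\omega}\) has law \(\mu_{s}\),
\[\mathbb{P}\big{(}u(t_{0},x_{0},\omega)\in A\big{)}=\mu_{s}\big{(}\Phi(-t_{0})(E_{A})\big{)}.\]
The absolute continuity of the law of \(\phi^{\omega}(x_{0})\) gives \(\mu_{s}(E_{A})=0\), and Theorem 1.1 then yields \(\mu_{s}(\Phi(-t_{0})(E_{A}))=0\). Hence the law of \(u(t_{0},x_{0},\omega)\) charges no Lebesgue-null set, which is the claim of Corollary 1.2.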
Let us also observe that the Malliavin Calculus methods can be applied to prove quasi-invariance for maps from infinite dimensional Gaussian spaces to finite dimensional spaces, while in Theorem 1.1 we deal with the more complex situation of a map from an infinite dimensional Gaussian space to itself.
Another simple consequence of Theorem 1.1 is the following \(L^{1}\)-stability result.
**Corollary 1.3**.: _Assume that \(s\geq 10\). Let \(f_{1},f_{2}\in L^{1}(d\mu_{s})\) and \(\Phi(t)\) the flow of (1.1). Then for any \(t\in\mathbb{R}\), the transports of measures \(f_{1}(u)d\mu_{s}(u)\), \(f_{2}(u)d\mu_{s}(u)\) by \(\Phi(t)\) are given by
\(F_{1}(t,u)d\mu_{s}(u)\) and \(F_{2}(t,u)d\mu_{s}(u)\) respectively, for suitable \(F_{1}(t,\cdot),F_{2}(t,\cdot)\in L^{1}(d\mu_{s})\). Moreover,_
\[\|F_{1}(t,\cdot)-F_{2}(t,\cdot)\|_{L^{1}(d\mu_{s})}=\|f_{1}-f_{2}\|_{L^{1}(d\mu _{s})}.\]
One may prove Corollary 1.3 by performing the computations from [40]. A more direct proof can be given by observing that \(\Phi(t)\) is a measurable map, and therefore the total variation distance between \(F_{1}(t,u)d\mu_{s}(u)\) and \(F_{2}(t,u)d\mu_{s}(u)\) is at most the total variation distance between \(f_{1}(u)d\mu_{s}(u)\) and \(f_{2}(u)d\mu_{s}(u)\). This implies that
\[\|F_{1}(t,\cdot)-F_{2}(t,\cdot)\|_{L^{1}(d\mu_{s})}\leq\|f_{1}-f_{2}\|_{L^{1}( d\mu_{s})}.\]
Using the reversibility of the NLS flow we get the reverse inequality.
The remaining part of this paper is devoted to the proof of Theorem 1.1. In Section 2 we perform the normal form reduction, define accordingly suitable weighted Gaussian measures, and state the key energy estimates. In Section 3 we perform the soft analysis leading from the energy estimates to the quasi-invariance result stated in Theorem 1.1. In Section 4 we introduce our basic counting tool and the Wiener chaos estimate useful for our purposes. In Section 5 we decompose the divergence of the Hamiltonian vector field with respect to the weighted Gaussian measures into several pieces according to the possible pairings. In Section 6 we estimate the contributions of the first generation. Section 7 deals with the most singular contribution, resulting from pairings between different generations. This is the most delicate part of our analysis, containing the remarkable algebraic cancellations mentioned above. In Section 8 we treat the remainder terms in which the singular pairings are not present. Finally, in an Appendix we prove some approximation results for (1.1), crucially exploited in Section 3. Let us emphasize that, because of the critical nature of the Cauchy problem for (1.1), the approximation argument is much more delicate compared to the previous literature on quasi-invariant Gaussian measures for Hamiltonian PDE's.
**Acknowledgments.** This work is partially supported by the ANR project Smooth ANR-22-CE40-0017.
## 2. Modified energy and the weighted Gaussian measure
### An approximated system
Fix a radial cutoff function \(\chi\in C_{c}^{\infty}(\mathbb{R}^{3})\) such that \(\chi\equiv 1\) on \([-\frac{1}{2},\frac{1}{2}]\) and \(\operatorname{supp}(\chi)\subset\{|x|<1\}\). For \(N\in\mathbb{N}\), set \(\chi_{N}(\cdot):=\chi(N^{-1}\cdot)\) and \(S_{N}=\chi_{N}(\sqrt{-\Delta})\) the smooth frequency truncation and \(\Pi_{N}=\mathbf{1}_{\sqrt{-\Delta}\leq N}\) the sharp frequency truncation. By definition,
\[S_{N}\Pi_{N}=\Pi_{N}S_{N}=S_{N},\quad S_{N}^{*}=S_{N}.\]
The advantage of using the operator \(S_{N}\) is that \(S_{N}\) is uniformly bounded on \(L^{p}(\mathbb{T}^{3})\) for \(1<p<\infty\), which is crucial when taking the limit of the approximated system in the energy critical
case. Similar to the situation in [8], we consider the following smoothly approximated NLS equation
\[\begin{cases}&i\partial_{t}u_{N}+\Delta u_{N}=S_{N}(|S_{N}u_{N}|^{4}S_{N}u_{N}), \\ &u_{N}|_{t=0}=u_{0}\in H^{\sigma}(\mathbb{T}^{3}).\end{cases} \tag{2.1}\]
As in [8], the solution of (2.1) can be decomposed into two components on \(\mathcal{E}_{N}:=\Pi_{N}L^{2}(\mathbb{T}^{3})\) and \(\mathcal{E}_{N}^{\perp}:=(\operatorname{Id}-\Pi_{N})L^{2}(\mathbb{T}^{3})\). This naturally leads to a splitting of \(\mu_{s}\) as \(d\mu_{s}=d\mu_{s,N}\otimes d\mu_{s,N}^{\perp}\) for every \(N\in\mathbb{N}\), where \(\mu_{s,N}\) is a measure on \(\mathcal{E}_{N}\) while \(\mu_{s,N}^{\perp}\) is a measure on \(\mathcal{E}_{N}^{\perp}\). The finite-dimensional part of (2.1) on \(\mathcal{E}_{N}\) is a Hamiltonian system (see [8, Lemma 8.1]), while the infinite-dimensional part is the linear evolution \(\mathrm{e}^{it\Delta}\). Thanks to the Cauchy-Lipschitz theorem and the defocusing nature, the solution of (2.1) is global and we denote by \(\Phi_{N}(t)\) its flow, which can be factorized as \((\widetilde{\Phi}_{N}(t),\mathrm{e}^{it\Delta})\) on \(\mathcal{E}_{N}\times\mathcal{E}_{N}^{\perp}\), where \(\widetilde{\Phi}_{N}(t)\) is the restriction of \(\Phi_{N}(t)\) on the finite-dimensional space \(\mathcal{E}_{N}\), which is a Hamiltonian flow on \(\mathcal{E}_{N}\). By convention, we denote \(\Phi(t)\) by \(\Phi_{\infty}(t)\).
### Poincare-Dulac normal form and the modified Energy
To construct suitable weighted measures for our study, we must identify a modified energy functional. Consider a smooth solution \(u_{N}(t)\) of (2.1). We introduce a new unknown, factored by the linear flow:
\[v(t)=\mathrm{e}^{-it\Delta}u_{N}(t).\]
Expanding \(v(t)\) in the Fourier series, we have:
\[v(t,x)=\sum_{k\in\mathbb{Z}^{3}}v_{k}(t)\mathrm{e}^{ik\cdot x},\]
from which it follows that \(v_{k}(t)\) satisfies the equation:
\[i\partial_{t}v_{k}(t)=\chi_{N}(k)\sum_{k_{1}-k_{2}+k_{3}-k_{4}+k_{5}=k}\mathrm{ e}^{-it\Omega(\vec{k})}\cdot\left(\prod_{j=1}^{5}\chi_{N}(k_{j})\right)\cdot v _{k_{1}}(t)\overline{v}_{k_{2}}(t)\cdots v_{k_{5}}(t), \tag{2.2}\]
where
\[\Omega(\vec{k})=\sum_{j=1}^{5}(-1)^{j-1}|k_{j}|^{2}-|k|^{2}\]
is the resonant function.
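For the reader's convenience, let us record where the oscillating factor in (2.2) comes from. Writing \(\widehat{u_{N}}(t,k)=\mathrm{e}^{-it|k|^{2}}v_{k}(t)\), each Fourier factor of the quintic nonlinearity carries its own linear phase, and on the constraint \(k_{1}-k_{2}+k_{3}-k_{4}+k_{5}=k\) these phases combine into
\[\mathrm{e}^{it|k|^{2}}\,\widehat{u_{N}}(k_{1})\overline{\widehat{u_{N}}}(k_{2})\widehat{u_{N}}(k_{3})\overline{\widehat{u_{N}}}(k_{4})\widehat{u_{N}}(k_{5})=\mathrm{e}^{it\big{(}|k|^{2}-\sum_{j=1}^{5}(-1)^{j-1}|k_{j}|^{2}\big{)}}\,v_{k_{1}}\overline{v}_{k_{2}}v_{k_{3}}\overline{v}_{k_{4}}v_{k_{5}}=\mathrm{e}^{-it\Omega(\vec{k})}\,v_{k_{1}}\overline{v}_{k_{2}}v_{k_{3}}\overline{v}_{k_{4}}v_{k_{5}}.\]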
To construct the modified energy, it is more convenient to use an equivalent Sobolev norm for \(s\geq 0\):
\[|\!|\!|f|\!|\!|_{H^{s}(\mathbb{T}^{3})}^{2}:=\sum_{k\in\mathbb{Z}^{3}}(1+|k|^{2s})|\widehat{f}(k)|^{2}.\]
A simple computation using symmetry of indices yields
\[\frac{1}{2}\frac{d}{dt}|\!|\!|v(t)|\!|\!|_{H^{s}}^{2}=-\,\frac{1}{6}\operatorname{Im}\sum_{k_{1}-k_{2}+\cdots-k_{6}=0}\psi_{2s}(\vec{k})\mathrm{e}^{-it\Omega(\vec{k})}\Big{(}\prod_{j=1}^{6}\chi_{N}(k_{j})\Big{)}v_{k_{1}}\overline{v}_{k_{2}}\cdots\overline{v}_{k_{6}}, \tag{2.3}\]
where in the above expression, we abuse the notation slightly and denote
\[\psi_{2s}(\vec{k})=\sum_{j=1}^{6}(-1)^{j-1}|k_{j}|^{2s},\quad\Omega(\vec{k})=\sum _{j=1}^{6}(-1)^{j-1}|k_{j}|^{2}.\]
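Let us briefly indicate the computation behind (2.3); the only point is the bookkeeping of signs. Since
\[\frac{1}{2}\frac{d}{dt}\big{(}(1+|k|^{2s})|v_{k}|^{2}\big{)}=(1+|k|^{2s})\operatorname{Re}\big{(}\overline{v}_{k}\,\partial_{t}v_{k}\big{)}=(1+|k|^{2s})\operatorname{Im}\big{(}\overline{v}_{k}\,i\partial_{t}v_{k}\big{)},\]
inserting (2.2) produces a six-index sum over \(k_{1}-k_{2}+\cdots-k_{6}=0\) in which one index is distinguished and carries the weight \(1+|k_{j}|^{2s}\). Symmetrizing over the six possible distinguished slots, the conjugated slots contribute with the opposite sign under \(\operatorname{Im}\), so the weights assemble into the alternating symbol \(\psi_{2s}(\vec{k})\), producing the overall prefactor \(-\frac{1}{6}\) in (2.3), while the constant parts cancel because \(\sum_{j=1}^{6}(-1)^{j-1}=0\).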
The basic estimate for \(\psi_{2s}(\vec{k})\) is
\[|\psi_{2s}(\vec{k})|\lesssim|k_{(1)}|^{2s-2}(|k_{(3)}|^{2}+|\Omega(\vec{k})|),\]
where \(|k_{(1)}|\geq|k_{(2)}|\geq\cdots\geq|k_{(6)}|\) is a rearrangement of \(k_{1},\cdots,k_{6}\) and \(k_{1}-k_{2}+\cdots-k_{6}=0\) (see Lemma 4.1 below). Note that each \(v_{k_{j}}\) will be accompanied by \(\chi_{N}(k_{j})\); since the cutoff \(\chi_{N}\) plays no role in our analysis, we will simply write \(w_{k_{j}}:=\chi_{N}(k_{j})v_{k_{j}}\) in the sequel. Note that
\[i\partial_{t}w_{k}=\chi_{N}(k)^{2}\sum_{k_{1}-k_{2}+k_{3}-k_{4}+k_{5}=k}\mathrm{ e}^{-it\Omega(\vec{k})}\cdot w_{k_{1}}\overline{w}_{k_{2}}\cdots w_{k_{5}}. \tag{2.4}\]
In order to truncate the level set of the resonant function, we further introduce the symmetric factor
\[\lambda(\vec{k})=\Big{(}\sum_{j=1}^{6}|k_{j}|^{2}\Big{)}^{\frac{1}{2}}.\]
As the resonant function \(\Omega(\vec{k})\) takes integer values1, we will decompose the set of indices \(k_{1},\cdots,k_{6}\) according to the level set of \(\Omega(\vec{k})\). In order to perform the differentiations by parts in time, we further write
Footnote 1: This fact is not essential for our result or its proof; we nevertheless keep working on the rational torus for convenience.
\[\begin{split}\frac{1}{2}\frac{d}{dt}|\!|\!|v(t)|\!|\!|_{H^{s}}^{2}=&-\frac{1}{6}\operatorname{Im}\sum_{k_{1}-k_{2}+\cdots-k_{6}=0}\chi\Big{(}\frac{\Omega(\vec{k})}{\lambda(\vec{k})^{\delta_{0}}}\Big{)}\psi_{2s}(\vec{k})\mathrm{e}^{-it\Omega(\vec{k})}w_{k_{1}}\overline{w}_{k_{2}}\cdots\overline{w}_{k_{6}}\\ &-\frac{1}{6}\operatorname{Im}\sum_{k_{1}-k_{2}+\cdots-k_{6}=0}\Big{(}1-\chi\Big{(}\frac{\Omega(\vec{k})}{\lambda(\vec{k})^{\delta_{0}}}\Big{)}\Big{)}\frac{\psi_{2s}(\vec{k})}{-i\Omega(\vec{k})}\partial_{t}\Big{(}\mathrm{e}^{-it\Omega(\vec{k})}w_{k_{1}}\overline{w}_{k_{2}}\cdots\overline{w}_{k_{6}}\Big{)}\\ &+\frac{1}{6}\operatorname{Im}\sum_{k_{1}-k_{2}+\cdots-k_{6}=0}\Big{(}1-\chi\Big{(}\frac{\Omega(\vec{k})}{\lambda(\vec{k})^{\delta_{0}}}\Big{)}\Big{)}\frac{\psi_{2s}(\vec{k})}{-i\Omega(\vec{k})}\mathrm{e}^{-it\Omega(\vec{k})}\partial_{t}(w_{k_{1}}\overline{w}_{k_{2}}\cdots\overline{w}_{k_{6}}),\end{split}\tag{2.5}\]
where \(0<\delta_{0}<\frac{2}{3}\) is close to \(\frac{2}{3}\). Motivated by the above formula, we define the modified energy (with \(w=\chi_{N}(\sqrt{-\Delta})v\))
\[\mathcal{E}_{s,t}(v):=\frac{1}{2}|\!|\!|v|\!|\!|_{H^{s}}^{2}+\frac{1}{6}\operatorname{Im}\sum_{k_{1}-k_{2}+\cdots-k_{6}=0}\Big{(}1-\chi\Big{(}\frac{\Omega(\vec{k})}{\lambda(\vec{k})^{\delta_{0}}}\Big{)}\Big{)}\frac{\psi_{2s}(\vec{k})}{-i\Omega(\vec{k})}\,\mathrm{e}^{-it\Omega(\vec{k})}\,w_{k_{1}}\overline{w}_{k_{2}}\cdots\overline{w}_{k_{6}}. \tag{2.6}\]
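The second and third lines of (2.5) come from the elementary differentiation by parts in time: with \(W:=w_{k_{1}}\overline{w}_{k_{2}}\cdots\overline{w}_{k_{6}}\),
\[\mathrm{e}^{-it\Omega(\vec{k})}\,W=\frac{1}{-i\Omega(\vec{k})}\,\partial_{t}\big{(}\mathrm{e}^{-it\Omega(\vec{k})}W\big{)}-\frac{1}{-i\Omega(\vec{k})}\,\mathrm{e}^{-it\Omega(\vec{k})}\,\partial_{t}W,\]
which is legitimate on the region \(\Omega(\vec{k})\neq 0\) selected by the factor \(1-\chi(\Omega(\vec{k})/\lambda(\vec{k})^{\delta_{0}})\). Absorbing the total time derivative into the left-hand side of (2.5) is precisely what produces the correction term in the definition (2.6) of \(\mathcal{E}_{s,t}(v)\).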
Changing back to the variable \(u\), the modified energy is
\[\mathcal{E}_{s,t}(v)=E_{s,N}(u):=\frac{1}{2}|\!|\!|u|\!|\!|_{H^{s}(\mathbb{T}^{3})}^{2}+R_{s,N}(u), \tag{2.7}\]
where
\[R_{s,N}(u):=\frac{1}{6}\operatorname{Im}\sum_{k_{1}-k_{2}+\cdots-k_{6}=0} \Big{(}1-\chi\Big{(}\frac{\Omega(\vec{k})}{\lambda(\vec{k})^{\delta_{0}}} \Big{)}\Big{)}\frac{\psi_{2s}(\vec{k})}{-i\Omega(\vec{k})}\cdot\Big{(}\prod_{ j=1}^{6}\chi_{N}(k_{j})\Big{)}\cdot\widehat{u}_{k_{1}}\overline{\widehat{u}}_{k_{2}} \cdots\overline{\widehat{u}}_{k_{6}}. \tag{2.8}\]
We define \(R_{s}(u)\) as \(R_{s,N}(u)\) without \(\prod_{j=1}^{6}\chi_{N}(k_{j})\). Sometimes \(R_{s}(u)\) will be denoted by \(R_{s,\infty}(u)\). We similarly define \(E_{s}(u)\) which may also be denoted by \(E_{s,\infty}(u)\).
The modified energy (2.7) will play a crucial role in our analysis. We refer to [42] for a survey on the use of modified energies in the analysis of dispersive PDE's.
Then from (2.5), the equation (2.2) satisfied by \(v_{k}(t)\), and the symmetry of indices, we have (with \(w_{k}=\chi_{N}(k)v_{k}\))
\[\frac{d}{dt}E_{s,N}(u_{N}(t))=\frac{d}{dt}\mathcal{E}_{s,t}(v) =-\frac{1}{6}\operatorname{Im}\sum_{k_{1}-k_{2}+\cdots-k_{6}=0} \chi\Big{(}\frac{\Omega(\vec{k})}{\lambda(\vec{k})^{\delta_{0}}}\Big{)}\psi_{ 2s}(\vec{k})e^{-it\Omega(\vec{k})}w_{k_{1}}\overline{w}_{k_{2}}\cdots\overline {w}_{k_{6}}\] \[+\frac{1}{2}\operatorname{Im}\sum_{k_{1}-k_{2}+\cdots-k_{6}=0} \Big{(}1-\chi\Big{(}\frac{\Omega(\vec{k})}{\lambda(\vec{k})^{\delta_{0}}} \Big{)}\Big{)}\frac{\psi_{2s}(\vec{k})}{\Omega(\vec{k})}\] \[\qquad\times\sum_{k_{1}=p_{1}-p_{2}+\cdots+p_{5}}e^{-it\big{(} \Omega(\vec{k})+\Omega(\vec{p})\big{)}}\chi_{N}(k_{1})^{2}w_{p_{1}}\overline{ w}_{p_{2}}\cdots w_{p_{5}}\overline{w}_{k_{2}}\cdots\overline{w}_{k_{6}}\] \[-\frac{1}{2}\operatorname{Im}\sum_{k_{1}-k_{2}+\cdots-k_{6}=0} \Big{(}1-\chi\Big{(}\frac{\Omega(\vec{k})}{\lambda(\vec{k})^{\delta_{0}}} \Big{)}\Big{)}\frac{\psi_{2s}(\vec{k})}{\Omega(\vec{k})}\] \[\qquad\times\sum_{k_{2}=q_{1}-q_{2}+\cdots+q_{5}}e^{-it\big{(} \Omega(\vec{k})-\Omega(\vec{q})\big{)}}\chi_{N}(k_{2})^{2}w_{k_{1}}\overline{ w}_{q_{1}}\cdots\overline{w}_{q_{5}}w_{k_{3}}\cdots\overline{w}_{k_{6}}, \tag{2.9}\]
where
\[\Omega(\vec{p})=\sum_{j=1}^{5}(-1)^{j-1}|p_{j}|^{2}-|k_{1}|^{2},\quad\Omega( \vec{q})=\sum_{j=1}^{5}(-1)^{j-1}|q_{j}|^{2}-|k_{2}|^{2}.\]
### The weighted measure
Using the modified energy, we define the weighted Gaussian measure for given \(R>1\)
\[d\rho_{s,R,N}(u)=\chi_{R}(\|u\|_{H^{\sigma}})\cdot\mathrm{e}^{-R_{s,N}(u)}d\mu _{s,N}(u),\quad d\overline{\rho}_{s,R,N}(u):=d\rho_{s,R,N}\otimes d\mu_{s,N}^{ \perp}, \tag{2.10}\]
where the functional \(R_{s,N}(u)\) is defined by (2.8), \(\chi_{R}(\cdot)=\chi(R^{-1}\cdot)\) is the cutoff.
**Proposition 2.1** (Local existence of the weighted measure).: _Let \(s\geq 20,R\geq 1\), \(\sigma<s-\frac{3}{2}\), close to \(s-\frac{3}{2}\) and \(N\in\mathbb{N}\). Then for any \(p\in[1,\infty)\), there exists a uniform constant \(C(p,s,R)>0\), such that_
\[\left\|\chi_{R}(\|u\|_{H^{\sigma}})\cdot\mathrm{e}^{|R_{s,N}(u)|}\right\|_{L^{p }(d\mu_{s})}\leq C(p,s,R).\]
_Moreover, for fixed \(R>0\),_
\[\lim_{N\to\infty}\left\|\chi_{R}(\|u\|_{H^{\sigma}})\mathrm{e}^{-R_{s,N}(u)}- \chi_{R}(\|u\|_{H^{\sigma}})\mathrm{e}^{-R_{s}(u)}\right\|_{L^{p}(d\mu_{s})}=0.\]
Recall that \(\Phi_{N}(t)\) is the flow of (2.1) while \(\Phi_{\infty}(t)=\Phi(t)\) is the flow of (1.1). Another key proposition is the following weighted energy estimate:
**Proposition 2.2** (Weighted energy estimate).: _Let \(s\geq 10,R\geq 1\), \(\sigma<s-\frac{3}{2}\), close to \(s-\frac{3}{2}\) and \(N\in\mathbb{N}\cup\{\infty\}\). Set_
\[Q_{s,N}(u)=\frac{d}{dt}E_{s,N}(\Phi_{N}(t)u)|_{t=0}\]
_and denote by \(B_{R}^{H^{\sigma}}\) the centered ball in \(H^{\sigma}(\mathbb{T}^{3})\) of radius \(R\). Then there exist uniform constants \(C(s,R)>0\) and \(\beta\in(0,1)\), such that for all \(p\in[2,\infty)\) and \(N\in\mathbb{N}\cup\{\infty\}\),_
\[\|\mathbf{1}_{B_{R}^{H^{\sigma}}}(u)\cdot Q_{s,N}(u)\|_{L^{p}(d\mu_{s})}\leq C (s,R)p^{\beta}.\]
_Thanks to Proposition 2.1, we have also for all \(N\in\mathbb{N}\cup\{\infty\},p\in[1,\infty)\),_
\[\|\mathbf{1}_{B_{R}^{H^{\sigma}}}(u)\cdot Q_{s,N}(u)\|_{L^{p}(\overline{\rho}_ {s,R,N})}\leq C(s,R)p^{\beta}.\]
The proof of the above two propositions will occupy the main part of the article. To prove the quasi-invariance of the full system, we need to pass to the limit \(N\to\infty\) in the approximated equation (2.1). This will be done in the next section.
## 3. Proof of the quasi-invariance assuming energy estimates
In this section we prove Theorem 1.1, assuming Proposition 2.1 and Proposition 2.2.
### Approximation theory for the energy-critical NLS
**Proposition 3.1**.: _Assume that \(\sigma\geq 1\). There exists a constant \(\Lambda(R,T)>0\), depending only on \(T>0\), \(R>0\) and \(\sigma\geq 1\), such that for any \(u\in B_{R}^{H^{\sigma}}\),_
\[\sup_{|t|\leq T}\|\Phi(t)u\|_{H^{\sigma}}+\sup_{|t|\leq T}\|\Phi_{N}(t)u\|_{H^ {\sigma}}\leq\Lambda(R,T),\quad\forall\,N\in\mathbb{N}.\]
**Proposition 3.2**.: _Assume that \(\sigma\geq 1\). Let \(K\) be a compact subset of \(H^{\sigma}(\mathbb{T}^{3})\) and \(T>0\). Then uniformly in \(|t|\leq T\) and \(u\in K\),_
\[\lim_{N\to\infty}\|\Phi_{N}(t)u-\Phi(t)u\|_{H^{\sigma}}=0.\]
Observe that since \(\Phi_{N}(t)\) and \(\Phi(t)\) are continuous, we have that for any \(|t|\leq T\) and \(N\in\mathbb{N}\), \(\Phi_{N}(t)(K)\) and \(\Phi(t)(K)\) are also compact in \(H^{\sigma}(\mathbb{T}^{3})\). The proofs of Proposition 3.1 and Proposition 3.2 will be given in the Appendix.
### Proof of quasi-invariance
First, we prove:
**Lemma 3.3**.: _Let \(T\geq 1\). Let \(A\subset B_{R}^{H^{\sigma}}\) be a Borel measurable set. Then there exist \(\epsilon_{0}>0\) and \(C_{s,R,T}>0\), such that for all \(N\in\mathbb{N}\) and \(|t|\leq T\), \(\mu_{s}(\Phi_{N}(t)(A))\leq C_{s,R,T}\cdot\mu_{s}(A)^{\frac{1-\epsilon_{0}}{4}}\)._
Proof.: Let \(\Lambda(R,T)>0\) be the constant in Proposition 3.1, such that for all \(R>0,N\in\mathbb{N}\cup\{\infty\}\),
\[\Phi_{N}(t)(B_{R}^{H^{\sigma}})\subset B_{\Lambda(R,T)}^{H^{\sigma}},\quad|t| \leq T.\]
Denote \(R_{1}:=\Lambda(\Lambda(R,T),T)\), and we consider the weighted measure
\[d\overline{\rho}_{s,R_{1},N}(u)= \rho_{s,R_{1},N}(u)\otimes d\mu_{s,N}^{\perp}\] \[= \chi_{R_{1}}(\|u\|_{H^{\sigma}})\frac{1}{\mathcal{Z}_{N}}\mathrm{ e}^{-E_{s,N}(u)}\Big{(}\prod_{|k|\leq N}d\widehat{u}_{k}\Big{)}\otimes d\mu_{s,N}^ {\perp},\]
where \(\mathcal{Z}_{N}>0\) is the normalizing constant appearing in the finite-dimensional truncation of the Gaussian measure
\[d\mu_{s,N}(u)=\frac{1}{\mathcal{Z}_{N}}\mathrm{e}^{-\frac{1}{2}\sum_{|k|\leq N }(1+|k|^{2s})|\widehat{u}_{k}|^{2}}\Big{(}\prod_{|k|\leq N}d\widehat{u}_{k} \Big{)}.\]
For \(A\subset B_{R}^{H^{\sigma}}\), from Proposition 3.1, for any \(|t_{1}|,|t_{2}|\leq T\) and \(N\in\mathbb{N}\),
\[\Phi_{N}(t_{2})\circ\Phi_{N}(t_{1})(A)\subset B_{R_{1}}^{H^{\sigma}}.\]
In particular, for any \(u\in A\), \(|t|\leq 2T\), \(\|\Phi_{N}(t)u\|_{H^{\sigma}}\leq R_{1}\). Now for \(|t_{0}|\leq T\), \(|t|\leq 1\), using that \(\chi_{R_{1}}(\|\Phi_{N}(t)u\|_{H^{\sigma}})\equiv 1\) for \(u\in A\), as in [35, 36, 38] we can obtain the following change of variable formula
\[\overline{\rho}_{s,R_{1},N}(\Phi_{N}(t_{0}+t)(A))=\int_{A}\frac{1}{\mathcal{Z }_{N}}\mathrm{e}^{-E_{s,N}(\Pi_{N}\Phi_{N}(t_{0}+t)u)}\prod_{|k|\leq N}d \widehat{u}_{k}\,d\mu_{s,N}^{\perp}(u).\]
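Although we refer to [35, 36, 38] for the details, let us recall the two ingredients behind this change of variable formula. The finite-dimensional component \(\widetilde{\Phi}_{N}(t)\) is a Hamiltonian flow on \(\mathcal{E}_{N}\) and therefore preserves the Lebesgue measure \(\prod_{|k|\leq N}d\widehat{u}_{k}\) by Liouville's theorem, while the linear flow \(\mathrm{e}^{it\Delta}\) rotates each Fourier mode, \(\widehat{u}_{k}\mapsto\mathrm{e}^{-it|k|^{2}}\widehat{u}_{k}\), and hence leaves the Gaussian measure \(\mu_{s,N}^{\perp}\) invariant:
\[(\widetilde{\Phi}_{N}(t))_{*}\Big{(}\prod_{|k|\leq N}d\widehat{u}_{k}\Big{)}=\prod_{|k|\leq N}d\widehat{u}_{k},\qquad(\mathrm{e}^{it\Delta})_{*}\,\mu_{s,N}^{\perp}=\mu_{s,N}^{\perp}.\]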
Observe that
\[Q_{s,N}(u)=\frac{d}{dt}E_{s,N}(\Phi_{N}(t)u)|_{t=0}=\frac{d}{dt}E_{s,N}(\Pi_{N }\Phi_{N}(t)u)|_{t=0}\,.\]
Taking the time derivative of the above equality and evaluating it at \(t=0\), we obtain the identity
\[\frac{d}{dt}\overline{\rho}_{s,R_{1},N}(\Phi_{N}(t_{0}+t)(A))|_{t =0}= -\int_{A}\frac{1}{\mathcal{Z}_{N}}Q_{s,N}(\Phi_{N}(t_{0})u)\, \mathrm{e}^{-E_{s,N}(\Pi_{N}\Phi_{N}(t_{0})u)}\prod_{|k|\leq N}d\widehat{u}_{k }\,d\mu_{s,N}^{\perp}(u)\] \[= -\int_{\Phi_{N}(t_{0})(A)}\frac{1}{\mathcal{Z}_{N}}Q_{s,N}(u)\, \mathrm{e}^{-E_{s,N}(\Pi_{N}u)}\prod_{|k|\leq N}d\widehat{u}_{k}\,d\mu_{s,N}^ {\perp}(u),\]
where we again used the change of variable formula. Since \(\chi_{R_{1}}(\|u\|_{H^{\sigma}})=1\) for \(u\in\Phi_{N}(t_{0})(A)\), we obtain the inequality
\[\Big{|}\frac{d}{dt}\overline{\rho}_{s,R_{1},N}(\Phi_{N}(t)(A))|_{t =t_{0}}\Big{|}\] \[\leq \int_{\Phi_{N}(t_{0})(A)}\chi_{R_{1}}(\|u\|_{H^{\sigma}})\frac{1}{ \mathcal{Z}_{N}}|Q_{s,N}(u)|\,\mathrm{e}^{-E_{s,N}(\Pi_{N}u)}\prod_{|k|\leq N} d\widehat{u}_{k}\,d\mu_{s,N}^{\perp}(u)\] \[= \int_{\Phi_{N}(t_{0})(A)}|Q_{s,N}(u)|d\overline{\rho}_{s,R_{1},N} (u)\] \[\leq \|Q_{s,N}(u)\|_{L^{p}(d\overline{\rho}_{s,R_{1},N})}\cdot \overline{\rho}_{s,R_{1},N}(\Phi_{N}(t_{0})(A))^{1-\frac{1}{p}}.\]
Thanks to the last assertion of Proposition 2.2, the function
\[F(t):=\overline{\rho}_{s,R_{1},N}(\Phi_{N}(t)(A)),\]
satisfies the inequality
\[F^{\prime}(t)\leq C_{s,R}\cdot p^{\beta}F(t)^{1-\frac{1}{p}},\quad\forall|t|\leq T,\quad p<\infty.\]
Integrating the differential inequality above (equivalently, noting that \(\frac{d}{dt}F(t)^{\frac{1}{p}}\leq C_{s,R}\,p^{\beta-1}\)), we obtain that
\[F(t)\leq\big{(}F(0)^{\frac{1}{p}}+C_{s,R}\cdot p^{-(1-\beta)}t\big{)}^{p}\leq F (0)\mathrm{e}^{C_{R,s}tp^{\beta}F(0)^{-\frac{1}{p}}}.\]
Without loss of generality, we assume that \(F(0)=\overline{\rho}_{s,R_{1},N}(A)>0\). By optimizing the choice
\[p=2+\log\Big{(}\frac{1}{F(0)}\Big{)},\]
we conclude that there exists \(\epsilon_{0}\in(0,1)\), such that
\[F(t)\leq C_{R,s,T}F(0)^{1-\epsilon_{0}},\quad\forall|t|\leq T,\]
namely
\[\overline{\rho}_{s,R_{1},N}(\Phi_{N}(t)(A))\leq C_{R_{1},s,T}\overline{\rho}_ {s,R_{1},N}(A)^{1-\epsilon_{0}},\quad\forall|t|\leq T.\]
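Let us detail the optimization step, which is elementary but recorded here for completeness. With \(p=2+\log\frac{1}{F(0)}\) one has \(F(0)^{-\frac{1}{p}}=\mathrm{e}^{\frac{1}{p}\log\frac{1}{F(0)}}\leq\mathrm{e}\), so that
\[F(t)\leq F(0)\,\mathrm{e}^{C_{R,s}\,T\,\mathrm{e}\,\big{(}2+\log\frac{1}{F(0)}\big{)}^{\beta}},\]
and since \(\beta<1\) we may bound \(\big{(}2+\log\frac{1}{F(0)}\big{)}^{\beta}\leq\delta\log\frac{1}{F(0)}+C_{\delta}\) for any \(\delta>0\); choosing \(\delta\) small enough (depending on \(R,s,T\)) produces the factor \(F(0)^{-\epsilon_{0}}\) with \(\epsilon_{0}\in(0,1)\) and the constant \(C_{R,s,T}\).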
Finally, as \(\Phi_{N}(t)(A)\subset B_{R_{1}/2}^{H^{\sigma}}\),
\[\mu_{s}(\Phi_{N}(t)(A))=\int_{\Phi_{N}(t)(A)}\chi_{R_{1}}(\|u\|_{H^{\sigma}})\cdot\mathrm{e}^{R_{s,N}(u)}d\overline{\rho}_{s,R_{1},N}(u).\]
By Cauchy-Schwarz and the \(L^{2}\)-integrability of \(\chi_{R_{1}}(\|u\|_{H^{\sigma}})\cdot\mathrm{e}^{R_{s,N}(u)}\) with respect to \(d\mu_{s}\) (Proposition 2.1),
\[\mu_{s}(\Phi_{N}(t)(A))\leq \|\chi_{R_{1}}(\|u\|_{H^{\sigma}})\mathrm{e}^{R_{s,N}(u)}\|_{L^{2}(d\overline{\rho}_{s,R_{1},N})}\,\overline{\rho}_{s,R_{1},N}(\Phi_{N}(t)(A))^{\frac{1}{2}}\] \[\leq \|\chi_{R_{1}}(\|u\|_{H^{\sigma}})\mathrm{e}^{|R_{s,N}(u)|}\|_{L^{1}(d\mu_{s})}^{\frac{1}{2}}\cdot\sqrt{C_{R_{1},s,T}}\,\overline{\rho}_{s,R_{1},N}(A)^{\frac{1-\epsilon_{0}}{2}}\] \[\leq C_{R_{1},s,T}^{\prime}\,\overline{\rho}_{s,R_{1},N}(A)^{\frac{1-\epsilon_{0}}{2}}. \tag{3.1}\]
Again, since \(A\subset B_{R}^{H^{\sigma}}\subset B_{R_{1}}^{H^{\sigma}}\),
\[\overline{\rho}_{s,R_{1},N}(A)\leq\|\chi_{R_{1}}(\|u\|_{H^{\sigma}})\mathrm{e}^{|R_{s,N}(u)|}\|_{L^{2}(d\mu_{s})}\,\mu_{s}(A)^{\frac{1}{2}}\leq C_{R_{1},s}\,\mu_{s}(A)^{\frac{1}{2}}.\]
Plugging into (3.1), we complete the proof of Lemma 3.3.
Proof of Theorem 1.1.: Let \(T>0\). We first show that for any compact set \(K\subset B_{R}^{H^{\sigma}}\),
\[\mu_{s}(\Phi(t)(K))\leq C_{R_{1},s,T}\cdot\mu_{s}(K)^{\frac{1-\epsilon_{0}}{4}},\]
where \(\epsilon_{0}>0\), \(R_{1}:=\Lambda(\Lambda(R,T),T)\) are as in the proof of Lemma 3.3. Indeed, by the approximation theory (Proposition 3.2), for any \(\epsilon>0\), there exists \(N_{0}\in\mathbb{N}\), such that for all \(N\geq N_{0}\), \(\Phi(t)(K)\subset\Phi_{N}(t)(K)+B_{\epsilon}^{H^{\sigma}}\), thus
\[\mu_{s}(\Phi(t)(K))\leq\mu_{s}(\Phi_{N}(t)(K)+B_{\epsilon}^{H^{\sigma}}). \tag{3.2}\]
We are going to take the limit \(\epsilon\to 0\) in the inequality above, using the fact that \(\mu_{s}\) is regular. Before doing that, we have to show that for any open set \(G\supset\Phi_{N}(t)(K)\), there exists \(\epsilon>0\), such that
\[G\supset\Phi_{N}(t)(K)+B_{\epsilon}^{H^{\sigma}}.\]
Since \(\Phi_{N}(t)(K)\) is compact, for any open set \(G\supset\Phi_{N}(t)(K)\), there exist finitely many balls \(B_{1},\cdots B_{m}\) of \(H^{\sigma}\) such that
\[\Phi_{N}(t)(K)\subset\bigcup_{j=1}^{m}B_{j}\subset\bigcup_{j=1}^{m}2B_{j} \subset G,\]
where \(2B_{j}\) is the ball with the same center as \(B_{j}\) and with radius twice of \(B_{j}\). In particular, there exists \(\epsilon_{1}>0\), such that for all \(0<\epsilon<\epsilon_{1}\),
\[\Phi_{N}(t)(K)+B_{\epsilon}^{H^{\sigma}}\subset G.\]
To see this, we take \(\epsilon_{1}<\frac{1}{4}\min\{\text{radius}(B_{j}):j=1,\cdots,m\}\). Then for any \(u\in\Phi_{N}(t)(K)+B_{\epsilon_{1}}^{H^{\sigma}}\), there exists \(u_{0}\in\Phi_{N}(t)(K)\), such that \(\|u-u_{0}\|_{H^{\sigma}}<\epsilon_{1}\). As \(\Phi_{N}(t)(K)\) is covered by the \(B_{j}\), there is a ball, say \(B_{1}\) with center \(u_{1}\), such that \(\|u_{0}-u_{1}\|_{H^{\sigma}}<\text{radius}(B_{1})\). Hence \(u\in 2B_{1}\subset G\).
Recall that Gaussian measures are regular, namely, for any Borel set \(A\)
\[\mu_{s}(A)= \inf\{\mu_{s}(G):\ G\supset A,\ G\ \text{open in}\ H^{\sigma}\}\] \[= \sup\{\mu_{s}(F):\ F\subset A,\ F\ \text{compact and Borel in}\ H^{\sigma}\}\]
we can take \(\epsilon\to 0\) on the right hand side of (3.2) to obtain the estimate
\[\mu_{s}(\Phi(t)(K))\leq\mu_{s}(\Phi_{N}(t)(K))\leq C_{R_{1},s,T}\cdot\mu_{s}( K)^{\frac{1-\epsilon_{0}}{4}}, \tag{3.3}\]
as desired.
Finally we assume that \(A\subset B_{R}^{H^{\sigma}}\) is an arbitrary Borel set. Since \(\Phi(t)\) is a continuous bijection on \(H^{\sigma}(\mathbb{T}^{3})\), \(\Phi(t)(A)\) is also a Borel set (view \(\Phi(t)(A)=(\Phi(-t))^{-1}(A)\)). Thus there exists a sequence of compact sets \(K_{n}\subset\Phi(t)(A)\), such that
\[\mu_{s}(\Phi(t)(A))=\lim_{n\to\infty}\mu_{s}(K_{n}).\]
For fixed \(|t|\leq T\), set \(F_{n}=\Phi(-t)(K_{n})\); by the bijectivity of \(\Phi(t)\), \(K_{n}=\Phi(t)(F_{n})\). Since the \(F_{n}\) are also compact (Proposition 3.2), we deduce that
\[\mu_{s}(K_{n})=\mu_{s}(\Phi(t)(F_{n}))\leq C_{R_{1},s,T}\cdot\mu_{s}(F_{n})^{ \frac{1-\epsilon_{0}}{4}}.\]
Observe that \(K_{n}=\Phi(t)(F_{n})\subset\Phi(t)(A)\), again from the bijectivity, \(F_{n}\subset A\), thus
\[\mu_{s}(K_{n})\leq C_{R_{1},s,T}\cdot\mu_{s}(A)^{\frac{1-\epsilon_{0}}{4}}.\]
Letting \(n\to\infty\), we deduce that
\[\mu_{s}(\Phi(t)(A))\leq C_{R_{1},s,T}\cdot\mu_{s}(A)^{\frac{1-\epsilon_{0}}{4}}.\]
In particular, if \(\mu_{s}(A)=0\), we must have \(\mu_{s}(\Phi(t)(A))=0\). For an arbitrary Borel set \(A\) with \(\mu_{s}(A)=0\), not necessarily contained in a ball, it suffices to write \(A\cap H^{\sigma}(\mathbb{T}^{3})=\bigcup_{R\in\mathbb{N}}(A\cap B_{R}^{H^{\sigma}})\) and to use that \(\mu_{s}(H^{\sigma}(\mathbb{T}^{3}))=1\). Since this holds for every \(t\in\mathbb{R}\), applying it at \(t\) and at \(-t\), together with the bijectivity of the flow, gives both \((\Phi(t))_{*}\mu_{s}\ll\mu_{s}\) and \(\mu_{s}\ll(\Phi(t))_{*}\mu_{s}\). This proves the quasi-invariance property of \(\mu_{s}\) along the flow \(\Phi(t)\).
## 4. Preliminaries for the energy estimates
In this section, we summarize several frequently used preliminary results as well as some notations.
### Deterministic tools
For a given set of frequencies \(k_{1},k_{2},\cdots,k_{m}\), we denote \(k_{(1)},k_{(2)},\cdots,k_{(m)}\) a non-increasing rearrangement such that
\[|k_{(1)}|\geq|k_{(2)}|\geq\cdots\geq|k_{(m)}|.\]
Similarly, for a given set of dyadic integers \(N_{1},N_{2},\cdots,N_{m}\), we denote \(N_{(1)},N_{(2)},\cdots,N_{(m)}\) a non-increasing rearrangement such that
\[N_{(1)}\geq N_{(2)}\geq\cdots\geq N_{(m)}.\]
We have the following estimate on the function \(\psi_{2s}\), which measures the lack of conservation of \(H^{s}\)-based quantities.
**Lemma 4.1**.: _Set_
\[\psi_{2s}(\vec{k})=\sum_{j=1}^{6}(-1)^{j-1}|k_{j}|^{2s},\quad\Omega(\vec{k})= \sum_{j=1}^{6}(-1)^{j-1}|k_{j}|^{2}.\]
_Then for \(k_{1}-k_{2}+k_{3}-k_{4}+k_{5}-k_{6}=0\),_
\[|\psi_{2s}(\vec{k})|\lesssim|k_{(1)}|^{2s-2}[|\Omega(\vec{k})|+|k_{(3)}|^{2}].\]
Proof.: We can suppose that \(|k_{(3)}|\ll|k_{(2)}|\), since otherwise the estimate reduces to the straightforward bound \(|\psi_{2s}(\vec{k})|\lesssim|k_{(1)}|^{2s}\) (recall that the constraint forces \(|k_{(1)}|\sim|k_{(2)}|\)). Essentially, there are two different cases: \(k_{(1)}=k_{1}\), \(k_{(2)}=k_{2}\), and \(k_{(1)}=k_{1}\), \(k_{(2)}=k_{3}\). In the second case we can again use the bound \(|\psi_{2s}(\vec{k})|\lesssim|k_{(1)}|^{2s}\), since then \(|\Omega(\vec{k})|\gtrsim|k_{(1)}|^{2}\). Let us now suppose that \(k_{(1)}=k_{1}\), \(k_{(2)}=k_{2}\). By the mean-value theorem and the identity \(|k_{1}|^{2}-|k_{2}|^{2}=\Omega(\vec{k})-\sum_{j=3}^{6}(-1)^{j-1}|k_{j}|^{2}\),
\[\big{|}|k_{1}|^{2s}-|k_{2}|^{2s}\big{|}\lesssim|k_{(1)}|^{2(s-1)}\big{|}|k_{1}|^{2}-|k_{2}|^{2}\big{|}\lesssim|k_{(1)}|^{2s-2}\big{[}|\Omega(\vec{k})|+|k_{(3)}|^{2}\big{]}.\]
This completes the proof of Lemma 4.1.
For linear constraints, we denote
\[\mathfrak{h}_{k_{1}^{\iota_{1}}k_{2}^{\iota_{2}}\cdots k_{m}^{\iota_{m}}}:= \mathbf{1}_{\iota_{1}k_{1}+\iota_{2}k_{2}+\cdots+\iota_{m}k_{m}=0},\]
where \(\iota_{j}\in\{+1,-1\}\), identified also as \(\{+,-\}\), the signature of frequencies \(k_{1},\cdots,k_{m}\). For example,
\[\mathfrak{h}_{k_{1}^{+}k_{2}^{-}k_{3}^{+}k_{4}^{-}k_{5}^{+}k_{6}^{-}}= \mathbf{1}_{k_{1}-k_{2}+k_{3}-k_{4}+k_{5}-k_{6}=0}.\]
We will frequently use the following elementary counting bound:
**Lemma 4.2**.: _Assume that \(n\geq 2\) and given dyadic numbers \(N_{1},N_{2},\cdots N_{n}\). Then uniformly in \(K\in\mathbb{Z}^{3}\), \(\kappa\in\mathbb{R}\) and \(\iota_{j}\in\{+1,-1\}\), we have_
\[\sum_{\begin{subarray}{c}k_{1},k_{2},\cdots,k_{n}\\ \iota_{i}k_{i}+\iota_{j}k_{j}\neq 0,\,\forall i\neq j\end{subarray}}\mathbf{1}_{\iota_{1}k_{1}+\iota_{2}k_{2}+\cdots+\iota_{n}k_{n}=K}\cdot\mathbf{1}_{\iota_{1}|k_{1}|^{2}+\cdots+\iota_{n}|k_{n}|^{2}=\kappa}\Big{(}\prod_{j=1}^{n}\mathbf{1}_{|k_{j}|\sim N_{j}}\Big{)}\lesssim N_{(2)}^{2}\prod_{j=3}^{n}N_{(j)}^{3},\]
_where we adopt the convention that when \(n=2\), the bound on the right hand side is \(N_{(2)}^{2}\)._
**Remark 4.3**.: The counting bound stated here is very rough, but it already fits our needs. By using some arithmetic, one can improve it when \(n\geq 3\), or when \(n=2\) and \(\iota_{1}=\iota_{2}\). We refer to Lemma 4.5 of [13] for such an improvement. The estimate of Lemma 4.2 has the advantage of holding, with the same (trivial) proof, on a general torus.
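For completeness, let us sketch the trivial argument behind Lemma 4.2. Choosing \(k_{3},\cdots,k_{n}\) freely in their dyadic ranges costs at most \(\prod_{j=3}^{n}N_{(j)}^{3}\). The linear constraint then determines \(k_{1}\) as an affine function of \(k_{2}\), and substituting into the quadratic constraint leaves a single polynomial equation of degree at most two for \(k_{2}\in\mathbb{Z}^{3}\); the non-pairing condition \(\iota_{i}k_{i}+\iota_{j}k_{j}\neq 0\) guarantees that this equation is nontrivial. Fixing two suitable coordinates of \(k_{2}\) (at most \(N_{(2)}^{2}\) choices), the remaining coordinate admits \(O(1)\) solutions, which yields the bound \(N_{(2)}^{2}\prod_{j=3}^{n}N_{(j)}^{3}\).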
Next we recall the following conditional Wiener chaos estimate for multi-linear expressions of complex Gaussian random variables. In the sequel we adopt the notation \(z^{+}=z\) and \(z^{-}=\overline{z}\) for a complex number \(z\in\mathbb{C}\).
**Lemma 4.4** (Wiener chaos estimate).: _Consider the multi-linear expression of Gaussians:_
\[F(\omega^{\prime},\omega)=\sum_{k_{1},\cdots,k_{n}}c_{k_{1},\cdots,k_{n}}( \omega^{\prime})\cdot\prod_{j=1}^{n}g_{k_{j}}^{\iota_{j}}(\omega),\]
_where the random variables \(c_{k_{1},\cdots,k_{n}}(\omega^{\prime})\) are independent of complex standard i.i.d. Gaussians \(g_{k_{j}}(\omega)\). Then for any \(p\geq 2\), we have_
\[\|F(\omega^{\prime},\omega)\|_{L^{p}_{\omega}}\leq Cp^{\frac{n}{2}}\|F(\omega^ {\prime},\omega)\|_{L^{2}_{\omega}}.\]
We state the Wiener chaos estimate in the above form since later on we will use Lemma 4.4 for \(L^{p}\) estimates of high-frequency Gaussians conditioned on a \(\sigma\)-algebra generated by low-frequency Gaussians (see also [41] for a statement involving conditional expectation). Starting from [5], in recent years such conditioned Wiener chaos estimates were extensively used in the field of random dispersive PDE's.
## 5. Decomposition of the differential of the modified energy
Recall from (2.9) that
\[Q_{s,N}(w)=\text{Im}\,\big{(}-\frac{1}{6}\mathcal{R}_{0}(w)+\frac{1}{2}\mathcal{ R}_{1}(w)-\frac{1}{2}\mathcal{R}_{2}(w)\big{)},\]
where
\[\mathcal{R}_{0}(w):=\sum_{k_{1}-k_{2}+\cdots-k_{6}=0}\chi\Big{(} \frac{\Omega(\vec{k})}{\lambda(\vec{k})^{\delta_{0}}}\Big{)}\psi_{2s}(\vec{k} )w_{k_{1}}\overline{w}_{k_{2}}\cdots\overline{w}_{k_{6}}, \tag{5.1}\]
\[\mathcal{R}_{1}(w):=\sum_{\begin{subarray}{c}k_{1}-k_{2}+\cdots-k_{6}=0\\ k_{1}=p_{1}-p_{2}+p_{3}-p_{4}+p_{5}\end{subarray}}\Big{(}1-\chi\Big{(}\frac{ \Omega(\vec{k})}{\lambda(\vec{k})^{\delta_{0}}}\Big{)}\Big{)}\frac{\psi_{2s}( \vec{k})}{\Omega(\vec{k})}\chi_{N}(k_{1})^{2}w_{p_{1}}\overline{w}_{p_{2}} \cdots w_{p_{5}}\overline{w}_{k_{2}}\cdots\overline{w}_{k_{6}} \tag{5.2}\]
and
\[\mathcal{R}_{2}(w):=\sum_{\begin{subarray}{c}k_{1}-k_{2}+\cdots-k_{6}=0\\ k_{2}=q_{1}-q_{2}+q_{3}-q_{4}+q_{5}\end{subarray}}\Big{(}1-\chi\Big{(}\frac{ \Omega(\vec{k})}{\lambda(\vec{k})^{\delta_{0}}}\Big{)}\Big{)}\frac{\psi_{2s}( \vec{k})}{\Omega(\vec{k})}\chi_{N}(k_{2})^{2}w_{k_{1}}\overline{w}_{q_{1}} \cdots\overline{w}_{q_{5}}w_{k_{3}}\cdots\overline{w}_{k_{6}}. \tag{5.3}\]
Compared to the estimate for \(\mathcal{R}_{0}(w)\), the major difficulty in estimating \(\mathcal{R}_{1}(w),\mathcal{R}_{2}(w)\) is the existence of pairing contributions between different generations (\(w_{k_{j}}\) and \(w_{p_{j}}\), or \(w_{k_{j}}\) and \(w_{q_{j}}\)). Roughly speaking, the pairing contributions in \(\mathcal{R}_{1}(w)\) are (up to symmetry)
* \(|k_{1}|\sim|k_{2}|\gg|k_{3}|+|k_{4}|+|k_{5}|+|k_{6}|\), \(|k_{1}|\sim|k_{2}|\gg|p_{2}|+|p_{3}|+|p_{4}|+|p_{5}|\) and \(p_{1}=k_{2}\);
* \(|k_{1}|\sim|k_{3}|\gg|k_{2}|+|k_{4}|+|k_{5}|+|k_{6}|,|k_{1}|\sim|k_{3}|\gg|p_{ 1}|+|p_{3}|+|p_{4}|+|p_{5}|\) and \(p_{2}=k_{3}\).
Now we identify these pairing contributions precisely:
\[\Lambda_{1,1}:=\Big{\{}(p_{1},\cdots,p_{5},k_{2},\cdots,k_{6}): \sum_{j=1}^{5}(-1)^{j-1}p_{j}+\sum_{i=2}^{6}(-1)^{i-1}k_{i}=0,\] \[k_{2}=p_{1},\ \sum_{i\in\{3,4,5,6\}}|k_{i}|\leq|k_{1}|^{\theta}+|k_{2}|^{ \theta},\ \sum_{j\in\{2,3,4,5\}}|p_{j}|\leq|k_{1}|^{\theta}+|k_{2}|^{\theta}\Big{\}} \tag{5.4}\]
and
\[\Lambda_{1,2}:=\Big{\{}(p_{1},\cdots,p_{5},k_{2},\cdots,k_{6}): \sum_{j=1}^{5}(-1)^{j-1}p_{j}+\sum_{i=2}^{6}(-1)^{i-1}k_{i}=0,\] \[k_{3}=p_{2},\ \sum_{i\in\{2,4,5,6\}}|k_{i}|\leq|k_{1}|^{\theta}+|k_{3}|^{ \theta},\ \sum_{j\in\{1,3,4,5\}}|p_{j}|\leq|k_{1}|^{\theta}+|k_{3}|^{\theta}\Big{\}}, \tag{5.5}\]
where \(0<\theta<\frac{\delta_{0}}{2}<\frac{1}{3}\) is close to \(\frac{1}{3}\). We define correspondingly
\[\mathcal{S}_{1,1}(w):=\sum_{\Lambda_{1,1}}\chi_{N}(k_{1})^{2}|w_{k_{2}}|^{2} \Big{(}1-\chi\Big{(}\frac{\Omega(\vec{k})}{\lambda(\vec{k})^{\delta_{0}}}\Big{)} \Big{)}\frac{\psi_{2s}(\vec{k})}{\Omega(\vec{k})}w_{k_{3}}\overline{w}_{k_{4}}w _{k_{5}}\overline{w}_{k_{6}}\cdot\overline{w}_{p_{2}}w_{p_{3}}\overline{w}_{p_{ 4}}w_{p_{5}} \tag{5.6}\]
and
\[\mathcal{S}_{1,2}(w):=\sum_{\Lambda_{1,2}}\chi_{N}(k_{1})^{2}|w_{k_{3}}|^{2} \Big{(}1-\chi\Big{(}\frac{\Omega(\vec{k})}{\lambda(\vec{k})^{\delta_{0}}}\Big{)} \Big{)}\frac{\psi_{2s}(\vec{k})}{\Omega(\vec{k})}\overline{w}_{k_{2}}\overline {w}_{k_{4}}w_{k_{5}}\overline{w}_{k_{6}}\cdot w_{p_{1}}w_{p_{3}}\overline{w}_{ p_{4}}w_{p_{5}}. \tag{5.7}\]
Similarly, the pairing contributions in \(\mathcal{R}_{2}\) are (up to symmetry)
\[\mathcal{S}_{2,1}(w):=\sum_{\Lambda_{2,1}}\chi_{N}(k_{2})^{2}|w_{k_{1}}|^{2} \Big{(}1-\chi\Big{(}\frac{\Omega(\vec{k})}{\lambda(\vec{k})^{\delta_{0}}} \Big{)}\Big{)}\frac{\psi_{2s}(\vec{k})}{\Omega(\vec{k})}w_{k_{3}}\overline{w}_ {k_{4}}w_{k_{5}}\overline{w}_{k_{6}}\cdot\overline{w}_{q_{3}}w_{q_{2}} \overline{w}_{q_{5}}w_{q_{4}} \tag{5.8}\]
and
\[\mathcal{S}_{2,2}(w):=\sum_{\Lambda_{2,2}}\chi_{N}(k_{2})^{2}|w_{k_{4}}|^{2} \Big{(}1-\chi\Big{(}\frac{\Omega(\vec{k})}{\lambda(\vec{k})^{\delta_{0}}} \Big{)}\Big{)}\frac{\psi_{2s}(\vec{k})}{\Omega(\vec{k})}w_{k_{1}}w_{k_{3}}w_{ k_{5}}\overline{w}_{k_{6}}\cdot\overline{w}_{q_{1}}\overline{w}_{q_{3}}\overline{w}_{q_{5}}w _{q_{4}}. \tag{5.9}\]
where
\[\Lambda_{2,1}:=\Big{\{}(k_{1},q_{1},\cdots,q_{5},k_{3},\cdots,k_{ 6}): \sum_{j=1}^{5}(-1)^{j}q_{j}+\sum_{i\in\{1,3,4,5,6\}}(-1)^{i-1}k_{i }=0,\] \[k_{1}=q_{1},\ \sum_{i\in\{3,4,5,6\}}|k_{i}|\leq|k_{1}|^{\theta}+|k_{2}|^{ \theta},\ \sum_{j\in\{2,3,4,5\}}|q_{j}|\leq|k_{1}|^{\theta}+|k_{2}|^{\theta}\Big{\}}, \tag{5.10}\]
and
\[\Lambda_{2,2}:=\Big{\{}(k_{1},q_{1},\cdots,q_{5},k_{3},\cdots,k_{ 6}): \sum_{j=1}^{5}(-1)^{j}q_{j}+\sum_{i\in\{1,3,4,5,6\}}(-1)^{i-1}k_{i }=0,\] \[k_{4}=q_{2},\ \sum_{i\in\{1,3,5,6\}}|k_{i}|\leq|k_{2}|^{\theta}+|k_{4}|^{ \theta},\ \sum_{j\in\{1,3,4,5\}}|q_{j}|\leq|k_{2}|^{\theta}+|k_{4}|^{\theta}\Big{\}}. \tag{5.11}\]
Schematically, the four singular contributions correspond to the following pairings: in \(\mathcal{S}_{1,1}\), \(p_{1}\) is paired with \(k_{2}\); in \(\mathcal{S}_{1,2}\), \(p_{2}\) is paired with \(k_{3}\); in \(\mathcal{S}_{2,1}\), \(q_{1}\) is paired with \(k_{1}\); in \(\mathcal{S}_{2,2}\), \(q_{2}\) is paired with \(k_{4}\).
By symmetry, we have
\[\mathcal{R}_{1}(w)=9\mathcal{S}_{1,1}(w)+4\mathcal{S}_{1,2}(w)+\mathcal{R}_{1,3 }(w), \tag{5.12}\]
and
\[\mathcal{R}_{2}(w)=9\mathcal{S}_{2,1}(w)+4\mathcal{S}_{2,2}(w)+\mathcal{R}_{2, 3}(w), \tag{5.13}\]
where in the expression of the remainder \(\mathcal{R}_{1,3}(w)\) we have either \(|k_{(3)}|\gtrsim|k_{(1)}|^{\theta}\), or \(|k_{(3)}|\lesssim|k_{(1)}|^{\theta}\) and the dominating frequencies are either non-paired or paired within the same generation. Here \(k_{(1)},\cdots,k_{(10)}\) is a rearrangement of the leaves \(p_{1},p_{2},p_{3},p_{4},p_{5},k_{2},k_{3},k_{4},k_{5},k_{6}\) such that \(|k_{(1)}|\geq|k_{(2)}|\geq\cdots\geq|k_{(10)}|\). We define similarly the remainder \(\mathcal{R}_{2,3}(w)\). More precisely, we distinguish three different types in \(\mathcal{R}_{1,3}(w)\) (and in \(\mathcal{R}_{2,3}(w)\)) with the corresponding constraints in the sum
\[\sum_{\begin{subarray}{c}k_{1}-k_{2}+k_{3}-k_{4}+k_{5}-k_{6}=0\\ k_{1}=p_{1}-p_{2}+p_{3}-p_{4}+p_{5}\end{subarray}}(\cdots):\]
* Type A: \(\sum_{j=3}^{10}|k_{(j)}|>|k_{(1)}|^{\theta}+|k_{(2)}|^{\theta}\).
* Type B: \(\sum_{j=3}^{10}|k_{(j)}|\leq|k_{(1)}|^{\theta}+|k_{(2)}|^{\theta}\) and \(\{k_{(1)},k_{(2)}\}\subset\{k_{2},k_{3},k_{4},k_{5},k_{6}\}\) or \(\{k_{(1)},k_{(2)}\}\subset\{p_{1},p_{2},p_{3},p_{4},p_{5}\}\).
* Type C: \(\sum_{j=3}^{10}|k_{(j)}|\leq|k_{(1)}|^{\theta}+|k_{(2)}|^{\theta}\), \(k_{(1)}\neq k_{(2)}\) and \[k_{(1)}\in\{k_{2},k_{3},k_{4},k_{5},k_{6}\},\quad k_{(2)}\in\{p_{1},p_{2},p_{3 },p_{4},p_{5}\}\] or \[k_{(2)}\in\{k_{2},k_{3},k_{4},k_{5},k_{6}\},\quad k_{(1)}\in\{p_{1},p_{2},p_{3 },p_{4},p_{5}\}.\]
Recall that we have to estimate the \(L^{p}(d\mu_{s})\) norm of
\[Q_{s,N}(w)=-\frac{1}{6}\operatorname{Im}\mathcal{R}_{0}(w)+\frac{1}{2} \operatorname{Im}\mathcal{R}_{1}(w)-\frac{1}{2}\operatorname{Im}\mathcal{R}_{ 2}(w).\]
In Section 6, we estimate the first generation contribution \(\mathcal{R}_{0}(w)\), and in Section 7, we estimate the pairing contributions
\[\operatorname{Im}(\mathcal{S}_{1,j}-\mathcal{S}_{2,j}),\quad j=1,2.\]
Finally in Section 8, we finish the estimate for remainders in the second generation \(\mathcal{R}_{1,3}(w),\mathcal{R}_{2,3}(w)\).
## 6. Energy estimate I: the first generation
Denote
\[\mathcal{R}(w):=\sum_{k_{1}-k_{2}+\cdots-k_{6}=0}\Big{(}1-\chi\Big{(} \frac{\Omega(\vec{k})}{\lambda(\vec{k})^{\delta_{0}}}\Big{)}\Big{)}\frac{\psi_{2 s}(\vec{k})}{\Omega(\vec{k})}w_{k_{1}}\overline{w}_{k_{2}}\cdots\overline{w}_{k_{6}}, \tag{6.1}\] \[\mathcal{R}_{0}(w):=\sum_{k_{1}-k_{2}+\cdots-k_{6}=0}\chi\Big{(} \frac{\Omega(\vec{k})}{\lambda(\vec{k})^{\delta_{0}}}\Big{)}\psi_{2s}(\vec{k} )w_{k_{1}}\overline{w}_{k_{2}}\cdots\overline{w}_{k_{6}}. \tag{6.2}\]
**Proposition 6.1**.: _Assume that \(\delta_{0}<\frac{2}{3}\). There exists \(\beta\in(0,1)\), such that for any \(R>0\) and \(p\in[2,\infty)\) we have_
\[\|\mathbf{1}_{B_{R}^{H^{\sigma}}}(w)\mathcal{R}(w)\|_{L^{p}(d\mu_{s})}+\| \mathbf{1}_{B_{R}^{H^{\sigma}}}(w)\mathcal{R}_{0}(w)\|_{L^{p}(d\mu_{s})}\leq C (R)p^{\beta}.\]
Proof.: Before proceeding to the estimates, we first observe that, without loss of generality, we may assume that there is no pairing between frequencies with different signatures in the sum defining \(\mathcal{R}_{0}(w)\) or \(\mathcal{R}(w)\). Indeed, if this is the case, say \(k_{1}=k_{2}\) are paired, then the resonant function degenerates to \(|k_{3}|^{2}-|k_{4}|^{2}+|k_{5}|^{2}-|k_{6}|^{2}\) and the energy weight \(\psi_{2s}(\vec{k})\) degenerates to \(|k_{3}|^{2s}-|k_{4}|^{2s}+|k_{5}|^{2s}-|k_{6}|^{2s}\). Therefore, this pairing contribution in \(\mathcal{R}_{0}(w)\) or \(\mathcal{R}(w)\) reduces to some power of \(\|w\|_{L^{2}}^{2}\) times a similar term with two fewer degrees of homogeneity2, and the treatment of such a reduced term is similar to (in fact simpler than) that of \(\mathcal{R}_{0}(w)\) or \(\mathcal{R}(w)\). So in the sequel of the proof, we implicitly assume that there is no pairing between frequencies with different signatures in all the sums.
Footnote 2: For over pairing contributions, the over-paired part can be controlled by a power of \(\|w\|_{L^{2}}^{2}\).
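Concretely, if \(k_{1}=k_{2}\), then on the remaining constraint \(k_{3}-k_{4}+k_{5}-k_{6}=0\) one has
\[\Omega(\vec{k})=|k_{3}|^{2}-|k_{4}|^{2}+|k_{5}|^{2}-|k_{6}|^{2},\qquad\psi_{2s}(\vec{k})=|k_{3}|^{2s}-|k_{4}|^{2s}+|k_{5}|^{2s}-|k_{6}|^{2s},\]
and \(w_{k_{1}}\overline{w}_{k_{2}}=|w_{k_{1}}|^{2}\) sums to \(\sum_{k_{1}\in\mathbb{Z}^{3}}|w_{k_{1}}|^{2}=\|w\|_{L^{2}}^{2}\), which factors out and leaves a quartic expression of exactly the same structure.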
\(\bullet\)**Estimate of \(\mathcal{R}_{0}(w)\)**:
Picking \(\alpha\in(0,1)\) close enough to \(1\), we split \(\mathcal{R}_{0}(w)\) as \(\mathrm{I}+\mathrm{II}\), where
\[\mathrm{I}:=\sum_{\begin{subarray}{c}|k_{(3)}|>|k_{(1)}|^{\alpha}\\ k_{1}-k_{2}+\cdots-k_{6}=0\end{subarray}}\chi\Big{(}\frac{\Omega(\vec{k})}{ \lambda(\vec{k})^{\delta_{0}}}\Big{)}\psi_{2s}(\vec{k})w_{k_{1}}\overline{w}_{ k_{2}}\cdots\overline{w}_{k_{6}}\]
and
\[\mathrm{II}:=\sum_{\begin{subarray}{c}|k_{(3)}|\leq|k_{(1)}|^{\alpha}\\ k_{1}-k_{2}+\cdots-k_{6}=0\end{subarray}}\chi\Big{(}\frac{\Omega(\vec{k})}{ \lambda(\vec{k})^{\delta_{0}}}\Big{)}\psi_{2s}(\vec{k})w_{k_{1}}\overline{w}_ {k_{2}}\cdots\overline{w}_{k_{6}}.\]
To estimate I, we only exploit deterministic analysis. The order of \(w_{k_{j}},\overline{w}_{k_{j}}\) plays no significant role in the analysis. Therefore, without loss of generality, we assume that in the sum \(|k_{1}|\geq|k_{2}|\geq|k_{3}|\geq|k_{4}|\geq|k_{5}|\geq|k_{6}|\). Taking the absolute value in the sum, we have
\[\mathrm{I}\lesssim\sum_{\begin{subarray}{c}|k_{3}|>|k_{1}|^{\alpha}\\ k_{1}-k_{2}+\cdots-k_{6}=0\end{subarray}}\mathbf{1}_{|\Omega(\vec{k})|\lesssim|k_{1}|^{\delta_{0}}}\ \mathbf{1}_{|k_{1}|\geq|k_{2}|\geq\cdots\geq|k_{6}|}\cdot|k_{1}|^{2s-2}(|k_{3}|^{2}+|\Omega(\vec{k})|)|w_{k_{1}}\cdots w_{k_{6}}|\lesssim\sum_{\begin{subarray}{c}N_{1}\geq N_{2}\geq\cdots\geq N_{6}\\ N_{3}\gtrsim N_{1}^{\alpha}\end{subarray}}\mathrm{I}_{N_{1},\cdots,N_{6}},\]
where the summations are performed on the dyadic values of \(N_{1},\cdots N_{6}\) and
\[\mathrm{I}_{N_{1},\cdots N_{6}}:=\sum_{|\kappa|\lesssim N_{1}^{\delta_{0}}} \sum_{\begin{subarray}{c}|k_{3}|>|k_{1}|^{\alpha}\\ k_{1}-k_{2}+\cdots-k_{6}=0\\ \Omega(\vec{k})=\kappa\end{subarray}}N_{1}^{2s-2}N_{3}^{2}\prod_{j=1}^{6} \mathbf{1}_{|k_{j}|\sim N_{j}}|w_{k_{j}}|,\]
provided that \(\alpha>\frac{1}{3}\): indeed, thanks to the restriction \(0<\delta_{0}<\frac{2}{3}\), on the support of the cutoff we have \(|\Omega(\vec{k})|\lesssim N_{1}^{\delta_{0}}\lesssim N_{3}^{2}\). Using the Cauchy-Schwarz inequality in the \(k_{1},k_{2}\) summations, we can write
\[\mathrm{I}_{N_{1},\cdots N_{6}}\lesssim N_{1}^{2s-2}N_{3}^{2}\|P_{N_{1}}w\|_{L^{2}}\|P_{N_{2}}w\|_{L^{2}}\prod_{j=3}^{6}\big{(}\sum_{k_{j}\in\mathbb{Z}^{3}}\mathbf{1}_{|k_{j}|\sim N_{j}}|w_{k_{j}}|\big{)},\]
where \(P_{N}\) is the frequency projector to \(|k|\sim N\). Therefore, for \(N_{3}\gtrsim N_{1}^{\alpha}\) and \(w\in B_{R}^{H^{\sigma}}\), we have a crude estimate
\[\mathrm{I}_{N_{1},\cdots N_{6}}\lesssim R^{6}\,N_{1}^{2s-2}N_{3}^{2}\,N_{1}^{ -2\sigma}N_{3}^{\frac{3}{2}-\sigma}\lesssim_{R}N_{1}^{\frac{3}{2}+2s-2\sigma} N_{1}^{-\alpha\sigma}\,,\]
which is conclusive as far as \((2+\alpha)\sigma>2s+\frac{3}{2}\). The last restriction is easily satisfied by taking \(\alpha\) close to \(1\) as far as \(s>\frac{15}{2}\).
Next, we estimate II. We decompose II dyadically as \(\sum_{N_{1},\cdots,N_{6}}\mathrm{II}_{N_{1},\cdots,N_{6}}\), where
\[\mathrm{II}_{N_{1},\cdots,N_{6}}:=\sum_{\begin{subarray}{c}|k_{(3)}|\leq|k_{( 1)}|^{\alpha}\\ k_{1}-k_{2}+\cdots-k_{6}=0\end{subarray}}\chi\Big{(}\frac{\Omega(\vec{k})}{ \lambda(\vec{k})^{\delta_{0}}}\Big{)}\psi_{2s}(\vec{k})w_{k_{1}}\overline{w}_ {k_{2}}\cdots\overline{w}_{k_{6}}\prod_{j=1}^{6}\mathbf{1}_{|k_{j}|\sim N_{j}}.\]
For this contribution, we mainly rely on the Wiener chaos estimates.
Without loss of generality, we assume that \(N_{1}\sim N_{2}\sim N_{(1)}\), \(N_{(3)}=N_{3}\) (since the analysis of the cases \(N_{1}\sim N_{3}\sim N_{(1)},N_{2}\sim N_{(3)}\) and \(N_{1}\sim N_{3}\sim N_{(1)},N_{5}\sim N_{(3)}\) is similar or simpler).
Denote by \(\mathcal{B}_{\ll N_{1}}\) the \(\sigma\)-algebra generated by the Gaussians \((g_{k}(\omega))_{|k|\leq N_{1}/100}\). Note that we have the constraint \(N_{3}\leq N_{1}^{\alpha}\) for some \(\alpha<1\), close to \(1\), and we only need to consider the contribution where \(N_{1}\) is sufficiently large so that \(N_{1}^{\alpha}\ll\frac{N_{1}}{100}\). Consequently, \(\mathbf{1}_{|k_{j}|\sim N_{j}}w_{k_{j}}\), \(j=3,4,5,6\), are all \(\mathcal{B}_{\ll N_{1}}\)-measurable, and the random function
\[\sum_{|k|\sim N_{1}}\frac{g_{k}(\omega)}{\sqrt{1+|k|^{2s}}}\mathrm{e}^{ik\cdot x}\]
is independent of \(\mathcal{B}_{\ll N_{1}}\). We thus have
\[\|\mathrm{II}_{N_{1},\cdots,N_{6}}\cdot\mathbf{1}_{B_{R}^{H^{\sigma}}}(w)\|_{L^{p}(d\mu_{s})}\leq \|\mathrm{II}_{N_{1},\cdots,N_{6}}\cdot\mathbf{1}_{B_{R}^{H^{\sigma}}}(\mathbf{P}_{\leq N_{1}/100}w)\|_{L^{p}(d\mu_{s})}\] \[\leq \big{\|}\|\mathrm{II}_{N_{1},\cdots,N_{6}}\|_{L^{p}(d\mu_{s}|\mathcal{B}_{\ll N_{1}})}\cdot\mathbf{1}_{B_{R}^{H^{\sigma}}}(\mathbf{P}_{\leq N_{1}/100}w)\big{\|}_{L^{\infty}(d\mu_{s})},\]
where \(\mathbf{P}_{\leq N_{1}/100}\) is the frequency projection to \(|k|\leq N_{1}/100\) and \(L^{p}(d\mu_{s}|\mathcal{B}_{\ll N_{1}})\) denotes the \(L^{p}\) norm conditioned on the \(\sigma\)-algebra \(\mathcal{B}_{\ll N_{1}}\). By the conditional Wiener chaos estimate (see Lemma 4.4), we have
\[\|\mathrm{II}_{N_{1},\cdots,N_{6}}\|_{L^{p}(d\mu_{s}|\mathcal{B}_{\ll N_{1}})} \lesssim p\|\mathrm{II}_{N_{1},\cdots,N_{6}}\|_{L^{2}(d\mu_{s}|\mathcal{B}_{\ll N_{1}})}\] \[\lesssim p(N_{1}N_{2})^{-s}\Big{(}\sum_{\begin{subarray}{c}|k_{1}|\sim N_{1}\\ |k_{2}|\sim N_{2}\end{subarray}}\Big{|}\sum_{\begin{subarray}{c}k_{3},k_{4},k_{5},k_{6}\\ k_{3}-k_{4}+k_{5}-k_{6}=k_{2}-k_{1}\\ |\Omega(\vec{k})|\lesssim N_{1}^{\delta_{0}}\end{subarray}}\psi_{2s}(\vec{k})w_{k_{3}}\overline{w}_{k_{4}}w_{k_{5}}\overline{w}_{k_{6}}\prod_{j=3}^{6}\mathbf{1}_{|k_{j}|\sim N_{j}}\Big{|}^{2}\Big{)}^{\frac{1}{2}}. \tag{6.3}\]
By Cauchy-Schwarz,
\[\sum_{\begin{subarray}{c}|k_{1}|\sim N_{1}\\ |k_{2}|\sim N_{2}\end{subarray}}\Big{|}\sum_{\begin{subarray}{c}k_{3},k_{4},k_{5},k_{6}\\ k_{3}-k_{4}+k_{5}-k_{6}=k_{2}-k_{1}\\ |\Omega(\vec{k})|\lesssim N_{1}^{\delta_{0}}\end{subarray}}\psi_{2s}(\vec{k})w_{k_{3}}\overline{w}_{k_{4}}w_{k_{5}}\overline{w}_{k_{6}}\prod_{j=3}^{6}\mathbf{1}_{|k_{j}|\sim N_{j}}\Big{|}^{2}\] \[\leq \Big{(}\sum_{\begin{subarray}{c}|k_{1}|\sim N_{1}\\ |k_{2}|\sim N_{2}\end{subarray}}\sum_{\begin{subarray}{c}k_{3},k_{4},k_{5},k_{6}\\ k_{3}-k_{4}+k_{5}-k_{6}=k_{2}-k_{1}\\ |\Omega(\vec{k})|\lesssim N_{1}^{\delta_{0}}\end{subarray}}|\psi_{2s}(\vec{k})|^{2}|w_{k_{6}}|^{2}\prod_{j=3}^{6}\mathbf{1}_{|k_{j}|\sim N_{j}}\Big{)}\] \[\qquad\qquad\qquad\qquad\times\sup_{|k_{1}|\sim N_{1},|k_{2}|\sim N_{2}}\sum_{k_{3}-k_{4}+k_{5}-k_{6}=k_{2}-k_{1}}|w_{k_{3}}w_{k_{4}}w_{k_{5}}|^{2}\prod_{j=3}^{6}\mathbf{1}_{|k_{j}|\sim N_{j}}.\]
Since \(|\psi_{2s}(\vec{k})|^{2}\lesssim N_{1}^{4(s-1)}(N_{3}^{4}+|\Omega(\vec{k})|^{2})\), the first sum on the right-hand side can be bounded by (below we implicitly insert the constraint \(|k_{j}|\sim N_{j}\))
\[\sum_{|k_{6}|\sim N_{6}}|w_{k_{6}}|^{2} \sum_{|\kappa|\lesssim N_{1}^{\delta_{0}}}\ \sum_{k_{1},k_{2},k_{3},k_{4},k_{5}}N_{1}^{4(s-1)}(N_{3}^{4}+\kappa^{2})\ \mathfrak{h}_{k_{1}k_{2}k_{3}k_{4}k_{5}k_{6}}(\kappa)\] \[\lesssim N_{6}^{-2\sigma}\|w\|_{H^{\sigma}}^{2}N_{1}^{4(s-1)}\big{(}N_{3}^{4}N_{1}^{\delta_{0}}\,N_{2}^{2}\,(N_{3}N_{4}N_{5})^{3}+N_{1}^{3\delta_{0}}\,N_{2}^{2}\,(N_{3}N_{4}N_{5})^{3}\big{)}\] \[\lesssim \|w\|_{H^{\sigma}}^{2}N_{6}^{-2\sigma}N_{1}^{4s-4}(N_{3}^{4}N_{1}^{\delta_{0}}+N_{1}^{3\delta_{0}})N_{2}^{2}N_{3}^{3}N_{4}^{3}N_{5}^{3},\]
where we used Lemma 4.2 and the notation
\[\mathfrak{h}_{k_{1}k_{2}k_{3}k_{4}k_{5}k_{6}}(\kappa):=\mathbf{1}_{k_{1}-k_{2}+ k_{3}-k_{4}+k_{5}-k_{6}=0}\cdot\mathbf{1}_{\Omega(\vec{k})=\kappa}.\]
Plugging into (6.3), we obtain that
\[\|\mathrm{II}_{N_{1},\cdots,N_{6}}\|_{L^{p}(d\mu_{s}|\mathcal{B}_{\ll N_{1}})} \lesssim pN_{1}^{-1}(N_{3}^{2}N_{1}^{\frac{\delta_{0}}{2}}+N_{1}^{\frac{3\delta_{0}}{2}})N_{3}^{\frac{3}{2}}N_{3}^{-\sigma}\prod_{j=3}^{6}\|w_{N_{j}}\|_{H^{\sigma}}\] \[\lesssim p\big{(}N_{1}^{\frac{\delta_{0}}{2}-1}N_{3}^{\frac{7}{2}-\sigma}+N_{1}^{\frac{3\delta_{0}}{2}-1}N_{3}^{\frac{3}{2}-\sigma}\big{)}\|w\|_{H^{\sigma}}^{4}.\]
Since \(\delta_{0}<\frac{2}{3}\) and \(\sigma>\frac{7}{2}\), the above quantity can be controlled by
\[pN_{1}^{-(1-\frac{3\delta_{0}}{2})}\|w\|_{H^{\sigma}}^{4}.\]
Hence
\[\|\mathrm{II}_{N_{1},\cdots,N_{6}}\mathbf{1}_{B_{R}^{H^{\sigma}}}(w)\|_{L^{p} (d\mu_{s})}\lesssim pN_{(1)}^{-(1-\frac{3\delta_{0}}{2})}R^{4}.\]
Here since we gain a negative power in \(N_{(1)}\), by interpolating with the crude deterministic estimate
\[|\mathrm{II}_{N_{1},\cdots,N_{6}}\mathbf{1}_{B_{R}^{H^{\sigma}}}(w)|\lesssim N _{(1)}^{2s-2\sigma}\|w\|_{H^{\sigma}}^{6}\leq N_{(1)}^{2(s-\sigma)}R^{6},\]
we conclude the estimate for \(\mathcal{R}_{0}(w)\).
\(\bullet\)**Estimate of \(\mathcal{R}(w)\)**: The estimate for \(\mathcal{R}(w)\) is similar to (in fact simpler than) the estimate for \(\mathcal{R}_{0}(w)\), and we only sketch the proof. Indeed, compared to the estimate of \(\mathcal{R}_{0}(w)\), the only difference is that the weight \(\chi\Big{(}\frac{\Omega(\vec{k})}{\lambda(\vec{k})^{\delta_{0}}}\Big{)}\) is now replaced by
\[\Big{(}1-\chi\Big{(}\frac{\Omega(\vec{k})}{\lambda(\vec{k})^{\delta_{0}}} \Big{)}\Big{)}\frac{1}{\Omega(\vec{k})}.\]
We split \(\mathcal{R}(w)\) similarly as \(\mathrm{I}^{\prime}+\mathrm{II}^{\prime}\), where
\[\mathrm{II}^{\prime}:=\sum_{\begin{subarray}{c}|k_{(3)}|\leq|k_{(1)}|^{\alpha} \\ k_{1}-k_{2}+\cdots-k_{6}=0\end{subarray}}\Big{(}1-\chi\Big{(}\frac{\Omega( \vec{k})}{\lambda(\vec{k})^{\delta_{0}}}\Big{)}\Big{)}\frac{\psi_{2s}(\vec{k} )}{\Omega(\vec{k})}w_{k_{1}}\overline{w}_{k_{2}}\cdots\overline{w}_{k_{6}},\]
and we invoke the inequalities
\[\sum_{N_{(1)}^{\delta_{0}}\leq|\kappa|\lesssim N_{(1)}^{2}}\frac{1}{|\kappa|} \lesssim\log(N_{(1)}),\quad\Big{|}\frac{\psi_{2s}(\vec{k})}{\Omega(\vec{k})} \Big{|}\lesssim|k_{(1)}|^{2s-2}(|k_{(3)}|^{2}+1).\]
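The first of these inequalities is elementary: since \(\kappa\) ranges over integers, each dyadic block contributes \(O(1)\), so that
\[\sum_{N_{(1)}^{\delta_{0}}\leq|\kappa|\lesssim N_{(1)}^{2}}\frac{1}{|\kappa|}\leq\sum_{j\,:\,2^{j}\lesssim N_{(1)}^{2}}2^{-j}\,\#\{\kappa\in\mathbb{Z}:|\kappa|\sim 2^{j}\}\lesssim\#\{j:2^{j}\lesssim N_{(1)}^{2}\}\lesssim\log(N_{(1)}).\]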
The proof of Proposition 6.1 is now complete.
## 7. Energy estimate II: the pairing contributions in the second generation
In this section, we estimate the singular contributions. Recall the definition of \(\mathcal{S}_{i,j}\) in (5.6)-(5.9).
**Proposition 7.1**.: _There exist \(C>0\) and \(\beta=\beta(\theta,s)\in(0,1)\), such that for \(j\in\{1,2\}\), \(R\geq 1\) and \(p\in[2,\infty)\), we have_
\[\|\operatorname{Im}\big{(}\mathcal{S}_{1,j}(w)-\mathcal{S}_{2,j}(w)\big{)} \mathbf{1}_{B_{R}^{H^{\sigma}}}(w)\|_{L^{p}(d\mu_{s})}\leq Cp^{\beta}R^{10}.\]
**Remark 7.2**.: To explain the difficulty, we remark that the singular contributions \(\mathcal{S}_{i,j}(w)\) (\(i,j\in\{1,2\}\)) prevent us from using the Wiener chaos estimate to gain the square-root cancellation. Nevertheless, it turns out that there is an extra cancellation when one takes the imaginary part of \(\mathcal{S}_{i,j}(w)\). To understand the hidden cancellation, for \(\mathcal{S}_{1,1}(w)\), one can think of the sum as being taken over \(|k_{3}|,\cdots,|k_{6}|,|p_{2}|,\cdots,|p_{5}|=O(1)\); then
\[\frac{\psi_{2s}(\vec{k})}{\Omega(\vec{k})}\approx\frac{|k_{1}|^{2s}-|k_{2}|^{ 2s}}{|k_{1}|^{2}-|k_{2}|^{2}},\]
and the second sum in the definition of \(\mathcal{S}_{1,1}\) is completely decoupled and we have
\[\mathcal{S}_{1,1}(w)= -\sum_{k_{1},k_{2}}\chi_{N}(k_{1})^{2}|w_{k_{2}}|^{2}\frac{|k_{1} |^{2s}-|k_{2}|^{2s}}{|k_{1}|^{2}-|k_{2}|^{2}}\bigg{|}\sum_{\begin{subarray}{c }|k_{3}|+|k_{4}|+|k_{5}|+|k_{6}|\leq|k_{2}|^{\theta}\\ k_{3}-k_{4}+k_{5}-k_{6}=k_{2}-k_{1}\end{subarray}}w_{k_{3}}\overline{w}_{k_{4 }}w_{k_{5}}\overline{w}_{k_{6}}\bigg{|}^{2}\] \[+\text{error },\]
where the main contribution is obviously real.
**Remark 7.3**.: However, it turns out that the cancellation described in Remark 7.2 alone is not enough to conclude, as the error term in the formula above is not negligible if we estimate \(\mathcal{S}_{1,j}(w)\) and \(\mathcal{S}_{2,j}(w)\), \(j=1,2\), individually. What saves us is that in these expressions there is some symmetric structure, so that we can exploit some extra probabilistic cancellation and a deterministic smoothing. Let us explain these points in more detail. With the identification of \((q_{3},q_{2},q_{5},q_{4})=(p_{2},p_{3},p_{4},p_{5})\) (without changing \(k_{j}\)), we observe that \(\Lambda_{2,1}=\Lambda_{1,1}\) and
\[\mathcal{S}_{2,1}(w)=\sum_{\Lambda_{1,1}}\chi_{N}(k_{2})^{2}|w_{k_{1}}|^{2} \frac{\psi_{2s}(\vec{k})}{\Omega(\vec{k})}\Big{(}1-\chi\Big{(}\frac{\Omega( \vec{k})}{\lambda(\vec{k})^{\delta_{0}}}\Big{)}\Big{)}w_{k_{3}}\overline{w}_{ k_{4}}w_{k_{5}}\overline{w}_{k_{6}}\cdot\overline{w}_{p_{2}}w_{p_{3}} \overline{w}_{p_{4}}w_{p_{5}}. \tag{7.1}\]
Therefore,
\[\operatorname{Im}\mathcal{S}_{1,1}(w)-\operatorname{Im}\mathcal{ S}_{2,1}(w)= \operatorname{Im}\sum_{\Lambda_{1,1}}(\chi_{N}(k_{1})^{2}|w_{k_{2}}|^{2}-\chi_{N }(k_{2})^{2}|w_{k_{1}}|^{2})\frac{\psi_{2s}(\vec{k})}{\Omega(\vec{k})}\Big{(} 1-\chi\Big{(}\frac{\Omega(\vec{k})}{\lambda(\vec{k})^{\delta_{0}}}\Big{)} \Big{)}\] \[\times w_{k_{3}}\overline{w}_{k_{4}}w_{k_{5}}\overline{w}_{k_{6}} \cdot\overline{w}_{p_{2}}w_{p_{3}}\overline{w}_{p_{4}}w_{p_{5}}. \tag{7.2}\]
For \(j=2\), due to the special position, there is no such cancellation in \(\operatorname{Im}\big{(}\mathcal{S}_{1,2}(w)-\mathcal{S}_{2,2}(w)\big{)}\). Indeed,
\[\mathcal{S}_{2,2}(w)=\sum_{k_{2},k_{4}}\chi_{N}(k_{2})^{2}|w_{k_{4}}|^{2}\sum_{ \begin{subarray}{c}k_{1}+k_{3}+k_{5}-k_{6}=k_{2}+k_{4}\\ q_{1}+q_{3}+q_{5}-q_{4}=k_{2}+k_{4}\\ |k_{1}|+|k_{3}|+|k_{5}|+|k_{6}|\leq|k_{2}|^{\theta}+|k_{4}|^{\theta}\\ |q_{1}|+|q_{3}|+|q_{4}|+|q_{5}|\leq|k_{2}|^{\theta}+|k_{4}|^{\theta}\end{subarray}} \frac{\psi_{2s}(\vec{k})}{\Omega(\vec{k})}\Big{(}1-\chi\Big{(}\frac{\Omega(\vec {k})}{\lambda(\vec{k})^{\delta_{0}}}\Big{)}\Big{)}w_{k_{1}}w_{k_{3}}w_{k_{5}} \overline{w}_{k_{6}}\cdot\overline{w}_{q_{1}}\overline{w}_{q_{3}}\overline{ w}_{q_{5}}w_{q_{4}}\]
By switching the indices \((k_{1},k_{3},k_{5})\) with \((k_{2},k_{4},k_{6})\) and identifying \((q_{1},q_{3},q_{4},q_{5})\) as \((p_{1},p_{3},p_{4},p_{5})\) in \(\Lambda_{2,2}\), we deduce that
\[\mathcal{S}_{2,2}(w)=\overline{\mathcal{S}}_{1,2}(w),\]
where we used the fact that \(\frac{\psi_{2s}(\vec{k})}{\Omega(\vec{k})}\) is invariant under the switching of the indices \((k_{1},k_{3},k_{5})\) and \((k_{2},k_{4},k_{6})\). Therefore,
\[\operatorname{Im}\mathcal{S}_{1,2}(w)-\operatorname{Im}\mathcal{S}_{2,2}(w)=- 2\operatorname{Im}\mathcal{S}_{2,2}(w). \tag{7.3}\]
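Indeed, (7.3) is immediate: since \(\mathcal{S}_{2,2}(w)=\overline{\mathcal{S}}_{1,2}(w)\),
\[\operatorname{Im}\mathcal{S}_{1,2}(w)=\operatorname{Im}\overline{\mathcal{S}}_{2,2}(w)=-\operatorname{Im}\mathcal{S}_{2,2}(w),\]
so that \(\operatorname{Im}\mathcal{S}_{1,2}(w)-\operatorname{Im}\mathcal{S}_{2,2}(w)=-2\operatorname{Im}\mathcal{S}_{2,2}(w)\).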
The good news is that in the expression \(\operatorname{Im}\mathcal{S}_{1,2}(w)\), we only need to exploit the first cancellation explained in Remark 7.2, since the resonant function \(\Omega(\vec{k})\approx|k_{1}|^{2}+|k_{3}|^{2}\sim|k_{(1)}|^{2}\) has significantly larger size, which provides a smoothing effect.
Proof of Proposition 7.1.: We separate the analysis for \(j=1\) and \(j=2\).
\(\bullet\) **Estimate for \(j=1\):**
Set
\[\Psi(\vec{k}):=\frac{\psi_{2s}(\vec{k})}{\Omega(\vec{k})}\Big{(}1-\chi\Big{(} \frac{\Omega(\vec{k})}{\lambda(\vec{k})^{\delta_{0}}}\Big{)}\Big{)}-\frac{|k_ {1}|^{2s}-|k_{2}|^{2s}}{|k_{1}|^{2}-|k_{2}|^{2}}\Big{(}1-\chi\Big{(}\frac{|k_{1} |^{2}-|k_{2}|^{2}}{(|k_{1}|^{2}+|k_{2}|^{2})^{\delta_{0}/2}}\Big{)}\Big{)}.\]
We need an elementary lemma:
**Lemma 7.4**.: _On \(\Lambda_{1,1}\) defined in (5.4), for sufficiently large \(|k_{(1)}|\), we have_
\[|\Psi(\vec{k})|\lesssim\frac{|k_{(1)}|^{2s-2}|k_{(3)}|^{2}}{|\Omega(\vec{k})| }\mathbf{1}_{|\Omega(\vec{k})|\gtrsim|k_{(1)}|^{\delta_{0}}},\]
_where we recall that in the definition of \(\Lambda_{1,1}\), \(\theta<\frac{\delta_{0}}{2}\)._
Proof.: Note that on \(\Lambda_{1,1}\), \(\{k_{1},k_{2}\}=\{k_{(1)},k_{(2)}\}\) and \(|k_{(3)}|^{2}\lesssim|k_{(1)}|^{2\theta}\ll\lambda(\vec{k})^{\delta_{0}}\). Thanks to the support property of \(\chi\), if \(|\Omega(\vec{k})|\ll|k_{(1)}|^{\delta_{0}}\sim\lambda(\vec{k})^{\delta_{0}}\), we must have
\[\Psi(\vec{k})=-\frac{|k_{1}|^{2s}-|k_{2}|^{2s}}{|k_{1}|^{2}-|k_{2}|^{2}}\Big{(} 1-\chi\Big{(}\frac{|k_{1}|^{2}-|k_{2}|^{2}}{(|k_{1}|^{2}+|k_{2}|^{2})^{\delta_ {0}/2}}\Big{)}\Big{)},\]
and \(||k_{1}|^{2}-|k_{2}|^{2}|\gtrsim(|k_{1}|^{2}+|k_{2}|^{2})^{\delta_{0}/2}\sim \lambda(\vec{k})^{\delta_{0}}\), otherwise \(\Psi(\vec{k})=0\). Thus
\[|k_{(1)}|^{\delta_{0}}\lesssim||k_{1}|^{2}-|k_{2}|^{2}|\leq|\Omega(\vec{k})|+O(|k_{(3)}|^{2}),\]
which, since \(|\Omega(\vec{k})|\ll|k_{(1)}|^{\delta_{0}}\), contradicts the fact that \(|k_{(3)}|^{2}\ll|k_{(1)}|^{\delta_{0}}\). Therefore, we may assume that \(|\Omega(\vec{k})|\gtrsim|k_{(1)}|^{\delta_{0}}\), and consequently \(|k_{1}|\neq|k_{2}|\), in the sequel.
Set
\[G=|k_{3}|^{2}-|k_{4}|^{2}+|k_{5}|^{2}-|k_{6}|^{2},\quad F=|k_{3}|^{2s}-|k_{4}|^{ 2s}+|k_{5}|^{2s}-|k_{6}|^{2s}\]
and write
\[\psi_{2s}(\vec{k})=\frac{|k_{1}|^{2s}-|k_{2}|^{2s}}{|k_{1}|^{2}-|k_{2}|^{2}}( \Omega(\vec{k})-G)+F.\]
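This identity is a direct algebraic check, under the natural conventions \(\psi_{2s}(\vec{k})=\sum_{j=1}^{6}(-1)^{j+1}|k_{j}|^{2s}\) and \(\Omega(\vec{k})=\sum_{j=1}^{6}(-1)^{j+1}|k_{j}|^{2}\) (so that \(\Omega(\vec{k})-G=|k_{1}|^{2}-|k_{2}|^{2}\) and \(\psi_{2s}(\vec{k})=|k_{1}|^{2s}-|k_{2}|^{2s}+F\)): recalling that \(|k_{1}|\neq|k_{2}|\),
\[\frac{|k_{1}|^{2s}-|k_{2}|^{2s}}{|k_{1}|^{2}-|k_{2}|^{2}}(\Omega(\vec{k})-G)+F=(|k_{1}|^{2s}-|k_{2}|^{2s})+F=\psi_{2s}(\vec{k}).\]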
Hence
\[\Psi(\vec{k})= \frac{|k_{1}|^{2s}-|k_{2}|^{2s}}{|k_{1}|^{2}-|k_{2}|^{2}}\Big{[} \Big{(}1-\chi\Big{(}\frac{\Omega(\vec{k})}{\lambda(\vec{k})^{\delta_{0}}} \Big{)}\Big{)}-\Big{(}1-\chi\Big{(}\frac{|k_{1}|^{2}-|k_{2}|^{2}}{(|k_{1}|^{2} +|k_{2}|^{2})^{\delta_{0}/2}}\Big{)}\Big{)}\Big{]}\] \[+\frac{F}{\Omega(\vec{k})}\Big{(}1-\chi\Big{(}\frac{\Omega(\vec{ k})}{\lambda(\vec{k})^{\delta_{0}}}\Big{)}\Big{)}.\]
Since \(|F|\lesssim|k_{(3)}|^{2s},|G|\lesssim|k_{(3)}|^{2}\) and
\[\Big{|}\frac{|k_{1}|^{2s}-|k_{2}|^{2s}}{|k_{1}|^{2}-|k_{2}|^{2}}\Big{|} \lesssim|k_{(1)}|^{2s-2},\]
we deduce that
\[|\Psi(\vec{k})|\lesssim\frac{|k_{(1)}|^{2s-2}|k_{(3)}|^{2}}{|\Omega(\vec{k})| }\Big{(}1-\chi\Big{(}\frac{\Omega(\vec{k})}{\lambda(\vec{k})^{\delta_{0}}} \Big{)}\Big{)}+|k_{(1)}|^{2s-2}\Big{|}\chi\Big{(}\frac{\Omega(\vec{k})}{ \lambda(\vec{k})^{\delta_{0}}}\Big{)}-\chi\Big{(}\frac{|k_{1}|^{2}-|k_{2}|^{2 }}{(|k_{1}|^{2}+|k_{2}|^{2})^{\delta_{0}/2}}\Big{)}\Big{|}. \tag{7.4}\]
The first term on the right hand side of (7.4) satisfies the claimed bound. It remains to evaluate the second one. By the mean value theorem, there exists \(\alpha\in[0,1]\) such that
\[\chi\Big{(}\frac{\Omega(\vec{k})}{\lambda(\vec{k})^{\delta_{0}}}\Big{)}-\chi \Big{(}\frac{|k_{1}|^{2}-|k_{2}|^{2}}{(|k_{1}|^{2}+|k_{2}|^{2})^{\delta_{0}/2 }}\Big{)}=\chi^{\prime}(\xi_{\alpha})\Big{(}\frac{\Omega(\vec{k})}{\lambda( \vec{k})^{\delta_{0}}}-\frac{|k_{1}|^{2}-|k_{2}|^{2}}{(|k_{1}|^{2}+|k_{2}|^{2 })^{\delta_{0}/2}}\Big{)},\]
where
\[\xi_{\alpha}=\frac{\Omega(\vec{k})}{\lambda(\vec{k})^{\delta_{0}}}-\alpha\Big{(} \frac{\Omega(\vec{k})}{\lambda(\vec{k})^{\delta_{0}}}-\frac{|k_{1}|^{2}-|k_{2} |^{2}}{(|k_{1}|^{2}+|k_{2}|^{2})^{\delta_{0}/2}}\Big{)}.\]
Thanks to the support properties of \(\chi^{\prime}\), when the second term on the right hand side of (7.4) is non zero, we must have \(|\xi_{\alpha}|\sim 1\). In this case, a direct computation yields
\[\frac{\Omega(\vec{k})}{\lambda(\vec{k})^{\delta_{0}}}-\frac{|k_{1}|^{2}-|k_{2 }|^{2}}{(|k_{1}|^{2}+|k_{2}|^{2})^{\delta_{0}/2}}=\frac{\Omega(\vec{k})}{ \lambda(\vec{k})^{\delta_{0}}}\cdot\frac{(|k_{1}|^{2}+|k_{2}|^{2})^{\delta_{0} /2}-\lambda(\vec{k})^{\delta_{0}}}{(|k_{1}|^{2}+|k_{2}|^{2})^{\delta_{0}/2}}+ \frac{G}{(|k_{1}|^{2}+|k_{2}|^{2})^{\delta_{0}/2}}.\]
Since \(|(|k_{1}|^{2}+|k_{2}|^{2})^{\delta_{0}/2}-\lambda(\vec{k})^{\delta_{0}}|\lesssim|k_{(3)}|^{\delta_{0}}\), we deduce that
\[\Big{|}\frac{\Omega(\vec{k})}{\lambda(\vec{k})^{\delta_{0}}}-\xi_{\alpha}\Big{|} \lesssim\alpha\frac{|k_{(3)}|^{\delta_{0}}}{\lambda(\vec{k})^{\delta_{0}}} \Big{|}\frac{\Omega(\vec{k})}{\lambda(\vec{k})^{\delta_{0}}}-\xi_{\alpha} \Big{|}+\alpha\frac{|k_{(3)}|^{\delta_{0}}}{\lambda(\vec{k})^{\delta_{0}}}| \xi_{\alpha}|+\frac{|G|}{\lambda(\vec{k})^{\delta_{0}}}.\]
As \(|G|\lesssim|k_{(3)}|^{2}\lesssim\lambda(\vec{k})^{2\theta}\ll\lambda(\vec{k})^{\delta_{0}}\) for large enough \(|k_{(1)}|\), we deduce that
\[\Big{|}\frac{\Omega(\vec{k})}{\lambda(\vec{k})^{\delta_{0}}}-\xi_{\alpha}\Big{|} \lesssim|k_{(3)}|^{2}\lambda(\vec{k})^{-\delta_{0}}\lesssim\lambda(\vec{k})^{- \delta_{0}+2\theta}\ll 1.\]
Since \(|\Omega(\vec{k})|\gtrsim|k_{(1)}|^{\delta_{0}}\), the second term on the right hand side of (7.4) is bounded by
\[\mathbf{1}_{|\Omega(\vec{k})|\sim|k_{(1)}|^{\delta_{0}}}\cdot\frac{|k_{(1)}|^{ 2s-2}|k_{(3)}|^{2}}{\lambda(\vec{k})^{\delta_{0}}}\sim\mathbf{1}_{|\Omega( \vec{k})|\sim|k_{(1)}|^{\delta_{0}}}\cdot\frac{|k_{(1)}|^{2s-2}|k_{(3)}|^{2}}{ |\Omega(\vec{k})|}.\]
This completes the proof of Lemma 7.4.
The key observation is that
\[\sum_{\Lambda_{1,1}}(\chi_{N}(k_{1})^{2}|w_{k_{2}}|^{2}-\chi_{N}( k_{2})^{2}|w_{k_{1}}|^{2})\Big{[}\frac{\psi_{2s}(\vec{k})}{\Omega(\vec{k})} \Big{(}1-\chi\Big{(}\frac{\Omega(\vec{k})}{\lambda(\vec{k})^{\delta_{0}}} \Big{)}\Big{)}-\Psi(\vec{k})\Big{]}\] \[\qquad\qquad\times w_{k_{3}}\overline{w}_{k_{4}}w_{k_{5}}\overline {w}_{k_{6}}\cdot\overline{w}_{p_{2}}w_{p_{3}}\overline{w}_{p_{4}}w_{p_{5}}\] \[= \sum_{k_{1},k_{2}}(\chi_{N}(k_{1})^{2}|w_{k_{2}}|^{2}-\chi_{N}(k_ {2})^{2}|w_{k_{1}}|^{2})\frac{|k_{1}|^{2s}-|k_{2}|^{2s}}{|k_{1}|^{2}-|k_{2}|^{ 2}}\Big{(}1-\chi\Big{(}\frac{|k_{1}|^{2}-|k_{2}|^{2}}{(|k_{1}|^{2}+|k_{2}|^{2} )^{\delta_{0}/2}}\Big{)}\Big{)}\] \[\qquad\qquad\times\Big{|}\sum_{\begin{subarray}{c}k_{3}-k_{4}+k_{ 5}-k_{6}=k_{2}-k_{1}\\ |k_{3}|+|k_{4}|+|k_{5}|+|k_{6}|\leq|k_{1}|^{\theta}+|k_{2}|^{\theta}\end{subarray}}w _{k_{3}}\overline{w}_{k_{4}}w_{k_{5}}\overline{w}_{k_{6}}\Big{|}^{2}\]
is real-valued and it disappears when taking the imaginary part.
Therefore it suffices to show that there exists \(\beta=\beta(s,\theta,\delta_{0})\in(0,1)\), such that
\[\|J(w)\mathbf{1}_{\|w\|_{H^{\sigma}}\leq R}\|_{L^{p}(d\mu_{s})}\lesssim p^{ \beta}R^{10}, \tag{7.5}\]
where
\[J(w):=\sum_{\Lambda_{1,1}}\Psi(\vec{k})(\chi_{N}(k_{1})^{2}|w_{k_{2}}|^{2}- \chi_{N}(k_{2})^{2}|w_{k_{1}}|^{2})w_{k_{3}}\overline{w}_{k_{4}}w_{k_{5}} \overline{w}_{k_{6}}\cdot\overline{w}_{p_{2}}w_{p_{3}}\overline{w}_{p_{4}}w_{p _{5}}. \tag{7.6}\]
Since in the above expression, the contribution of \(k_{1}=k_{2}\) is zero, below we always implicitly assume that \(k_{1}\neq k_{2}\).
For dyadic numbers \(N_{1},N_{2},N_{3},N_{4},N_{5},N_{6},M_{2},M_{3},M_{4},M_{5}\), we decompose accordingly \(w_{k_{j}}^{N_{j}}=w_{k_{j}}\mathbf{1}_{|k_{j}|\sim N_{j}}\) and \(w_{p_{i}}^{M_{i}}=w_{p_{i}}\mathbf{1}_{|p_{i}|\sim M_{i}}\). It suffices to show that
\[\|J_{N_{1},\cdots,N_{6};M_{2},\cdots,M_{5}}(w)\mathbf{1}_{\|w\|_{H^{\sigma}} \leq R}\|_{L^{p}(d\mu_{s})}\lesssim p^{\beta}N_{(1)}^{-\gamma}R^{10} \tag{7.7}\]
for some \(\beta\in(0,1)\) and \(\gamma>0\), where \(J_{N_{1},\cdots,M_{5}}\) is obtained from \(J(w)\) by replacing the inputs \(w_{k_{j}},w_{p_{i}}\) with \(w_{k_{j}}^{N_{j}},w_{p_{i}}^{M_{i}}\). By definition of \(\Lambda_{1,1}\), we have \(N_{1}\sim N_{2}\) and
\[N_{3}+\cdots+N_{6}+M_{2}+\cdots+M_{5}\lesssim N_{1}^{\theta}.\]
By Lemma 7.4 and the fact that \(|\Omega(\vec{k})|\gtrsim N_{(1)}^{\delta_{0}}>N_{(1)}^{2\theta}\gtrsim N_{(3)}^{2}\), a crude deterministic estimate leads to
\[|J_{N_{1},\cdots,M_{5}}(w)|\lesssim \sum_{\Lambda_{1,1}}\frac{N_{1}^{2(s-1)}N_{(3)}^{2}}{|\Omega(\vec {k})|}\mathbf{1}_{k_{1}\neq k_{2}}(|w_{k_{2}}^{N_{2}}|^{2}+|w_{k_{1}}^{N_{1}}| ^{2})|w_{k_{3}}^{N_{3}}\cdots w_{k_{6}}^{N_{6}}|\cdot|w_{p_{2}}^{M_{2}}\cdots w _{p_{5}}^{M_{5}}|\] \[\lesssim \sum_{\Lambda_{1,1}}N_{1}^{2(s-1)}\mathbf{1}_{k_{1}\neq k_{2}}(|w _{k_{2}}^{N_{2}}|^{2}+|w_{k_{1}}^{N_{1}}|^{2})|w_{k_{3}}^{N_{3}}\cdots w_{k_{6} }^{N_{6}}|\cdot|w_{p_{2}}^{M_{2}}\cdots w_{p_{5}}^{M_{5}}|,\]
and the right-hand side can be bounded by
\[N_{1}^{2(s-1)}(\|w_{k_{1}}^{N_{1}}\|_{l^{2}}^{2}+\|w_{k_{2}}^{N_ {2}}\|_{l^{2}}^{2})\|w_{k_{3}}^{N_{3}}\|_{l^{1}}\cdots\|w_{k_{6}}^{N_{6}}\|_{l ^{1}}\|w_{p_{2}}^{M_{2}}\|_{l^{1}}\cdots\|w_{p_{5}}^{M_{5}}\|_{l^{1}}\] \[\lesssim N_{1}^{2(s-1)}N_{1}^{-2\sigma}(\|w_{k_{1}}^{N_{1}}\|_{h^{ \sigma}}^{2}+\|w_{k_{2}}^{N_{2}}\|_{h^{\sigma}}^{2})\|w_{k_{3}}^{N_{3}}\|_{h^{ \sigma}}\cdots\|w_{p_{5}}^{M_{5}}\|_{h^{\sigma}}\cdot(N_{3}\cdots M_{5})^{- \sigma+\frac{3}{2}}\] \[\lesssim N_{1}^{2(s-1-\sigma)}(N_{3}\cdots M_{5})^{-\sigma+\frac{3}{2}}\|w \|_{H^{\sigma}}^{10}.\]
Therefore, we obtain the first bound
\[|J_{N_{1},\cdots,M_{5}}(w)|\mathbf{1}_{\|w\|_{H^{\sigma}}\leq R} \lesssim N_{1}^{2(s-1-\sigma)}(N_{3}\cdots M_{5})^{-(\sigma-\frac{3}{2})}R^{ 10}. \tag{7.8}\]
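The passage from \(l^{1}\) to weighted \(l^{2}\) norms used above, and repeatedly below, is the elementary Cauchy-Schwarz count on a dyadic block in \(\mathbb{Z}^{3}\):
\[\|w^{N}\|_{l^{1}}\leq\Big(\sum_{|k|\sim N}1\Big)^{\frac{1}{2}}\|w^{N}\|_{l^{2}}\lesssim N^{\frac{3}{2}}\cdot N^{-\sigma}\|w^{N}\|_{h^{\sigma}},\]
which is the source of the factors \((N_{3}\cdots M_{5})^{-(\sigma-\frac{3}{2})}\).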
As \(\sigma<s-\frac{3}{2}\), we need to improve the above bound using Wiener chaos estimates. We further split
\[J_{N_{1},\cdots,M_{5}}(w):=\widetilde{J}_{N_{1},\cdots,M_{5}}(w) +R_{N_{1},\cdots,M_{5}}(w),\]
where
\[\widetilde{J}_{N_{1},\cdots,M_{5}}(w):= \sum_{\Lambda_{1,1}}\Psi(\vec{k})\Big{[}\chi_{N}(k_{1})^{2}\big{(} |w_{k_{2}}^{N_{2}}|^{2}-\frac{1}{1+|k_{2}|^{2s}}\big{)}-\chi_{N}(k_{2}) ^{2}\big{(}|w_{k_{1}}^{N_{1}}|^{2}-\frac{1}{1+|k_{1}|^{2s}}\big{)}\Big{]}\] \[\times w_{k_{3}}^{N_{3}}\overline{w}_{k_{4}}^{N_{4}}w_{k_{5}}^{N_ {5}}\overline{w}_{k_{6}}^{N_{6}}\cdot\overline{w}_{p_{2}}^{M_{2}}w_{p_{3}}^{M_ {3}}\overline{w}_{p_{4}}^{M_{4}}w_{p_{5}}^{M_{5}}, \tag{7.9}\]
and
\[R_{N_{1},\cdots,M_{5}}:=\sum_{\Lambda_{1,1}}\Psi(\vec{k})\Big{(} \frac{\chi_{N}(k_{1})^{2}}{1+|k_{2}|^{2s}}-\frac{\chi_{N}(k_{2})^{2}}{ 1+|k_{1}|^{2s}}\Big{)}w_{k_{3}}^{N_{3}}\overline{w}_{k_{4}}^{N_{4}}w_{k _{5}}^{N_{5}}\overline{w}_{k_{6}}^{N_{6}}\cdot\overline{w}_{p_{2}}^{M_{2}}w_{p_ {3}}^{M_{3}}\overline{w}_{p_{4}}^{M_{4}}w_{p_{5}}^{M_{5}}.\]
For \(|k_{1}|\sim|k_{2}|\sim N_{1}\sim N_{2}\), by the mean-value theorem and the fact that \(\chi_{N}(k_{1})-\chi_{N}(k_{2})\) takes the form \(\widetilde{\chi}(|k_{1}|^{2}/N^{2})-\widetilde{\chi}(|k_{2}|^{2}/N^{2})\), we have
\[\Big{|}\frac{\chi_{N}(k_{1})^{2}}{1+|k_{2}|^{2s}}-\frac{ \chi_{N}(k_{2})^{2}}{1+|k_{1}|^{2s}}\Big{|}\lesssim\frac{|\Omega(\vec{k} )|+|k_{(3)}|^{2}}{|k_{(1)}|^{2(s+1)}}.\]
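To verify this, one can decompose
\[\frac{\chi_{N}(k_{1})^{2}}{1+|k_{2}|^{2s}}-\frac{\chi_{N}(k_{2})^{2}}{1+|k_{1}|^{2s}}=\frac{\chi_{N}(k_{1})^{2}-\chi_{N}(k_{2})^{2}}{1+|k_{2}|^{2s}}+\chi_{N}(k_{2})^{2}\Big(\frac{1}{1+|k_{2}|^{2s}}-\frac{1}{1+|k_{1}|^{2s}}\Big),\]
and apply the mean-value theorem to \(\widetilde{\chi}\) and to \(t\mapsto(1+t^{s})^{-1}\) in the variable \(t=|k|^{2}\); since \(|k_{1}|\sim|k_{2}|\sim N_{1}\) (and \(\widetilde{\chi}^{\prime}\) is supported where its argument is of size one), both terms are bounded by \(||k_{1}|^{2}-|k_{2}|^{2}|\cdot|k_{(1)}|^{-2(s+1)}\lesssim(|\Omega(\vec{k})|+|k_{(3)}|^{2})|k_{(1)}|^{-2(s+1)}\), using \(||k_{1}|^{2}-|k_{2}|^{2}|=|\Omega(\vec{k})-G|\) as in the proof of Lemma 7.4.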
Thus the remainder term \(R_{N_{1},\cdots,M_{5}}\) can be estimated by the previous deterministic manipulation (recall that \(|\Omega(\vec{k})|\gtrsim N_{(1)}^{\delta_{0}}>N_{(1)}^{2\theta}\) in the sum)
\[N_{1}^{2(s-1)+2\theta-2(s+1)}\sum_{\Lambda_{1,1}}\mathbf{1}_{k_{1 }\neq k_{2}}\frac{|\Omega(\vec{k})|+N_{1}^{2\theta}}{|\Omega(\vec{k})|}|w_{k_{ 3}}^{N_{3}}\cdots w_{p_{5}}^{M_{5}}|\] \[\lesssim N_{1}^{-4+2\theta}\cdot N_{1}^{3}\|w_{k_{3}}^{N_{3}}\|_{l^{1}} \cdots\|w_{p_{5}}^{M_{5}}\|_{l^{1}}\] \[\lesssim N_{1}^{-1+2\theta}\|w_{k_{3}}^{N_{3}}\|_{h^{\sigma}}\cdots\|w_{ p_{5}}^{M_{5}}\|_{h^{\sigma}}(N_{3}\ldots M_{5})^{-\sigma+\frac{3}{2}}\] \[\lesssim N_{1}^{-1+2\theta}(N_{3}\cdots M_{5})^{-(\sigma-\frac{3}{2})}\| w\|_{H^{\sigma}}^{8}\lesssim N_{(1)}^{-1+2\theta}R^{8}, \tag{7.10}\]
which is conclusive as long as \(\theta<\frac{1}{2}\) and \(\sigma>\frac{3}{2}\).
We next estimate \(\widetilde{J}_{N_{1},\cdots,M_{5}}\). We will not make use of the cancellation in the difference
\[\frac{\chi_{N}(k_{1})^{2}}{1+|k_{2}|^{2s}}-\frac{\chi_{N}(k_{2})^{2}}{ 1+|k_{1}|^{2s}}\]
so we treat the contribution of each term separately and in the same manner.
Let \(\mathcal{B}_{\ll N_{1}}\) be the \(\sigma\)-algebra generated by Gaussians \(g_{k_{j}},|k_{j}|\ll N_{1}\) and \(\mathbf{P}_{\ll N_{1}}\) the frequency projector to \(|k|\ll N_{1}\). In particular, \(g_{k_{1}},g_{k_{2}}\) for \(|k_{1}|\sim N_{1},|k_{2}|\sim N_{2}\) are independent of the \(\sigma\)-algebra \(\mathcal{B}_{\ll N_{1}}\). We have
\[\|\widetilde{J}_{N_{1},\cdots,M_{5}}(w)\mathbf{1}_{\|w\|_{H^{\sigma}}\leq R}\| _{L^{p}(d\mu_{s})}^{p}\leq\mathbb{E}^{\mu_{s}}[\mathbb{E}^{\mu_{s}}[|\widetilde {J}_{N_{1},\cdots,M_{5}}(w)\mathbf{1}_{\|\mathbf{P}_{\ll N_{1}}w\|_{H^{\sigma} }\leq R}|^{p}|\mathcal{B}_{\ll N_{1}}]].\]
As \(\mathbf{P}_{\ll N_{1}}w,w_{k_{3}}^{N_{3}},\cdots,w_{p_{5}}^{M_{5}}\) are \(\mathcal{B}_{\ll N_{1}}\)-measurable, by the Wiener chaos estimate conditional to \(\mathcal{B}_{\ll N_{1}}\),
\[\Big{(}\mathbb{E}^{\mu_{s}}[|\widetilde{J}_{N_{1},\cdots,M_{5}}(w )\mathbf{1}_{\|\mathbf{P}_{\ll N_{1}}w\|_{H^{\sigma}}\leq R}|^{p}|\mathcal{B}_ {\ll N_{1}}]\Big{)}^{\frac{1}{p}}\] \[\leq Cp\Big{(}\sum_{|k_{2}|\sim N_{2}}\frac{1}{\langle k_{2}\rangle^{ 4s}}\Big{|}\sum_{\begin{subarray}{c}|k_{1}|\sim N_{1}\\ k_{3}-k_{4}+k_{5}-k_{6}=p_{2}-p_{3}+p_{4}-p_{5}=k_{2}-k_{1}\end{subarray}}\chi_{N}(k_{1})^{2}\Psi(\vec{k})w_{k_{3}}^{N_{3}}\overline{w}_{k_{4}}^{N_{4}}w_{k_{5}}^{N_{5}}\overline{w}_{k_{6}}^{N_{6}}\cdot\overline{w}_{p_{2}}^{M_{2}}w_{p_{3}}^{M_{3}}\overline{w}_{p_{4}}^{M_{4}}w_{p_{5}}^{M_{5}}\Big{|}^{2}\Big{)}^{\frac{1}{2}}+(\cdots),\]
where \((\cdots)\) denotes the analogous term with the roles of \(k_{1}\) and \(k_{2}\) exchanged. Estimating \(\Psi(\vec{k})\) via Lemma 7.4 and counting via Lemma 4.2 as in the proof of Proposition 6.1, this yields
\[\|\widetilde{J}_{N_{1},\cdots,M_{5}}(w)\mathbf{1}_{\|w\|_{H^{\sigma}}\leq R}\|_{L^{p}(d\mu_{s})}\lesssim pN_{(1)}^{-(1-2\theta)}R^{8}.\]
Combining with (7.10) and interpolating with (7.8), we deduce that there exist constants \(C>0\) and \(\beta=\beta(s,\theta)\in(0,1)\), such that for any \(p\geq 2\) and \(R\geq 1\),
\[\|J_{N_{1},\cdots,M_{5}}(w)\mathbf{1}_{\|w\|_{H^{\sigma}}\leq R}\|_{L^{p}(d\mu_ {s})}\leq Cp^{\beta}N_{(1)}^{-1+2\theta}R^{10}, \tag{7.11}\]
which is conclusive since \(\theta<\frac{1}{3}\).
\(\bullet\)**Estimate for \(j=2\):** It suffices to estimate \(\operatorname{Im}\mathcal{S}_{1,2}(w)\). Recall that
\[\mathcal{S}_{1,2}(w):=\sum_{\Lambda_{1,2}}\chi_{N}(k_{1})^{2}|w_{k_{3}}|^{2} \Big{(}1-\chi\Big{(}\frac{\Omega(\vec{k})}{\lambda(\vec{k})^{\delta_{0}}} \Big{)}\Big{)}\frac{\psi_{2s}(\vec{k})}{\Omega(\vec{k})}\overline{w}_{k_{2}} \overline{w}_{k_{4}}w_{k_{5}}\overline{w}_{k_{6}}\cdot w_{p_{1}}w_{p_{3}} \overline{w}_{p_{4}}w_{p_{5}},\]
and on \(\Lambda_{1,2}\), \(|\Omega(\vec{k})|\sim|k_{(1)}|^{2}\gg\lambda(\vec{k})^{\delta_{0}}\), thus
\[\frac{\psi_{2s}(\vec{k})}{\Omega(\vec{k})}\Big{(}1-\chi\Big{(}\frac{\Omega( \vec{k})}{\lambda(\vec{k})^{\delta_{0}}}\Big{)}\Big{)}=\frac{\psi_{2s}(\vec{k })}{\Omega(\vec{k})}.\]
Set
\[\widetilde{\Psi}(\vec{k}):=\frac{\psi_{2s}(\vec{k})}{\Omega(\vec{k})}-\frac{| k_{1}|^{2s}+|k_{3}|^{2s}}{|k_{1}|^{2}+|k_{3}|^{2}}.\]
By the mean-value theorem, we easily deduce:
**Lemma 7.5**.: _On \(\Lambda_{1,2}\) defined in (5.5), for sufficiently large \(|k_{(1)}|\), we have_
\[|\widetilde{\Psi}(\vec{k})|\lesssim\frac{|k_{(1)}|^{2s-2}|k_{(3)}|^{2}}{| \Omega(\vec{k})|}\mathbf{1}_{|\Omega(\vec{k})|\sim|k_{(1)}|^{2}}.\]
Since
\[\sum_{k_{1},k_{3}}\chi_{N}(k_{1})^{2}|w_{k_{3}}|^{2}\frac{|k_{1}|^{2s}+|k_{3}| ^{2s}}{|k_{1}|^{2}+|k_{3}|^{2}}\sum_{\begin{subarray}{c}k_{2}+k_{4}-k_{5}+k_{ 6}=k_{1}+k_{3}\\ p_{1}+p_{3}-p_{4}+p_{5}=k_{1}+k_{3}\\ |k_{2}|+|k_{4}|+|k_{5}|+|k_{6}|\leq|k_{1}|^{\theta}+|k_{3}|^{\theta}\\ |p_{1}|+|p_{3}|+|p_{4}|+|p_{5}|\leq|k_{1}|^{\theta}+|k_{3}|^{\theta}\end{subarray}} \overline{w}_{k_{2}}\overline{w}_{k_{4}}w_{k_{5}}\overline{w}_{k_{6}}\cdot w _{p_{1}}w_{p_{3}}\overline{w}_{p_{4}}w_{p_{5}}\]
equals
\[\sum_{k_{1},k_{3}}\chi_{N}(k_{1})^{2}|w_{k_{3}}|^{2}\frac{|k_{1}|^{2s}+|k_{3}| ^{2s}}{|k_{1}|^{2}+|k_{3}|^{2}}\Big{|}\sum_{\begin{subarray}{c}k_{2}+k_{4}-k_{5} +k_{6}=k_{1}+k_{3}\\ |k_{2}|+|k_{4}|+|k_{5}|+|k_{6}|\leq|k_{1}|^{\theta}+|k_{3}|^{\theta}\end{subarray}} \overline{w}_{k_{2}}\overline{w}_{k_{4}}w_{k_{5}}\overline{w}_{k_{6}}\Big{|}^{ 2},\]
which is real-valued, we deduce that
\[\operatorname{Im}(\mathcal{S}_{1,2}(w))=I(w):=\sum_{\Lambda_{1,2}}\chi_{N}(k_{ 1})^{2}|w_{k_{3}}|^{2}\widetilde{\Psi}(\vec{k})\overline{w}_{k_{2}}\overline{w} _{k_{4}}w_{k_{5}}\overline{w}_{k_{6}}\cdot w_{p_{1}}w_{p_{3}}\overline{w}_{p_{ 4}}w_{p_{5}}. \tag{7.12}\]
As for the case \(j=1\), we will split the sum into dyadic pieces. For dyadic numbers \(N_{j},M_{j}\), we decompose accordingly \(w_{k_{j}}^{N_{j}}=w_{k_{j}}\mathbf{1}_{|k_{j}|\sim N_{j}}\) and \(w_{p_{i}}^{M_{i}}=w_{p_{i}}\mathbf{1}_{|p_{i}|\sim M_{i}}\). It suffices to show that, for some \(\beta\in(0,1)\),
\[\|I_{N_{1},\cdots,N_{6};M_{1},M_{3},M_{4},M_{5}}(w)\mathbf{1}_{\|w\|_{H^{\sigma}} \leq R}\|_{L^{p}(d\mu_{s})}\lesssim p^{\beta}N_{(1)}^{-\frac{1}{100}}R^{10}, \tag{7.13}\]
where \(I_{N_{1},\cdots,M_{5}}\) is obtained from \(I(w)\) by replacing the inputs \(w_{k_{j}},w_{p_{i}}\) with \(w_{k_{j}}^{N_{j}},w_{p_{i}}^{M_{i}}\).
It turns out that deterministic estimates alone suffice. Indeed, since \(|\Omega(\vec{k})|\sim N_{(1)}^{2}\), by Lemma 7.5 we have
\[|I_{N_{1},\cdots,M_{5}}(w)|\lesssim\frac{N_{(1)}^{2s-2}N_{(3)}^{2}}{N_{(1)}^{2 }}\,\sum_{k_{3}}|w_{k_{3}}^{N_{3}}|^{2}\sum_{k_{2},k_{4},k_{5},k_{6}}\sum_{p_{1 },p_{3},p_{4},p_{5}}|w_{k_{2}}^{N_{2}}w_{k_{4}}^{N_{4}}w_{k_{5}}^{N_{5}}w_{k_{6 }}^{N_{6}}w_{p_{1}}^{M_{1}}w_{p_{3}}^{M_{3}}w_{p_{4}}^{M_{4}}w_{p_{5}}^{M_{5}}|,\]
which is bounded by
\[N_{(1)}^{2(s-2)}N_{(3)}^{2}\|w_{k_{3}}^{N_{3}}\|_{l^{2}}^{2}\|w_{ k_{2}}^{N_{2}}\|_{l^{1}}\cdots\|w_{k_{6}}^{N_{6}}\|_{l^{1}}\|w_{p_{1}}^{M_{1}} \|_{l^{1}}\cdots\|w_{p_{5}}^{M_{5}}\|_{l^{1}}\] \[\lesssim N_{(1)}^{2(s-2-\sigma)+2\theta}(N_{2}N_{4}N_{5}N_{6}M_{1}M_{3}M_{ 4}M_{5})^{-(\sigma-\frac{3}{2})}\|w\|_{H^{\sigma}}^{10}\] \[\leq N_{(1)}^{2(s-2-\sigma)+2\theta}\|w\|_{H^{\sigma}}^{10},\]
provided that \(\sigma>\frac{3}{2}\), which is conclusive when \(\theta<\frac{1}{2}\) and \(\sigma\) close enough to \(s-\frac{3}{2}\). This completes the proof of Proposition 7.1.
## 8. Energy estimate III: Remainders in the second generation
In this section, we will estimate \(\mathcal{R}_{1,3}(w),\mathcal{R}_{2,3}(w)\). More precisely, we have the following statement:
**Proposition 8.1**.: _Let \(\theta<\frac{1}{3}\), close enough to \(\frac{1}{3}\), and \(\delta_{0}\in(2\theta,\frac{2}{3})\), close enough to \(\frac{2}{3}\). There exist \(C>0\) and \(\beta=\beta(\theta,s)\in(0,1)\), such that for \(R\geq 1\) and \(p\in[2,\infty)\), we have_
\[\|\mathcal{R}_{1,3}(w)\mathbf{1}_{B_{R}^{H^{\sigma}}}(w)\|_{L^{p}(d\mu_{s})}+\| \mathcal{R}_{2,3}(w)\mathbf{1}_{B_{R}^{H^{\sigma}}}(w)\|_{L^{p}(d\mu_{s})}\leq Cp ^{\beta}R^{10}.\]
Since the estimate for \(\mathcal{R}_{2,3}(w)\) is similar, we only carry it out for \(\mathcal{R}_{1,3}(w)\). Recall that in the expression of \(\mathcal{R}_{1,3}(w)\) we distinguish three types of contributions in the decomposition of the sum
\[\sum_{\begin{subarray}{c}k_{1}-k_{2}+k_{3}-k_{4}+k_{5}-k_{6}=0\\ k_{1}=p_{1}-p_{2}+p_{3}-p_{4}+p_{5}\end{subarray}}(\cdots).\]
Recall that \(k_{(1)}\cdots,k_{(10)}\) is a rearrangement of leaves \(p_{1},p_{2},p_{3},p_{4},p_{5},k_{2},k_{3},k_{4},k_{5},k_{6}\) such that \(|k_{(1)}|\geq|k_{(2)}|\geq\cdots\geq|k_{(10)}|\).
* Type A: \(\sum_{j=3}^{10}|k_{(j)}|>|k_{(1)}|^{\theta}+|k_{(2)}|^{\theta}\).
* Type B: \(\sum_{j=3}^{10}|k_{(j)}|\leq|k_{(1)}|^{\theta}+|k_{(2)}|^{\theta}\) and \(\{k_{(1)},k_{(2)}\}\subset\{k_{2},k_{3},k_{4},k_{5},k_{6}\}\) or \(\{k_{(1)},k_{(2)}\}\subset\{p_{1},p_{2},p_{3},p_{4},p_{5}\}\).
* Type C: \(\sum_{j=3}^{10}|k_{(j)}|\leq|k_{(1)}|^{\theta}+|k_{(2)}|^{\theta}\), \(k_{(1)}\neq k_{(2)}\) and \[k_{(1)}\in\{k_{2},k_{3},k_{4},k_{5},k_{6}\},\quad k_{(2)}\in\{p_{1},p_{2},p_{3}, p_{4},p_{5}\}\] or \[k_{(2)}\in\{k_{2},k_{3},k_{4},k_{5},k_{6}\},\quad k_{(1)}\in\{p_{1},p_{2},p_{3}, p_{4},p_{5}\},\] and \(k_{(1)},k_{(2)}\) have different signatures.
Let us denote by \(\Lambda_{A}\), \(\Lambda_{B}\), \(\Lambda_{C}\) the sets of indices \(k_{1},\dots,k_{6},p_{1},\dots,p_{5}\) that satisfy the linear constraints
\[k_{1}-k_{2}+k_{3}-k_{4}+k_{5}-k_{6}=0,\quad k_{1}=p_{1}-p_{2}+p_{3}-p_{4}+p_{5}\]
and the conditions for Type A, B, C respectively. Furthermore, we denote \(\mathcal{R}^{(A)}_{1,3},\mathcal{R}^{(B)}_{1,3},\mathcal{R}^{(C)}_{1,3}\) the corresponding contributions to \(\mathcal{R}_{1,3}(w)\). We will need the following elementary lemma.
**Lemma 8.2**.: _Assume that \(f^{(j)}\) satisfies \(f^{(j)}_{k_{j}}\mathbf{1}_{|k_{j}|\sim M_{j}}=f^{(j)}_{k_{j}}\) for \(j=1,2,3,4,5,6\) with \(M_{j}\in 2^{\mathbb{N}}\). Then_
\[\sum_{k_{1}-k_{2}+k_{3}-k_{4}+k_{5}-k_{6}=0}\frac{\psi_{2s}(\vec{ k})}{|\Omega(\vec{k})|}\Big{(}1-\chi\Big{(}\frac{\Omega(\vec{k})}{\lambda( \vec{k})^{\delta_{0}}}\Big{)}\Big{)}\prod_{j=1}^{6}|f^{(j)}_{k_{j}}|\\ \lesssim M^{2s-2}_{(1)}\,M^{2}_{(3)}(M_{(3)}M_{(4)}M_{(5)}M_{(6) })^{\frac{3}{2}}\prod_{j=1}^{6}\|f^{(j)}\|_{l^{2}},\]
_where \(M_{(1)}\geq M_{(2)}\geq\dots\geq M_{(6)}\) is the non-increasing rearrangement of the dyadic integers \(M_{1},M_{2},\cdots,M_{6}\)._
Proof.: Since the signature of \(k_{j}\) plays no significant role in the proof, without loss of generality we assume that \(M_{1}\geq M_{2}\geq\dots\geq M_{6}\). Since \(|\psi_{2s}(\vec{k})|\lesssim M^{2s-2}_{1}(M^{2}_{3}+|\Omega(\vec{k})|)\), using the Cauchy-Schwarz inequality in the \(k_{1}\) and \(k_{2}\) summation, we obtain the bound
\[M^{2s-2}_{(1)}\,M^{2}_{(3)}\prod_{j=1}^{2}\|f^{(j)}\|_{l^{2}}\prod_{j=3}^{6} \|f^{(j)}\|_{l^{1}}.\]
It remains to use the Cauchy-Schwarz inequality to pass from \(l^{1}\) to \(l^{2}\). This completes the proof of Lemma 8.2.
**Remark 8.3**.: Since the crude bound is enough for our needs, we do not make use of the denominator \(\frac{1}{|\Omega(\vec{k})|}\) in the estimate above.
Proof of Proposition 8.1.: Since the proof consists of tedious estimates, we split it into three different parts, according to Types A, B, C. We will split the function \(w\) into dyadic pieces, and we denote \(w^{K}=\mathbf{P}_{K}w\) in the sequel, which means that \(w^{K}_{k}=\mathbf{1}_{|k|\sim K}w_{k}\).
\(\bullet\) Estimate of Type A contribution:
We decompose the expression
\[\mathcal{R}^{(A)}_{1,3}:=\sum_{\Lambda_{A}}\frac{\psi_{2s}(\vec{k})}{\Omega(\vec{ k})}\Big{(}1-\chi\Big{(}\frac{\Omega(\vec{k})}{\lambda(\vec{k})^{\delta_{0}}} \Big{)}\Big{)}\chi_{N}(k_{1})^{2}w_{p_{1}}\cdots w_{p_{5}}\overline{w}_{k_{2}} \cdots\overline{w}_{k_{6}}\]
dyadically by
\[\sum_{M_{1},\cdots,M_{6},P_{1},\cdots,P_{5}}\mathcal{R}^{(A)}_{1,3}(M_{1}, \cdots,P_{5}),\]
where \(\mathcal{R}^{(A)}_{1,3}(M_{1},\cdots,P_{5})\) is
\[\sum_{\Lambda_{A}}\frac{\psi_{2s}(\vec{k})}{\Omega(\vec{k})}\Big{(}1-\chi \Big{(}\frac{\Omega(\vec{k})}{\lambda(\vec{k})^{\delta_{0}}}\Big{)}\Big{)} \chi_{N}(k_{1})^{2}w_{p_{1}}^{P_{1}}\cdots w_{p_{5}}^{P_{5}}\overline{w}_{k_{2 }}^{M_{2}}\cdots\overline{w}_{k_{6}}^{M_{6}}\cdot\mathbf{1}_{|k_{1}|\sim M_{1 }}.\]
We denote by \(N_{(1)}\geq N_{(2)}\geq\cdots N_{(10)}\) the non-increasing rearrangement of dyadic integers
\[P_{1},P_{2},P_{3},P_{4},P_{5},M_{2},M_{3},M_{4},M_{5},M_{6}.\]
Note that the constraint \(\sum_{j=3}^{10}|k_{(j)}|>|k_{(1)}|^{\theta}+|k_{(2)}|^{\theta}\) implies that \(N_{(3)}\gtrsim N_{(1)}^{\theta}\) for non-zero terms \(\mathcal{R}^{(A)}_{1,3}(M_{1},\cdots,P_{5})\). Write
\[|\mathcal{R}^{(A)}_{1,3}(M_{1},\cdots,P_{5})|\leq\sum_{k_{1}-k_{2}+k_{3}-k_{4} +k_{5}-k_{6}=0}\frac{\psi_{2s}(\vec{k})}{|\Omega(\vec{k})|}\Big{(}1-\chi\Big{(} \frac{\Omega(\vec{k})}{\lambda(\vec{k})^{\delta_{0}}}\Big{)}\Big{)}\prod_{j=1 }^{6}|f^{(j)}_{k_{j}}|,\]
where \(f^{(j)}_{k_{j}}=w^{M_{j}}_{k_{j}}\) for \(j=2,3,4,5,6\) and
\[f^{(1)}_{k_{1}}=\sum_{p_{1}-p_{2}+p_{3}-p_{4}+p_{5}=k_{1}}\mathbf{1}_{|k_{1}| \sim M_{1}}w_{p_{1}}^{P_{1}}\cdots w_{p_{5}}^{P_{5}}.\]
Applying Lemma 8.2, we have
\[|\mathcal{R}^{(A)}_{1,3}(M_{1},\cdots,P_{5})|\lesssim M^{2s-2}_{(1)}\,M^{2}_{(3)}(M_{(3)}M_{(4)}M_{(5)}M_{(6)})^{\frac{3}{2}} \prod_{j=1}^{6}\|f^{(j)}\|_{l^{2}}\] \[\lesssim M^{2s-2}_{(1)}\,M^{2}_{(3)}(M_{(3)}M_{(4)}M_{(5)}M_{(6)})^{\frac{ 3}{2}}\|f^{(1)}\|_{l^{2}}\prod_{j=2}^{6}(M^{-\sigma}_{j}\|w^{M_{j}}\|_{H^{ \sigma}}),\]
where \(M_{(1)}\geq M_{(2)}\geq\cdots\geq M_{(6)}\) is a non-increasing rearrangement of \(M_{1},M_{2},\cdots,M_{6}\). By Cauchy-Schwarz, we have
\[\|f^{(1)}\|_{l^{2}}\lesssim(P_{(2)}P_{(3)}P_{(4)}P_{(5)})^{\frac{3}{2}}\prod_{ j=1}^{5}(P_{j}^{-\sigma}\|w^{P_{j}}\|_{H^{\sigma}}),\]
where \(P_{(1)}\geq P_{(2)}\geq P_{(3)}\geq P_{(4)}\geq P_{(5)}\) is a non-increasing rearrangement of \(P_{1},P_{2},P_{3},P_{4},P_{5}\). Thus we obtain that
\[|\mathcal{R}^{(A)}_{1,3}(M_{1},\cdots,P_{5})|\] \[\qquad\qquad\lesssim M_{(1)}^{2s-2}\,M_{(3)}^{2}(M_{(3)}M_{(4)}M_ {(5)}M_{(6)})^{\frac{3}{2}}(M_{2}\cdots M_{6})^{-\sigma}\,P_{(1)}^{-\sigma}(P_ {(2)}\cdots P_{(5)})^{\frac{3}{2}-\sigma}\|w\|_{H^{\sigma}}^{10}.\]
Since we are in the regime \(s\geq 10\) and \(\sigma\) is close to \(s-\frac{3}{2}\), we control the right hand side by
\[N_{(1)}^{2(s-1-\sigma)}\,N_{(3)}^{\frac{7}{2}-\sigma}\|w\|_{H^{\sigma}}^{10} \lesssim N_{(1)}^{2(s-1-\sigma)}\,N_{(1)}^{-\theta(\sigma-\frac{7}{2})}\|w\|_{ H^{\sigma}}^{10}.\]
For \(s\geq 10\), \(\sigma\) close to \(s-\frac{3}{2}\) and \(\theta\) close to \(\frac{1}{3}\), the last expression can be estimated by \(N_{(1)}^{-\epsilon_{0}(\theta,\sigma)}\|w\|_{H^{\sigma}}^{10}\) for some \(\epsilon_{0}(\theta,\sigma)>0\), which is conclusive.
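Before turning to Type B, let us record the counting behind the Cauchy-Schwarz bound for \(\|f^{(1)}\|_{l^{2}}\) used above: for each \(k_{1}\),
\[|f^{(1)}_{k_{1}}|^{2}\leq\#\{(p_{1},\cdots,p_{5}):p_{1}-p_{2}+p_{3}-p_{4}+p_{5}=k_{1},\ |p_{j}|\sim P_{j}\}\sum_{p_{1}-p_{2}+p_{3}-p_{4}+p_{5}=k_{1}}\prod_{j=1}^{5}|w^{P_{j}}_{p_{j}}|^{2},\]
where the cardinality is \(O\big((P_{(2)}P_{(3)}P_{(4)}P_{(5)})^{3}\big)\) (the four smallest frequencies are free and the largest is then determined); summing over \(k_{1}\) releases the constraint and yields \(\prod_{j=1}^{5}\|w^{P_{j}}\|_{l^{2}}^{2}\), whence the stated bound after using \(\|w^{P_{j}}\|_{l^{2}}\leq P_{j}^{-\sigma}\|w^{P_{j}}\|_{H^{\sigma}}\).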
\(\bullet\) Estimate of Type B contribution:
Denote \(\Lambda_{B1}\) the set of \((k_{1},\cdots,p_{5})\in\Lambda_{B}\) such that \(k_{(1)},k_{(2)}\in\{k_{2},k_{3},k_{4},k_{5},k_{6}\}\) and \(\Lambda_{B2}\) the set of \((k_{1},\cdots,p_{5})\in\Lambda_{B}\) such that \(k_{(1)},k_{(2)}\in\{p_{1},p_{2},p_{3},p_{4},p_{5}\}\), and denote by \(\mathcal{R}^{(B1)}_{1,3},\mathcal{R}^{(B2)}_{1,3}\) the corresponding multilinear expressions.
\(\bullet\)**Subcase: Contribution \(\mathcal{R}^{(B1)}_{1,3}\):** We first estimate \(\mathcal{R}^{(B1)}_{1,3}\). By symmetry of indices, we may assume that \(k_{(1)}=k_{3},k_{(2)}=k_{2}\). Then the other frequencies satisfy the constraint
\[\sum_{j=1}^{5}|p_{j}|+\sum_{j=4}^{6}|k_{j}|<|k_{2}|^{\theta}+|k_{3}|^{\theta}\]
on \(\Lambda_{B1}\). We decompose \(\mathcal{R}^{(B1)}_{1,3}\) by the dyadic sum
\[\sum_{M_{1},\cdots,M_{6},P_{1},\cdots,P_{5}}\mathcal{R}^{(B1)}_{1,3}(M_{1}, \cdots,P_{5})\]
as in the estimate for Type (A) terms. Under the constraint of \(\Lambda_{B}\) and our convention that \(\{k_{(1)},k_{(2)}\}=\{k_{3},k_{2}\}\), we must have \(M_{2}\sim M_{3}\sim N_{(1)}\) and \(\max\{M_{1},M_{4},M_{5},M_{6}\}\leq N_{(3)}\).
Note that for the pairing part \(k_{2}=k_{3}\) in \(\mathcal{R}^{(B1)}_{1,3}(M_{1},\cdots,P_{5})\), we have \(|\psi_{2s}(\vec{k})|\lesssim|k_{(3)}|^{2s}\), thus we can control it simply by
\[\|w^{M_{2}}\|_{l^{2}}^{2}\cdot\sum_{\begin{subarray}{c}k_{1}-k_{4 }+k_{5}-k_{6}=0\\ k_{1}=p_{1}-p_{2}+p_{3}-p_{4}+p_{5}\end{subarray}}|k_{(3)}|^{2s}\prod_{j=1}^{5 }|w^{P_{j}}_{p_{j}}|\prod_{j=4}^{6}|w^{M_{j}}_{k_{j}}|\] \[\lesssim N_{(1)}^{-2\sigma}N_{(3)}^{2s}\|w\|_{H^{\sigma}}^{10}\lesssim N_{(1)}^{2s\theta-2\sigma}\|w\|_{H^{\sigma}}^{10}, \tag{8.1}\]
thanks to \(\sigma>\frac{3}{2}\). As \(\theta<\frac{1}{3}\), \(s\geq 10\) and \(\sigma\) is close enough to \(s-\frac{3}{2}\), the right hand side is bounded by a negative power of \(N_{(1)}\) times \(\|w\|_{H^{\sigma}}^{10}\), which is conclusive.
It remains to consider the non-pairing contribution in \(\mathcal{R}^{(B1)}_{1,3}(M_{1},\cdots,P_{5})\). Recall that
\[\mathcal{R}^{(B1)}_{1,3}(M_{1},\cdots,P_{5})\] \[=\sum_{\begin{subarray}{c}k_{2}\neq k_{3}\\ |k_{2}|\sim M_{2},|k_{3}|\sim M_{3}\end{subarray}}\overline{w}_{k_{2}}^{M_{2} }w_{k_{3}}^{M_{3}}\sum_{\begin{subarray}{c}k_{4},k_{5},k_{6},p_{1},\cdots,p_{ 5}\\ p_{1}-\cdots+p_{5}-k_{2}+\cdots-k_{6}=0\end{subarray}}\frac{\psi_{2s}(\vec{k}) }{\Omega(\vec{k})}\Big{(}1-\chi\Big{(}\frac{\Omega(\vec{k})}{\lambda(\vec{k})^{ \delta_{0}}}\Big{)}\Big{)}\overline{w}_{k_{4}}^{M_{4}}w_{k_{5}}^{M_{5}} \overline{w}_{k_{6}}^{M_{6}}w_{p_{1}}^{P_{1}}\cdots w_{p_{5}}^{P_{5}}.\]
Denote \(\mathcal{B}_{\ll M_{2}}\) the \(\sigma\)-algebra generated by \((g_{k})_{|k|\leq M_{2}/100}\). Without loss of generality, we assume that \(M_{2}\sim N_{(1)}\) is large enough such that \(N_{(3)}\lesssim N_{(1)}^{\theta}\ll\frac{M_{2}}{100}\). Consequently, with respect to \(\mu_{s}\), \(w^{M_{4}},w^{M_{5}},w^{M_{6}},w^{P_{1}},\cdots,w^{P_{5}}\) are independent of \(w^{M_{2}},w^{M_{3}}\).
By conditional Wiener chaos estimate, we have
\[\|\mathcal{R}^{(B1)}_{1,3}(M_{1},\cdots,P_{5})\mathbf{1}_{B_{R}^{ H^{\sigma}}}(w)\|_{L^{p}(d\mu_{s})}\leq \|\mathcal{R}^{(B1)}_{1,3}(M_{1},\cdots,P_{5})\mathbf{1}_{B_{R}^{ H^{\sigma}}}(\mathbf{P}_{\ll M_{2}}w)\|_{L^{p}(d\mu_{s})}\] \[= \big{\|}\|\mathcal{R}^{(B1)}_{1,3}(M_{1},\cdots,P_{5})\mathbf{1}_ {B_{R}^{H^{\sigma}}}(\mathbf{P}_{\ll M_{2}}w)\|_{L^{p}(d\mu_{s}|\mathcal{B}_{ \ll M_{2}})}\big{\|}_{L^{p}(d\mu_{s})}\] \[\lesssim p\big{\|}\|\mathcal{R}^{(B1)}_{1,3}(M_{1},\cdots,P_{5})\|_{L^{2}(d \mu_{s}|\mathcal{B}_{\ll M_{2}})}\cdot\mathbf{1}_{B_{R}^{H^{\sigma}}}(\mathbf{P} _{\ll M_{2}}w)\big{\|}_{L^{p}(d\mu_{s})}.\]
It suffices to show that
\[\|\mathcal{R}^{(B1)}_{1,3}(M_{1},\cdots,P_{5})\|_{L^{2}(d\mu_{s}| \mathcal{B}_{\ll M_{2}})}\lesssim N_{(1)}^{-\frac{1}{2}}\|w\|_{H^{\sigma}}^{8}. \tag{8.2}\]
Indeed, the above estimate yields
\[\|\mathcal{R}^{(B1)}_{1,3}(M_{1},\cdots,P_{5})\mathbf{1}_{B_{R}^{ H^{\sigma}}}(w)\|_{L^{p}(d\mu_{s})}\lesssim pN_{(1)}^{-\frac{1}{2}}R^{8}.\]
Since we have left a negative power of \(N_{(1)}\), by interpolating with the crude deterministic bound which is of the form \(N_{(1)}^{O(1)}\), we obtain the desired estimate.
Now we prove (8.2). Thanks to the fact that \(k_{2}\neq k_{3}\), we deduce that3
Footnote 3: In the summation below, we implicitly assume that \(|k_{2}|\sim M_{2},|k_{3}|\sim M_{3}\).
\[\|\mathcal{R}^{(B1)}_{1,3}(M_{1},\cdots,P_{5})\|_{L^{2}(d\mu_{s}| \mathcal{B}_{\ll M_{2}})}\] \[\lesssim(M_{2}M_{3})^{-s}\Big{(}\sum_{k_{2}\neq k_{3}}\Big{|}\sum_{\begin{subarray}{c}k_{4},k_{5},k_{6},p_{1},\cdots,p_{5}\\ p_{1}-\cdots+p_{5}-k_{2}+\cdots-k_{6}=0\end{subarray}}\frac{\psi_{2s}(\vec{k})}{\Omega(\vec{k})}\Big{(}1-\chi\Big{(}\frac{\Omega(\vec{k})}{\lambda(\vec{k})^{\delta_{0}}}\Big{)}\Big{)}\overline{w}_{k_{4}}^{M_{4}}w_{k_{5}}^{M_{5}}\overline{w}_{k_{6}}^{M_{6}}w_{p_{1}}^{P_{1}}\cdots w_{p_{5}}^{P_{5}}\Big{|}^{2}\Big{)}^{\frac{1}{2}}\] \[\lesssim N_{(1)}^{-2s}\Big{(}\prod_{j=4}^{10}\|w^{N_{(j)}}\|_{L^{2}} \Big{)}\cdot\Big{(}\sum_{\begin{subarray}{c}k_{2}\neq k_{3},k_{4},k_{5},k_{6},p _{1},\cdots,p_{5}\\ p_{1}-\cdots+p_{5}-k_{2}+\cdots-k_{6}=0\end{subarray}}\frac{N_{(1)}^{4(s-1)}(N_{ (3)}^{4}+|\Omega(\vec{k})|^{2})}{1+|\Omega(\vec{k})|^{2}}|w_{k_{(3)}}^{N_{(3) }}|^{2}\Big{)}^{\frac{1}{2}}.\]
By Lemma 4.2, the last sum on the right hand side can be estimated as
\[\sum_{k_{4},k_{5},k_{6},p_{1},\cdots,p_{5}}|w^{N_{(3)}}_{k_{(3)}}|^{2 }\sum_{\begin{subarray}{c}k_{2}\neq k_{3}\\ k_{2}-k_{3}=p_{1}-\cdots-p_{5}-k_{4}+k_{5}-k_{6}\end{subarray}}\frac{N^{4(s-1)} _{(1)}(N^{4}_{(3)}+|\Omega(\vec{k})|^{2})}{1+|\Omega(\vec{k})|^{2}}\] \[\lesssim \sum_{k_{4},k_{5},k_{6},p_{1},\cdots,p_{5}}|w^{N_{(3)}}_{k_{(3)}} |^{2}N^{4(s-1)}_{(1)}(N^{4}_{(3)}N^{2}_{(1)}+N^{3}_{(1)})\] \[\lesssim \|w^{N_{(3)}}\|_{L^{2}}^{2}N^{4(s-1)}_{(1)}(N^{4}_{(3)}N^{2}_{(1) }+N^{3}_{(1)})\prod_{j=4}^{10}N^{3}_{(j)},\]
thus
\[\|\mathcal{R}^{(B1)}_{1,3}(M_{1},\cdots,P_{5})\|_{L^{2}(d\mu_{s}| \mathcal{B}_{\ll M_{2}})}\lesssim N^{-2}_{(1)}(N^{\frac{3}{2}}_{(1)}+N^{2}_{(3)}N_{(1)})\Big{(} \prod_{j=3}^{10}\|w^{N_{(j)}}\|_{L^{2}}\Big{)}\Big{(}\prod_{j=4}^{10}N^{\frac{ 3}{2}}_{(j)}\Big{)}\] \[\lesssim N^{-\frac{1}{2}}_{(1)}N^{-\sigma}_{(3)}\prod_{j=4}^{10}N^{- \sigma+\frac{3}{2}}_{(j)}\cdot\|w\|^{8}_{H^{\sigma}}+N^{-1}_{(1)}N^{-\sigma+ 2}_{(3)}\prod_{j=4}^{10}N^{-\sigma+\frac{3}{2}}_{(j)}\cdot\|w\|^{8}_{H^{\sigma}}\] \[\lesssim N^{-\frac{1}{2}}_{(1)}\|w\|^{8}_{H^{\sigma}},\]
which is conclusive, thanks to the fact that \(s\geq 10\) and that \(\sigma\) is close to \(s-\frac{3}{2}\).
\(\bullet\)**Subcase: Contribution \(\mathcal{R}^{(B2)}_{1,3}\):** Next we estimate \(\mathcal{R}^{(B2)}_{1,3}\) for which \(k_{(1)},k_{(2)}\in\{p_{1},p_{2},p_{3},p_{4},p_{5}\}\). By symmetry of indices, we assume that \(k_{(1)}=p_{1},k_{(2)}=p_{2}\), then
\[\sum_{j=3}^{5}|p_{j}|+\sum_{j=2}^{6}|k_{j}|\leq|p_{1}|^{\theta}+|p_{2}|^{\theta}\]
on \(\Lambda_{B2}\). Similarly, we decompose \(\mathcal{R}^{(B2)}_{1,3}\) by dyadic sum
\[\sum_{M_{1},\cdots,M_{6},P_{1},\cdots,P_{5}}\mathcal{R}^{(B2)}_{1,3}(M_{1}, \cdots,P_{5}),\]
and this time, \(P_{1}\sim P_{2}\sim N_{(1)}\) and \(\max\{M_{1},M_{2},\cdots,M_{6}\}\lesssim N_{(3)}\). In particular, the energy weight \(\psi_{2s}(\vec{k})\) satisfies \(|\psi_{2s}(\vec{k})|\lesssim N^{2s}_{(3)}\) which is much smaller than in the previous case.
The pairing contribution \(p_{1}=p_{2}\) in \(\mathcal{R}^{(B2)}_{1,3}(M_{1},\cdots,P_{5})\) can be controlled in the same way, by the same bound as (8.1). We omit the details.
For the non-pairing contribution in \(\mathcal{R}^{(B2)}_{1,3}\), again we apply the Wiener chaos estimate. Denote \(\mathcal{B}_{\ll P_{1}}\) the \(\sigma\)-algebra generated by \((g_{k})_{|k|\leq P_{1}/100}\). Without loss of generality, we assume that \(P_{1}\sim N_{(1)}\) is large enough such that \(N_{(3)}\lesssim N^{\theta}_{(1)}\ll\frac{P_{1}}{100}\). Consequently, with respect to \(\mu_{s}\)
\(w^{M_{2}},\cdots,w^{M_{6}},w^{P_{3}},w^{P_{4}},w^{P_{5}}\) are independent of \(w^{P_{1}},w^{P_{2}}\). Recall that
\[\mathcal{R}^{(B2)}_{1,3}(M_{1},\cdots,P_{5})\] \[=\sum_{\begin{subarray}{c}p_{1}\neq p_{2}\\ |p_{1}|\sim P_{1},|p_{2}|\sim P_{2}\end{subarray}}w^{P_{1}}_{p_{1}}\overline{w }^{P_{2}}_{p_{2}}\sum_{\begin{subarray}{c}k_{2},\cdots,k_{6},p_{3},p_{4},p_{5} \\ p_{1}-\cdots+p_{5}-k_{2}+\cdots-k_{6}=0\end{subarray}}\frac{\psi_{2s}(\vec{k})} {\Omega(\vec{k})}\Big{(}1-\chi\Big{(}\frac{\Omega(\vec{k})}{\lambda(\vec{k})^{ \delta_{0}}}\Big{)}\Big{)}w^{P_{3}}_{p_{3}}\overline{w}^{P_{4}}_{p_{4}}w^{P_{5} }_{p_{5}}\overline{w}^{M_{2}}_{k_{2}}\cdots\overline{w}^{M_{6}}_{k_{6}}.\]
By conditional Wiener chaos estimate, we have
\[\|\mathcal{R}^{(B2)}_{1,3}(M_{1},\cdots,P_{5})\mathbf{1}_{B^{H^{ \sigma}}_{R}}(w)\|_{L^{p}(d\mu_{s})}\leq \|\mathcal{R}^{(B2)}_{1,3}(M_{1},\cdots,P_{5})\mathbf{1}_{B^{H^{ \sigma}}_{R}}(\mathbf{P}_{\ll P_{1}}w)\|_{L^{p}(d\mu_{s})}\] \[= \big{\|}\|\mathcal{R}^{(B2)}_{1,3}(M_{1},\cdots,P_{5})\mathbf{1}_ {B^{H^{\sigma}}_{R}}(\mathbf{P}_{\ll P_{1}}w)\|_{L^{p}(d\mu_{s}|\mathcal{B}_{ \ll P_{1}})}\big{\|}_{L^{p}(d\mu_{s})}\] \[\lesssim \,p\big{\|}\|\mathcal{R}^{(B2)}_{1,3}(M_{1},\cdots,P_{5})\|_{L^{2 }(d\mu_{s}|\mathcal{B}_{\ll P_{1}})}\cdot\mathbf{1}_{B^{H^{\sigma}}_{R}}( \mathbf{P}_{\ll P_{1}}w)\big{\|}_{L^{p}(d\mu_{s})}.\]
As in the estimate of \(\mathcal{R}^{(B1)}_{1,3}\), here it suffices to show that
\[\|\mathcal{R}^{(B2)}_{1,3}\|_{L^{2}(d\mu_{s}|\mathcal{B}_{\ll P_{1}})}\lesssim N ^{-\frac{1}{2}}_{(1)}\|w\|^{8}_{H^{\sigma}}. \tag{8.3}\]
Thanks to the non-pairing condition \(p_{1}\neq p_{2}\), we deduce that4
Footnote 4: In the summation below, we implicitly assume that \(|p_{1}|\sim P_{1},|p_{2}|\sim P_{2}\).
\[\|\mathcal{R}^{(B2)}_{1,3}(M_{1},\cdots,P_{5})\|_{L^{2}(d\mu_{s}| \mathcal{B}_{\ll P_{1}})}\] \[\lesssim (P_{1}P_{2})^{-s}\Big{(}\sum_{p_{1}\neq p_{2}}\Big{(}\sum_{ \begin{subarray}{c}k_{2},\cdots,k_{6},p_{3},p_{4},p_{5}\\ p_{1}-\cdots+p_{5}-k_{2}+\cdots-k_{6}=0\end{subarray}}N^{2s}_{(3)}|w^{P_{3}}_ {p_{3}}w^{P_{4}}_{p_{4}}w^{P_{5}}_{p_{5}}w^{M_{2}}_{k_{2}}\cdots w^{M_{6}}_{k_{ 6}}|\Big{)}^{2}\Big{)}^{\frac{1}{2}}\] \[\lesssim N^{-2s}_{(1)}N^{2s}_{(3)}\Big{(}\sum_{p_{1}-p_{2}+\cdots+p_{5}-k_{ 2}+\cdots-k_{6}=0}|w^{P_{3}}_{p_{3}}w^{P_{4}}_{p_{4}}w^{P_{5}}_{p_{5}}w^{M_{2} }_{k_{2}}\cdots w^{M_{6}}_{k_{6}}|^{2}\Big{)}^{\frac{1}{2}}\] \[\times\Big{(}\sup_{p_{1}\neq p_{2}}\sum_{ \begin{subarray}{c}k_{2},\cdots,k_{6},p_{3},p_{4},p_{5}\\ p_{1}-\cdots+p_{5}-k_{2}+\cdots-k_{6}=0\end{subarray}}1\Big{)}^{\frac{1}{2}}\] \[\lesssim N^{-2s+\frac{3}{2}}_{(1)}N^{2s}_{(3)}\|w\|^{8}_{H^{\sigma}} \lesssim N^{-2s+\frac{3}{2}+2s\theta}_{(1)}\|w\|^{8}_{H^{\sigma}}\lesssim N^{ -\frac{1}{2}}_{(1)}\|w\|^{8}_{H^{\sigma}},\]
thanks to the fact that \(s\geq 10\), that \(\theta\) is close to \(\frac{1}{3}\), and the restriction \(N_{(3)}\lesssim N^{\theta}_{(1)}\).
\(\bullet\) Estimate of Type C contribution:
Without loss of generality, we assume that \(k_{(1)}=p_{1}\) and \(k_{(2)}=k_{2}\), since the other cases can be treated in the same way. In particular, \(P_{1}\sim M_{2}\sim N_{(1)}\). We write
\[\mathcal{R}^{(C)}_{1,3}(M_{1},\cdots,P_{5})\] \[=\sum_{\begin{subarray}{c}p_{1}\neq k_{2}\\ |p_{1}|\sim P_{1},|k_{2}|\sim M_{2}\end{subarray}}w_{p_{1}}^{P_{1}}\overline{w }_{k_{2}}^{M_{2}}\sum_{\begin{subarray}{c}p_{2},\cdots,p_{5},k_{3},\cdots,k_{6 }\\ p_{1}-\cdots+p_{5}-k_{2}+\cdots-k_{6}=0\end{subarray}}\frac{\psi_{2s}(\vec{k}) }{\Omega(\vec{k})}\Big{(}1-\chi\Big{(}\frac{\Omega(\vec{k})}{\lambda(\vec{k})^ {\delta_{0}}}\Big{)}\Big{)}\overline{w}_{p_{2}}^{P_{2}}\cdots w_{p_{5}}^{P_{5} }w_{k_{3}}^{M_{3}}\cdots\overline{w}_{k_{6}}^{M_{6}}.\]
Denote \(\mathcal{B}_{\ll P_{1}}\) the \(\sigma\)-algebra generated by \((g_{k})_{|k|\leq P_{1}/100}\). Without loss of generality, we assume that \(P_{1}\sim N_{(1)}\) is large enough such that \(N_{(3)}\lesssim N_{(1)}^{\theta}\ll\frac{P_{1}}{100}\). Consequently, with respect to \(\mu_{s}\), \(w^{M_{3}},\cdots w^{M_{6}},w^{P_{2}},\cdots,w^{P_{5}}\) are independent of \(w^{P_{1}},w^{M_{2}}\).
By conditional Wiener chaos estimate, we have
\[\|\mathcal{R}^{(C)}_{1,3}(M_{1},\cdots,P_{5})\mathbf{1}_{B_{R}^{H^{ \sigma}}}(w)\|_{L^{p}(d\mu_{s})}\leq \|\mathcal{R}^{(C)}_{1,3}(M_{1},\cdots,P_{5})\mathbf{1}_{B_{R}^{H^{ \sigma}}}(\mathbf{P}_{\ll P_{1}}w)\|_{L^{p}(d\mu_{s})}\] \[= \big{\|}\|\mathcal{R}^{(C)}_{1,3}(M_{1},\cdots,P_{5})\mathbf{1}_{ B_{R}^{H^{\sigma}}}(\mathbf{P}_{\ll P_{1}}w)\|_{L^{p}(d\mu_{s}|\mathcal{B}_{\ll P _{1}})}\big{\|}_{L^{p}(d\mu_{s})}\] \[\lesssim p\big{\|}\|\mathcal{R}^{(C)}_{1,3}(M_{1},\cdots,P_{5})\|_{L^{2 }(d\mu_{s}|\mathcal{B}_{\ll P_{1}})}\cdot\mathbf{1}_{B_{R}^{H^{\sigma}}}(\mathbf{ P}_{\ll P_{1}}w)\big{\|}_{L^{p}(d\mu_{s})}.\]
As in the estimate for Type (B) terms, it suffices to show that
\[\|\mathcal{R}^{(C)}_{1,3}(M_{1},\cdots,P_{5})\|_{L^{2}(d\mu_{s}|\mathcal{B}_{ \ll P_{1}})}\lesssim N_{(1)}^{-\frac{1}{2}}\|w\|_{H^{\sigma}}^{8}.\]
Since \(p_{1}\neq k_{2}\) (recall that the contribution where \(p_{1}=k_{2}\) is contained in \(\mathcal{S}_{1,1}\)), we estimate5
Footnote 5: In the summation below, we implicitly assume that the sum is taken in the range \(|p_{1}|\sim P_{1},|k_{2}|\sim M_{2}\).
\[\|\mathcal{R}^{(C)}_{1,3}(M_{1},\cdots,P_{5})\|_{L^{2}(d\mu_{s}| \mathcal{B}_{\ll P_{1}})}\] \[\lesssim(P_{1}M_{2})^{-s}\Big{(}\sum_{p_{1}\neq k_{2}}\Big{|}\sum_{\begin{subarray}{c}p_{2},\cdots,p_{5},k_{3},\cdots,k_{6}\\ p_{1}-\cdots+p_{5}-k_{2}+\cdots-k_{6}=0\end{subarray}}\frac{\psi_{2s}(\vec{k})}{\Omega(\vec{k})}\Big{(}1-\chi\Big{(}\frac{\Omega(\vec{k})}{\lambda(\vec{k})^{\delta_{0}}}\Big{)}\Big{)}\overline{w}_{p_{2}}^{P_{2}}\cdots w_{p_{5}}^{P_{5}}w_{k_{3}}^{M_{3}}\cdots\overline{w}_{k_{6}}^{M_{6}}\Big{|}^{2}\Big{)}^{\frac{1}{2}}\] \[\lesssim N_{(1)}^{-2s}\Big{(}\sum_{\begin{subarray}{c}p_{1}\neq k_{2},k_{3},\cdots,k_{6},p_{2},\cdots,p_{5}\\ p_{1}-\cdots+p_{5}-k_{2}+\cdots-k_{6}=0\end{subarray}}\frac{|\psi_{2s}(\vec{k})|^{2}}{1+|\Omega(\vec{k})|^{2}}|w_{k_{(3)}}^{N_{(3)}}|^{2}\Big{)}^{\frac{1}{2}}\Big{(}\sup_{p_{1}\neq k_{2}}\sum_{\begin{subarray}{c}k_{3},\cdots,k_{6},p_{2},\cdots,p_{5}\\ p_{1}-\cdots+p_{5}-k_{2}+\cdots-k_{6}=0\end{subarray}}\prod_{j=4}^{10}|w_{k_{(j)}}^{N_{(j)}}|^{2}\Big{)}^{\frac{1}{2}}\] \[\lesssim N_{(1)}^{-2s}\Big{(}\prod_{j=4}^{10}\|w^{N_{(j)}}\|_{L^{2}}\Big{)}\Big{(}\sum_{\begin{subarray}{c}p_{1}\neq k_{2},k_{3},\cdots,k_{6},p_{2},\cdots,p_{5}\\ p_{1}-\cdots+p_{5}-k_{2}+\cdots-k_{6}=0\end{subarray}}\frac{|\psi_{2s}(\vec{k})|^{2}}{1+|\Omega(\vec{k})|^{2}}|w_{k_{(3)}}^{N_{(3)}}|^{2}\Big{)}^{\frac{1}{2}}. \tag{8.4}\]
The estimate of the last sum on the right-hand side is very similar to the one for \(\mathcal{R}^{(B1)}_{1,3}\). The only difference here is that we might have the pairing of \(k_{1}=p_{1}-p_{2}+p_{3}-p_{4}+p_{5}\) and \(k_{2}\), although \(p_{1}\neq k_{2}\). Note that in the case of pairing
\[k_{1}=p_{1}-(p_{2}-p_{3}+p_{4}-p_{5})=k_{2}\]
we have \(|\psi_{2s}(\vec{k})|\lesssim N_{(3)}^{2s}\), thus we control the paired contribution crudely by
\[\sum_{p_{1}\neq k_{2}}\sum_{\begin{subarray}{c}p_{2},\cdots,p_{5},k_{3},\cdots,k_{ 6}\\ p_{1}-\cdots+p_{5}=k_{2}\\ k_{3}-k_{4}+k_{5}-k_{6}=0\end{subarray}}N_{(3)}^{4s}|w_{k_{(3)}}^{N_{(3)}}|^{2} \lesssim N_{(1)}^{3}N_{(3)}^{4s}\|w^{N_{(3)}}\|_{L^{2}}^{2}\prod_{j=4}^{10}N_{ (j)}^{3}.\]
For the non-pairing contribution, we can argue exactly as the last part of the estimate of \(\mathcal{R}_{1,3}^{(B1)}\) by using Lemma 4.2:
\[\sum_{k_{3}\cdots,k_{6},p_{2}\cdots,p_{5}}|w_{k_{(3)}}^{N_{(3)}}|^{2}\sum_{ \begin{subarray}{c}k_{1}\neq k_{2}\\ k_{1}=p_{1}-p_{2}+p_{3}-p_{4}+p_{5}\\ k_{1}-k_{2}+k_{3}-k_{4}+k_{5}-k_{6}=0\end{subarray}}\frac{M_{(1)}^{4(s-1)}(M_{ (3)}^{4}+|\Omega(\vec{k})|^{2})}{1+|\Omega(\vec{k})|^{2}}. \tag{8.5}\]
For fixed \(p_{2},\cdots,p_{5},k_{3},\cdots,k_{6}\),
\[\Omega(\vec{k})=|p_{1}-\mathbf{p}|^{2}-|k_{2}|^{2}+\mathbf{c}\]
with \(\mathbf{p}=p_{2}-p_{3}+p_{4}-p_{5}\) and \(\mathbf{c}=|k_{3}|^{2}-|k_{4}|^{2}+|k_{5}|^{2}-|k_{6}|^{2}\). When \(\mathbf{p}\neq p_{1}-k_{2}\), by Lemma 4.2, the choices of \(p_{1},k_{2}\) such that \(p_{1}-\mathbf{p}-k_{2}=\mathbf{k}=k_{3}-k_{4}+k_{5}-k_{6}\) are bounded by \(N_{(1)}^{2}\), hence (8.5) is bounded by
\[N_{(1)}^{4(s-1)}(N_{(3)}^{4}N_{(1)}^{2}+N_{(1)}^{3})\|w^{N_{(3)}}\|_{L^{2}}^{2 }\prod_{j=4}^{10}N_{(j)}^{3}.\]
Therefore, the right hand side of (8.4) can be bounded by
\[\big{(}N_{(1)}^{-2s+\frac{3}{2}}N_{(3)}^{2s}+N_{(1)}^{-2}(N_{(1)}^{\frac{3}{2 }}+N_{(3)}^{2}N_{(1)})\big{)}\prod_{j=3}^{10}\|w^{N_{(j)}}\|_{L^{2}}\cdot \Big{(}\prod_{j=4}^{10}N_{(j)}^{\frac{3}{2}}\Big{)}\lesssim N_{(1)}^{-\frac{1 }{2}}\|w\|_{H^{\sigma}}^{8},\]
thanks to the fact that \(s\geq 10\), that \(\theta<\frac{1}{3}\) is close to \(\frac{1}{3}\), and that \(N_{(3)}\lesssim N_{(1)}^{\theta}\). This completes the proof of Proposition 8.1.
## Appendix A Long time approximations
In this appendix, we prove the approximation results used in Section 3. The proof is a consequence of the global regularity theory of [23].
### Ingredients in the global regularity theory for the energy critical NLS on \(\mathbb{T}^{3}\)
In the sequel, we follow the notation of [23] and [22] for the basic definitions and properties of the function spaces \(U^{p},V^{p},U^{p}_{\Delta},V^{p}_{\Delta}\) related to critical problems. Let us briefly recall some of the relevant function spaces as well as the multilinear estimates from [22] and [23] that we will use. For \(s\in\mathbb{R}\),
\[\|u\|_{\widetilde{X}^{s}(\mathbb{R})}:=\Big{(}\sum_{k\in\mathbb{Z}^{ 3}}\langle k\rangle^{2s}\|\mathrm{e}^{it|k|^{2}}(\mathcal{F}u)(t,k)\|_{U_{t}^{2 }}^{2}\Big{)}^{\frac{1}{2}},\] (A.1) \[\|u\|_{\widetilde{Y}^{s}(\mathbb{R})}:=\Big{(}\sum_{k\in\mathbb{Z} ^{3}}\langle k\rangle^{2s}\|\mathrm{e}^{it|k|^{2}}(\mathcal{F}u)(t,k)\|_{V_{t} ^{2}}^{2}\Big{)}^{\frac{1}{2}}.\] (A.2)
We have the continuous embedding property:
\[\widetilde{X}^{s}(\mathbb{R})\hookrightarrow\widetilde{Y}^{s}(\mathbb{R}) \hookrightarrow L^{\infty}(\mathbb{R};H^{s}(\mathbb{T}^{3})).\]
For intervals \(I\subset\mathbb{R}\), the space \(X^{s}(I)\) is defined via the restriction norms:
\[\|u\|_{X^{s}(I)}:=\sup_{J\subset I,|J|\leq 1}\inf_{v\mathbf{1}_{J}(t)=u \mathbf{1}_{J}(t)}\|v\|_{\widetilde{X}^{s}}.\]
Similarly for the space \(Y^{s}(I)\). Note that by definition, for linear solution \(u(t)=\mathrm{e}^{it\Delta}\phi\),
\[\|u(t)\|_{X^{s}(I)}\leq\|\phi\|_{H^{s}(\mathbb{T}^{3})}.\] (A.3)
The critical Strichartz type norm is defined via the norm
\[\|u\|_{Z(I)}:=\sum_{p\in\{p_{0},p_{1}\}}\sup_{J\subset I,|J|\leq 1}\Big{(} \sum_{N\in 2^{\mathbb{N}}}N^{5-\frac{p}{2}}\|\mathbf{P}_{N}u(t)\|_{L_{t,x}^{p} (J\times\mathbb{T}^{3})}^{p}\Big{)}^{\frac{1}{p}},\]
where \(\mathbf{P}_{N}=\mathbf{P}_{\leq N}-\mathbf{P}_{\leq N/2}\), and \(\mathbf{P}_{\leq N}\) are square Littlewood-Paley projectors defined in Section 2 of [23].
By definition we remark that if \(T\geq 1\) and \(I_{T}=[-T,T]\),
\[\|u\|_{Z(I_{T})}\sim_{T}\sum_{p\in\{p_{0},p_{1}\}}\|u(t)\|_{L_{t}^{p}([0,T]; \widetilde{B}_{p,p}^{\frac{5}{p}-\frac{1}{2}}(\mathbb{T}^{3}))},\]
while if \(T<\frac{1}{2}\),
\[\|u\|_{Z(I_{T})}=\sum_{p\in\{p_{0},p_{1}\}}\|u(t)\|_{L_{t}^{p}([0,T];\widetilde {B}_{p,p}^{\frac{5}{p}-\frac{1}{2}}(\mathbb{T}^{3}))}\]
where \(\widetilde{B}_{p,q}^{s}\) are Besov spaces related to the Littlewood-Paley projectors \(\mathbf{P}_{N}\).
The inhomogeneous term on an interval \(I=(a,b)\) will be controlled by the \(N^{s}(I)\) norm:
\[\|F\|_{N^{s}(I)}:=\Big{\|}\int_{a}^{t}\mathrm{e}^{i(t-t^{\prime})\Delta}F(t^{ \prime})dt^{\prime}\Big{\|}_{X^{s}(I)}.\]
It turns out that ([22], Proposition 2.11)
\[\|F\|_{N^{s}(I)}\leq\sup_{\begin{subarray}{c}G\in Y^{-s}(I)\\ \|G\|_{Y^{-s}(I)}\leq 1\end{subarray}}\Big{|}\int_{I}\int_{\mathbb{T}^{3}}F(t,x) \overline{G}(t,x)dxdt\Big{|}.\]
Recall the key Strichartz estimate:
**Lemma A.1** ([23], Corollary 2.2).: _Let \(p\in(4,\infty)\) and \(\mathbf{P}_{C}\) the frequency projector to some cube \(C\) of size \(N\). For any interval \(I\subset\mathbb{R}\), \(|I|\leq 1\),_
\[\|\mathbf{P}_{C}u\|_{L^{p}_{t,x}(I\times\mathbb{T}^{3})}\lesssim N^{\frac{3}{2}- \frac{5}{p}}\|u\|_{U^{p}_{\Delta}(I;L^{2}_{x})},\]
_where the implicit constant is independent of intervals \(I\)._
As a consequence of Lemma A.1 and the embedding \(X^{0}(I)\hookrightarrow U^{p}_{\Delta}(I;L^{2}_{x})\) (basically since \(U^{2}\hookrightarrow U^{p}\)) for \(p>2\), we have
\[\|u\|_{Z(I)}\lesssim\|u\|_{X^{1}(I)}\] (A.4)
for any interval \(I\), where the implicit constant is uniform. The key multilinear estimate we will use reads:
**Lemma A.2** ([23], Lemma 3.2).: _Let \(\sigma\geq 1\). For \(u_{j}\in X^{1}(I)\), \(j=1,2,3,4,5\), and \(|I|\leq 1\), the estimate_
\[\Big{\|}\prod_{j=1}^{5}u_{j}^{\pm}\Big{\|}_{N^{\sigma}(I)}\lesssim\sum_{ \tau\in\mathfrak{S}_{5}}\|u_{\tau(1)}\|_{X^{\sigma}(I)}\prod_{j\geq 2} \|u_{\tau(j)}\|_{Z(I)}^{\frac{1}{2}}\|u_{\tau(j)}\|_{X^{1}(I)}^{\frac{1}{2 }}\] (A.5)
_holds true, where \(\mathfrak{S}_{5}\) is the permutation group of \(5\) elements and \(u_{j}^{\pm}\in\{u_{j},\overline{u}_{j}\}\), and the implicit constant in the inequality is independent of intervals \(I\) such that \(|I|\leq 1\)._
We remark that Lemma 3.2 of [23] treats the case \(\sigma=1\). For \(\sigma>1\), the proof follows in a similar way from the more precise estimate
\[\int_{I\times\mathbb{T}^{3}}\Big{|}\sum_{N_{0},N_{1}\geq N_{2} \geq N_{3}\geq N_{4}\geq N_{5}}\prod_{j=0}^{5}\mathbf{P}_{N_{j}}u_{j}^{\pm} \Big{|}dxdt\] \[\lesssim\sum_{N_{0},N_{1}\geq N_{2}\geq N_{3}\geq N_{4}\geq N_{5}}\Big{(}\frac{N_{2}}{N_{1}}+\frac{1}{N_{2}}\Big{)}^{\delta}\|\mathbf{P}_{N_{0}}u_{0}\|_{Y^{-\sigma}(I)}\|\mathbf{P}_{N_{1}}u_{1}\|_{X^{\sigma}(I)}\prod_{j=2}^{5}\|\mathbf{P}_{N_{j}}u_{j}\|_{Z(I)}^{\frac{1}{2}}\|\mathbf{P}_{N_{j}}u_{j}\|_{X^{1}(I)}^{\frac{1}{2}}\]
for some \(\delta>0\). Hence we omit the details of the proof.
Finally, we recall the global regularity theory of Ionescu-Pausader. Following Section 6 of [23], given \(R>0\) and \(\tau\geq 0\), consider the non-negative function (possibly \(\infty\))
\[\Sigma(R,\tau):=\sup\big{\{}\|u\|_{Z(I)}^{2}:\ H(u)\leq R,|I|\leq\tau\big{\}},\]
where \(H(u)\) is the energy of \(u\) and the supremum is taken over all strong solutions of (1.1) of energy less than or equal to \(R\) and all intervals \(I\) of length \(|I|\leq\tau\). As an increasing function of \(\tau\), the limit (a priori possibly \(\infty\))
\[\Sigma_{*}(R):=\lim_{\tau\to 0^{+}}\Sigma(R,\tau)\]
exists. Moreover, \(\Sigma(R,\tau)\) is quasi-subadditive in \(\tau\):
\[\Sigma(R,\tau_{1}+\tau_{2})\lesssim\Sigma(R,\tau_{1})+\Sigma(R,\tau_{2})\]
for any \(\tau_{1},\tau_{2}>0\).
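This quasi-subadditivity can be seen, for instance, by splitting the time interval: if \(|I|\leq\tau_{1}+\tau_{2}\), write \(I=I_{1}\cup I_{2}\) with \(|I_{i}|\leq\tau_{i}\); then \(u\mathbf{1}_{J}=u\mathbf{1}_{J\cap I_{1}}+u\mathbf{1}_{J\cap I_{2}}\) for every \(J\subset I\) with \(|J|\leq 1\), so that \(\|u\|_{Z(I)}\leq\|u\|_{Z(I_{1})}+\|u\|_{Z(I_{2})}\) and
\[\|u\|_{Z(I)}^{2}\leq 2\big(\|u\|_{Z(I_{1})}^{2}+\|u\|_{Z(I_{2})}^{2}\big)\leq 2\big(\Sigma(R,\tau_{1})+\Sigma(R,\tau_{2})\big).\]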
The global regularity result of Ionescu-Pausader can be stated as:
**Theorem A.1** ([23]).: _For any \(R>0\), \(\Sigma_{*}(R)<\infty\). Consequently, for any \(\tau>0\), \(\Sigma(R,\tau)<\infty\) and moreover_
\[\Sigma(R,\tau)\leq\Sigma(R,1)\mathrm{e}^{C_{0}(1+\tau)},\]
_where \(C_{0}>0\) is an absolute constant. In particular, for any \(\phi\in H^{1}(\mathbb{T}^{3})\) of energy smaller than or equal to \(R\), the strong solution \(u(t)\) of (1.1) with initial data \(\phi\) is global and_
\[\|u(t)\|_{Z([0,\tau])}\leq\Sigma(R,1)\mathrm{e}^{C_{0}(1+\tau)}.\]
_Finally,_
\[\|u\|_{X^{1}([0,\tau])}\leq C(R,\|u\|_{Z([0,\tau])}).\]
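Let us note that the exponential bound in \(\tau\) follows from the quasi-subadditivity together with \(\Sigma_{*}(R)<\infty\): iterating \(\Sigma(R,\tau_{1}+\tau_{2})\lesssim\Sigma(R,\tau_{1})+\Sigma(R,\tau_{2})\) over \(\lceil\tau\rceil\) unit time intervals gives, for a suitable absolute constant \(C\geq 2\),
\[\Sigma(R,\tau)\leq C^{\lceil\tau\rceil}\Sigma(R,1)\leq\Sigma(R,1)\mathrm{e}^{C_{0}(1+\tau)}\]
with \(C_{0}=\log C\).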
We denote by \(\Phi(t)\) the global flow of (1.1) in \(H^{1}(\mathbb{T}^{3})\). The following corollary shows that we can extend \(\Phi(t)\) to \(H^{\sigma}(\mathbb{T}^{3})\) for \(\sigma\geq 1\):
**Corollary A.2**.: _Let \(\sigma\geq 1\) and \(T\geq 1\). Assume that \(\phi\in H^{\sigma}(\mathbb{T}^{3})\) is such that \(H(\phi)\leq R\). Then the flow \(\Phi(t)\) of (1.1) extends globally to \(H^{\sigma}(\mathbb{T}^{3})\)._
Proof.: Denote \(u(t)=\Phi(t)\phi\). From Theorem A.1, for any \(I\subset[-T,T]\),
\[\|u\|_{Z([-T,T])}\leq\Sigma(R,T),\quad\|u\|_{X^{1}([-T,T])}\leq C(R,T),\]
where \(C(R,T)\) depends only on \(R\) and \(T\).
By the Duhamel formula and (A.5) of Lemma A.2, for any \(I=(a,b)\subset[-T,T]\),
\[\|u\|_{X^{\sigma}(I)}\leq \|e^{i(t-a)\Delta}u(a)\|_{X^{\sigma}(I)}+\||u|^{4}u\|_{N^{\sigma}( I)}\] \[\leq \|u(a)\|_{H^{\sigma}_{x}}+C\|u\|_{X^{\sigma}(I)}\|u\|_{X^{1}(I)} ^{2}\|u\|_{Z(I)}^{2}\] \[\leq \|u(a)\|_{H^{\sigma}_{x}}+C(R,T)\|u\|_{Z(I)}^{2}\|u\|_{X^{\sigma} (I)},\]
where \(C(R,T)\) is a constant depending only on \(R\) and \(T\) that can change from line to line.
Next, we partition \([0,T]=\bigcup_{j=1}^{\kappa}[a_{j-1},a_{j}]\) such that \(\|u\|_{Z([a_{j-1},a_{j}])}<\frac{1}{\sqrt{2C(R,T)}}\); hence for all \(j=1,2,\cdots,\kappa\),
\[\|u\|_{X^{\sigma}([a_{j-1},a_{j}])}\leq 2\|u(a_{j-1})\|_{H^{\sigma}_{x}}.\]
By the embedding property, for all \(j\geq 1\),
\[\|u\|_{X^{\sigma}([a_{j},a_{j+1}])}\leq C\|u\|_{X^{\sigma}([a_{j-1},a_{j}])}.\]
This shows that for all \(t\in[-T,T]\),
\[\|u(t)\|_{H^{\sigma}}\leq C^{\kappa}\|\phi\|_{H^{\sigma}_{x}}.\]
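Spelled out, the last two displays iterate to
\[\|u\|_{X^{\sigma}([a_{j-1},a_{j}])}\leq C\|u\|_{X^{\sigma}([a_{j-2},a_{j-1}])}\leq\cdots\leq C^{j-1}\|u\|_{X^{\sigma}([a_{0},a_{1}])}\leq 2C^{j-1}\|\phi\|_{H^{\sigma}_{x}},\]
and, by the embedding \(X^{\sigma}(I)\hookrightarrow L^{\infty}(I;H^{\sigma}_{x})\), \(\|u(t)\|_{H^{\sigma}}\leq\|u\|_{X^{\sigma}([a_{j-1},a_{j}])}\) for \(t\in[a_{j-1},a_{j}]\); the interval \([-T,0]\) is treated symmetrically.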
This completes the proof.
**Remark A.3**.: When \(\sigma>1\), the above argument does not give a uniform control of the \(H^{\sigma}\) norm for the solution, since the index \(\kappa\) depends on the profile of each individual global \(H^{1}\)-solution \(u(t)\). Later we shall strengthen the \(H^{\sigma}\)-estimate uniformly on any bounded ball of \(H^{\sigma}\) by choosing a uniform partition of \([0,T]\).
### Local convergence and stability
From now on, denote \(\Phi(t)\) the flow of the energy critical NLS. Denote \(\Phi_{N}(t)\) the flow of the truncated NLS:
\[i\partial_{t}u_{N}+\Delta u_{N}=S_{N}(|S_{N}u_{N}|^{4}S_{N}u_{N}),\] (A.6)
where \(S_{N}\) is the smooth Fourier truncation at size \(N\) defined at the beginning of Section 2.
**Proposition A.4** (Local convergence).: _Assume that \(\sigma\geq 1\). Let \(\phi,\widetilde{\phi}\in H^{\sigma}(\mathbb{T}^{3})\), let \(I\subset\mathbb{R}\) be an interval and \(t_{0}\in I\). Suppose \(\|\phi\|_{H^{\sigma}_{x}}\leq A,\|\widetilde{\phi}\|_{H^{\sigma}_{x}}\leq A\). Then for any \(\epsilon>0\), there exists \(\delta=\delta(A,\epsilon)>0\) such that if_
\[\|e^{i(t-t_{0})\Delta}\phi\|_{Z(I)}<\delta,\ \|\phi-\widetilde{\phi}\|_{H^{ \sigma}_{x}}<\delta,\]
_there exist unique solutions \(u=\Phi(t-t_{0})\phi\) and \(u_{N}=\Phi_{N}(t-t_{0})\widetilde{\phi}\) in \(C(\overline{I};H^{\sigma}_{x})\cap X^{\sigma}(I)\) satisfying_
\[\|u_{N}\|_{X^{\sigma}(I)}+\|u\|_{X^{\sigma}(I)}\leq C_{0}A,\ \|u_{N}\|_{Z(I)}+\|u \|_{Z(I)}<\epsilon,\]
_where \(C_{0}>0\) is an absolute constant. Moreover,_
\[\|\Phi_{N}(t)\widetilde{\phi}-\Phi(t)\phi\|_{C(\overline{I};H^{ \sigma}_{x})}\leq C_{0}\|\phi-\widetilde{\phi}\|_{H^{\sigma}_{x}}+C_{0}\delta _{N}(A,\epsilon,\phi),\] (A.7)
_where \(\delta_{N}(A,\epsilon,\phi)\to 0\) as \(N\to\infty\), uniformly in \(\phi\) on a compact set \(K\) of \(H^{\sigma}(\mathbb{T}^{3})\) such that \(\|\phi\|_{H^{\sigma}_{x}}\leq A\)._
**Remark A.5**.: Consequently, taking \(\widetilde{\phi}=\phi\), under the hypothesis of Proposition A.4, we have
\[\|\Phi_{N}(t)\phi-\Phi(t)\phi\|_{X^{\sigma}(I)}\to 0,\]
uniformly on any compact set of \(H^{1}(\mathbb{T}^{3})\). Taking the limit \(N\to\infty\) in (A.7), we also obtain that under the hypotheses of Proposition A.4,
\[\|\Phi(t)\widetilde{\phi}-\Phi(t)\phi\|_{X^{\sigma}(I)}\leq C_{0}\|\phi- \widetilde{\phi}\|_{H^{\sigma}_{x}}.\]
Proof.: First we prove the case \(\sigma=1\). Note that the existence of solutions on \(\overline{I}\) is a direct consequence of Proposition 3.3 of [23], so we concentrate only on the bounds for the solutions \(u_{N}(t)\) and \(u(t)\). We argue in several steps.
Step 1: Uniform bound:
Without loss of generality, we assume that \(t_{0}=0\). By the Strichartz estimate (A.4) and Lemma A.2, we have
\[\|u\|_{Z(I)}\leq \|{\rm e}^{it\Delta}\phi\|_{Z(I)}+C\||u|^{4}u\|_{N^{1}(I)}\] \[\leq \|{\rm e}^{it\Delta}\phi\|_{Z(I)}+C\|u\|^{2}_{Z(I)}\|u\|^{3}_{X^{ 1}(I)},\] (A.8)
and
\[\|u\|_{X^{1}(I)}\leq \|\mathrm{e}^{it\Delta}\phi\|_{X^{1}(I)}+C\||u|^{4}u\|_{N^{1}(I)}\] \[\leq \|\phi\|_{H^{1}_{x}}+C\|u\|_{Z(I)}^{2}\|u\|_{X^{1}(I)}^{3},\] (A.9)
where \(C>0\) is independent of \(I\). Note that the same inequalities hold for \(u_{N}\), as the smooth spectral projector \(S_{N}\) is bounded from \(L^{r}_{x}\) to \(L^{r}_{x}\) for any \(1<r<\infty\). Then the desired control of \(\|u\|_{Z(I)}\) and \(\|u\|_{X^{1}(I)}\), as well as of \(\|u_{N}\|_{Z(I)},\|u_{N}\|_{X^{1}(I)}\), follows from the following elementary lemma:
**Lemma A.6**.: _Let \(A>0,C_{0}>0\) and \(f\in C([0,\tau_{0}];[0,\infty))\) be an increasing continuous function such that \(f(0)=0\) and \(h:[0,\tau_{0}]\to[0,\infty)\) is increasing. Then for any small \(\epsilon>0\), there exists \(\delta=\delta(A,\epsilon)>0\), such that if_
\[f(t)\leq\delta+C_{0}f(t)^{4}h(t),\quad h(t)\leq C_{0}A+C_{0}f(t)^{4}h(t),\quad \forall t\in[0,\tau_{0}],\]
_then \(f(t)\leq\epsilon\) and \(h(t)\leq 2C_{0}A\)._
Proof.: This follows from a standard continuity argument. Let \(\tau\leq\tau_{0}\) be the largest time such that \(f(t)\leq\epsilon\) for all \(t\in[0,\tau]\) (\(\tau\) exists since \(f(0)=0\)). We claim that if \(\epsilon\ll 1\) is such that \(16C_{0}\epsilon^{4}<\frac{1}{2}\) and \(32C_{0}^{2}A\epsilon^{3}<\frac{1}{2}\), then \(\tau=\tau_{0}\). By contradiction, if \(\tau<\tau_{0}\), by continuity of \(f\) there exists \(\tau_{1}\in(\tau,\tau_{0})\) such that \(f(t)<2\epsilon\) for all \(t\in[0,\tau_{1}]\). Then for all \(0\leq t\leq\tau_{1}\), \(h(t)\leq C_{0}A+16C_{0}\epsilon^{4}h(t)<2C_{0}A\), thanks to the smallness of \(\epsilon\). Plugging into the inequality for \(f(t)\), we deduce that for all \(0\leq t\leq\tau_{1}\)
\[f(t)\leq\delta+32C_{0}^{2}A\epsilon^{4}.\]
Choosing \(\delta<\frac{\epsilon}{2}\), so that \(\delta+32C_{0}^{2}A\epsilon^{4}<\epsilon\), we get \(f(t)<\epsilon\) for all \(t\leq\tau_{1}\), contradicting the maximality of \(\tau\).
Step 2: Quantitative convergence:
By the same application of the Strichartz inequality, we get
\[\|u_{N}-u\|_{X^{1}(I)}\leq\|\widetilde{\phi}-\phi\|_{H^{1}_{x}}+\|S_{N}^{\perp}(|S_{N}u_{N}|^{4}S_{N}u_{N})\|_{N^{1}(I)}+\||S_{N}u_{N}|^{4}S_{N}u_{N}-|u|^{4}u\|_{N^{1}(I)}.\]
By splitting \(S_{N}u_{N}=S_{M}u_{N}+S_{M}^{\perp}S_{N}u_{N}\), we observe that for \(M=\frac{N}{16}\), \(S_{N}^{\perp}(|S_{M}u_{N}|^{4}S_{M}u_{N})=0\). Therefore, by the uniform boundedness of \(S_{N},S_{M}\) on \(L^{r}_{x}\) (\(1<r<\infty\)) and on \(X^{1}\),
\[\|S_{N}^{\perp}(|S_{N}u_{N}|^{4}S_{N}u_{N})\|_{N^{1}(I)}\leq C\|S_{M}^{\perp}u_{N}\|_{Z(I)}\|u_{N}\|_{Z(I)}^{3}\|u_{N}\|_{X^{1}(I)}^{3}\] \[+ C\|u_{N}\|_{Z(I)}^{2}\|S_{M}^{\perp}u_{N}\|_{X^{1}(I)}\|u_{N}\|_{ X^{1}(I)}^{2}\] \[\leq C\|S_{M}^{\perp}u_{N}\|_{X^{1}(I)}\|u_{N}\|_{Z(I)}^{2}\|u_{N}\|_{ X^{1}(I)}^{2}\] (A.10) \[\leq C(\|S_{M}^{\perp}u\|_{X^{1}(I)}+\|u_{N}-u\|_{X^{1}(I)})\|u_{N}\|_{ Z(I)}^{2}\|u_{N}\|_{X^{1}(I)}^{2}\] \[\leq CA^{2}\epsilon^{2}(\|S_{M}^{\perp}u\|_{X^{1}(I)}+\|u_{N}-u\|_{X^{1} (I)}).\]
For the other term, algebraic manipulation yields
\[\||S_{N}u_{N}|^{4}S_{N}u_{N}-|u|^{4}u\|_{N^{1}(I)}\leq C\|S_{N}u_{N}-u\|_{X^{1}(I)}(\|S_{N}u_{N}\|_{Z(I)}^{2}+\|u\|_{Z(I)}^{ 2})(\|S_{N}u_{N}\|_{X^{1}(I)}^{2}+\|u\|_{X^{1}(I)}^{2})\] \[\leq CA^{2}\epsilon^{2}(\|S_{N}^{\perp}u\|_{X^{1}(I)}+\|u_{N}-u\|_{X^{1 }(I)})\] \[\leq CA^{2}\epsilon^{2}(\|S_{M}^{\perp}u\|_{X^{1}(I)}+\|u_{N}-u\|_{X^{1 }(I)}).\]
In summary, we have
\[\|u_{N}-u\|_{X^{1}(I)}\leq \|\phi-\widetilde{\phi}\|_{H^{1}_{x}}+CA^{2}\epsilon^{2}(\|S_{N/1 6}^{\perp}u\|_{X^{1}(I)}+\|u_{N}-u\|_{X^{1}(I)}).\] (A.11)
Furthermore, from the Duhamel formula of \(u\) and the similar argument as (A.10), we have the recursive inequality
\[\|S_{N}^{\perp}u\|_{X^{1}(I)}\leq \|S_{N}^{\perp}\phi\|_{H^{1}_{x}}+CA^{2}\epsilon^{2}\|S_{N/16}^{ \perp}u\|_{X^{1}(I)}.\] (A.12)
To conclude, we invoke the following elementary result:
**Lemma A.7**.: _Let \(\{a_{j}\},\{b_{j}\}\) be two positive sequences and let \(A_{0}>0\) be an absolute constant. Assume that \(0<\theta<1\) and that for \(1\leq j\leq m\),_
\[a_{j}\leq A_{0}b_{j}+\theta a_{j-1}.\]
_Then we have_
\[a_{m}\leq A_{0}\sum_{j=0}^{m-1}\theta^{j}b_{m-j}+\theta^{m-1}a_{1}.\]
_In particular, if \(b_{m}\to 0\), then \(a_{m}\to 0\)._
The proof of this elementary lemma is straightforward, hence we omit the details.
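For the record, the induction is just an unrolling of the recursion (the bound as stated includes the harmless extra term \(\theta^{m-1}b_{1}\)):
\[a_{m}\leq A_{0}b_{m}+\theta a_{m-1}\leq A_{0}b_{m}+\theta\big{(}A_{0}b_{m-1}+\theta a_{m-2}\big{)}\leq\cdots\leq A_{0}\sum_{j=0}^{m-2}\theta^{j}b_{m-j}+\theta^{m-1}a_{1}.\]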
Consequently, we have
\[\|S_{N/16}^{\perp}u\|_{X^{1}(I)}\leq A_{0}\sum_{j=1}^{\log_{16}N-1}(CA\epsilon) ^{j}\|S_{N/16^{j}}^{\perp}\phi\|_{H^{1}_{x}}+(CA\epsilon)^{\log_{16}N-1}\| \phi\|_{H^{1}_{x}},\]
where \(C>0\) is an absolute constant and \(\epsilon\ll 1\) is such that \(CA\epsilon<1\). We denote
\[\delta_{N}(A,\epsilon,\phi):=\sum_{j=1}^{\log_{16}N-1}(CA\epsilon)^{j}\|S_{N/1 6^{j}}^{\perp}\phi\|_{H^{1}_{x}}+(CA\epsilon)^{\log_{16}N-1}\|\phi\|_{H^{1}_{ x}}.\] (A.13)
Since \(\|S_{N}^{\perp}\phi\|_{H^{1}_{x}}\to 0\), uniformly on any compact set of \(H^{1}\), we deduce that \(\delta_{N}(A,\epsilon,\phi)\) converges to \(0\), uniformly on a compact set of \(H^{1}(\mathbb{T}^{3})\). Plugging into (A.11), we obtain that
\[\|u_{N}-u\|_{X^{1}(I)}\leq A_{0}\|\phi-\widetilde{\phi}\|_{H^{1}_{x}}+A_{0} \delta_{N}(A,\epsilon,\phi).\] (A.14)
By the embedding \(X^{1}(I)\hookrightarrow L^{\infty}(I;H^{1}_{x})\) and the fact that \(u_{N}(t),u(t)\in C(\overline{I};H^{1}_{x})\), we obtain that
\[\sup_{t\in\overline{I}}\|u_{N}(t)-u(t)\|_{H^{1}_{x}}\leq C_{0}\|\phi- \widetilde{\phi}\|_{H^{1}_{x}}+C_{0}\delta_{N}(A,\epsilon,\phi),\]
for some absolute constant \(C_{0}>0\). This completes the proof of Proposition A.4 when \(\sigma=1\).
The general case \(\sigma>1\) follows from a similar analysis; here we only indicate the necessary modifications. Indeed, in Step 1, we replace the \(X^{1},H^{1}\) norms by the \(X^{\sigma},H^{\sigma}\) norms in (A.9), thanks to (A.5). In Step 2, all inequalities remain unchanged when replacing all \(X^{1},H^{1},N^{1}\) norms by \(X^{\sigma},H^{\sigma},N^{\sigma}\) norms (up to changing the numerical constants \(C\) in front of each inequality), thanks to (A.5) and the trivial embedding \(X^{\sigma}\hookrightarrow X^{1}\). This completes the proof of Proposition A.4.
**Definition A.8** (\((A,\delta)\)-partition).: _Let \(A>\delta>0\). Given an interval \([-T,T]\) and \(\phi\in H^{1}(\mathbb{T}^{3})\), we define an \((A,\delta)\)-partition with respect to \(\phi\) to be a collection of finitely many intervals \(([\tau_{j-1},\tau_{j}])_{j=1}^{m}\) such that_
\[-T=\tau_{0}<\tau_{1}<\cdots<\tau_{m}=T,\quad\|\Phi(\tau_{j-1})\phi\|_{H^{1}_{x}}\leq A,\ \|\Phi(t)\phi\|_{Z([\tau_{j-1},\tau_{j}])}<\delta,\ \forall j=1,\cdots,m.\]
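The existence of such partitions follows from the finiteness of the global \(Z\)-norm; the cut points can be pictured as the output of a greedy procedure. The following sketch is purely schematic: `z_norm` is a stand-in for \((a,b)\mapsto\|\Phi(t)\phi\|_{Z([a,b])}\), assumed continuous, increasing in \(b\), and vanishing as \(b\to a\) (the \(H^{1}\)-bound in the definition holds automatically by energy conservation):

```python
import math

def greedy_partition(z_norm, T, delta):
    """Schematic greedy construction of the cut points in Definition A.8.

    z_norm(a, b) stands in for ||Phi(t)phi||_{Z([a, b])}. Each interval is
    cut once its Z-norm reaches delta / 2; the cut is located by bisection.
    """
    taus = [-T]
    while taus[-1] < T:
        a = taus[-1]
        if z_norm(a, T) < delta:        # the remainder fits in one last piece
            taus.append(T)
            break
        lo, hi = a, T
        for _ in range(50):             # bisect for z_norm(a, b) ~ delta / 2
            mid = 0.5 * (lo + hi)
            lo, hi = (mid, hi) if z_norm(a, mid) < delta / 2 else (lo, mid)
        taus.append(lo)
    return taus

# Toy stand-in: Z-"mass" accumulating like the integral of a bump in time.
rate = lambda t: 1.0 + 5.0 * math.exp(-t * t)
z = lambda a, b: sum(rate(a + (k + 0.5) * (b - a) / 200) * (b - a) / 200
                     for k in range(200))
print(greedy_partition(z, T=2.0, delta=1.0))
```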
We collect some basic properties. The following property is immediate:
**Proposition A.9** (Refinement of \((A,\delta)\)-partition).: _Any refinement of an \((A,\delta)\)-partition with respect to \(\phi\) is an \((A,\delta)\)-partition._
**Proposition A.10**.: _Assume that \(([\tau_{j-1},\tau_{j}])_{j=1}^{m}\) is an \((A,\delta)\)-partition with respect to \(\phi\). Then for sufficiently small \(\delta=\delta(A)>0\),_
\[\|\mathrm{e}^{i(t-\tau_{j-1})\Delta}\Phi(\tau_{j-1})\phi\|_{Z([\tau_{j-1},\tau _{j}])}<2\delta,\ \forall j=1,\cdots,m.\]
Proof.: Denote \(u(t)=\Phi(t)\phi\), then
\[\mathrm{e}^{i(t-\tau_{j-1})\Delta}u(\tau_{j-1})=u(t)-\frac{1}{i}\int_{\tau_{j -1}}^{t}\mathrm{e}^{i(t-t^{\prime})\Delta}(|u(t^{\prime})|^{4}u(t^{\prime}))dt ^{\prime}.\]
The desired consequence follows from the Strichartz inequality as in the proof of Proposition A.4, hence we omit the details.
**Proposition A.11** (\(H^{1}\)-Stability of an \((A,\delta)\)-partition).: _Assume that \(([\tau_{j-1},\tau_{j}])_{j=1}^{m}\) is an \((A,\delta)\)-partition with respect to \(\phi\). There exists \(\delta_{1}>0\) such that for any \(\delta<\delta_{1}\), there exists \(\epsilon_{0}=\epsilon_{0}(m,A,\delta)>0\) such that for any \(\phi_{1}\) in the \(\epsilon_{0}\)-neighborhood of \(\phi\) (with respect to the \(H^{1}\)-topology), \(([\tau_{j-1},\tau_{j}])_{j=1}^{m}\) is a \((2A,2\delta)\)-partition for \(\phi_{1}\)._
Proof.: Denote \(u=\Phi(t)\phi\) and \(u_{1}=\Phi(t)\phi_{1}\). Fix an interval \(I_{j}=[\tau_{j-1},\tau_{j}]\). By the Strichartz inequality,
\[\|\mathrm{e}^{i(t-\tau_{0})\Delta}(\phi-\phi_{1})\|_{Z(I_{1})}\leq C_{1}\| \mathrm{e}^{i(t-\tau_{0})\Delta}(\phi-\phi_{1})\|_{X^{1}(I_{1})}\leq C_{1}\| \phi-\phi_{1}\|_{H^{1}_{x}}\leq C_{1}\epsilon_{0}\]
for some absolute constant \(C_{1}>1\). Pick \(\delta<\delta_{1}\) as in Proposition A.4, and for \(\epsilon_{0}\ll\delta\) (so that \(C_{1}\epsilon_{0}<\delta\)) and for any \(\phi_{1}\) in an \(\epsilon_{0}\)-neighborhood of \(\phi\) (with respect to the \(H^{1}\)-topology), the solution \(u_{1}\) (which exists thanks to Proposition A.4) with initial data \(\phi_{1}\) satisfies
\[\|u_{1}\|_{X^{1}(I_{1})}\leq\frac{3}{2}A,\ \|u_{1}\|_{Z(I_{1})}<\frac{3}{2}\delta.\]
Applying Lemma A.2, we have
\[\|u-u_{1}\|_{X^{1}(I_{1})}\leq \|\mathrm{e}^{it\Delta}(\phi-\phi_{1})\|_{X^{1}(I_{1})}+\big{\|}|u|^ {4}u-|u_{1}|^{4}u_{1}\big{\|}_{N^{1}(I_{1})}\] \[\leq \|\phi-\phi_{1}\|_{H^{1}_{x}}+C\|u-u_{1}\|_{X^{1}(I_{1})}(\|u\|_{Z (I_{1})}^{2}+\|u_{1}\|_{Z(I_{1})}^{2})(\|u\|_{X^{1}(I_{1})}^{2}+\|u_{1}\|_{X^{1 }(I_{1})}^{2})\] \[\leq \|\phi-\phi_{1}\|_{H^{1}_{x}}+CA^{2}\delta^{2}\|u-u_{1}\|_{X^{1}( I_{1})}.\]
Taking \(\delta>0\) small enough such that \(1-CA^{2}\delta^{2}>\frac{1}{2}\), we deduce that
\[\|u-u_{1}\|_{Z(I_{1})}\leq C_{1}\|u-u_{1}\|_{X^{1}(I_{1})}\leq 2C_{1}\|\phi-\phi_{1}\|_{H^{1}_{x}}<2C_{1}\epsilon_{0}.\]
In particular, by the embedding property \(X^{1}(I)\hookrightarrow L^{\infty}(I;H^{1}_{x})\), for almost every \(\tau_{1}^{*}\in(\tau_{0},\tau_{1})\),
\[\|u(\tau_{1}^{*})-u_{1}(\tau_{1}^{*})\|_{H^{1}_{x}}\leq C_{2}\|u-u_{1}\|_{X^{1}(I_{1})}\leq 2C_{2}\|\phi-\phi_{1}\|_{H^{1}_{x}}\leq 2C_{2}\epsilon_{0},\]
where \(C_{2}>0\) is another absolute constant. By the continuity of the flows \(t\mapsto u(t),u_{1}(t)\), we have
\[\|u(\tau_{1})-u_{1}(\tau_{1})\|_{H^{1}_{x}}\leq 2C_{2}\epsilon_{0},\quad\|u_{1} \|_{Z(I_{1})}\leq\|u\|_{Z(I_{1})}+\|u-u_{1}\|_{Z(I_{1})}<\delta+2C_{1}\epsilon _{0},\]
hence
\[\|\mathrm{e}^{i(t-\tau_{1})\Delta}(u(\tau_{1})-u_{1}(\tau_{1}))\|_{Z(I_{2})} \leq C_{1}\|u(\tau_{1})-u_{1}(\tau_{1})\|_{H^{1}_{x}}\leq 2C_{1}C_{2} \epsilon_{0}.\]
By choosing \(\epsilon_{0}\) small enough such that \(\epsilon_{0}\sum_{j=1}^{m}(2C_{1}C_{2})^{j}<\delta\), we can repeat the argument above until \(I_{m}\). In particular, \(\|u_{1}\|_{Z(I_{j})}<2\delta\) and \(\|u_{1}\|_{X^{1}(I_{j})}<2A\) for all \(j=1,2,\cdots,m\). This implies that \(([\tau_{j-1},\tau_{j}])_{j=1}^{m}\) is a \((2A,2\delta)\)-partition with respect to \(\phi_{1}\). The proof of Proposition A.11 is complete.
Now we are ready to prove:
**Proposition A.12** (Long-time approximation).: _Let \(T\geq 1\) and let \(K\subset H^{1}(\mathbb{T}^{3})\) be a compact set. Then_
\[\lim_{N\to\infty}\|\Phi_{N}(t)\phi-\Phi(t)\phi\|_{H^{1}(\mathbb{T}^{3})}=0,\quad\forall t\in[-T,T],\]
_uniformly for \(\phi\in K\). Moreover, for any \(|t|\leq T\) and \(N\in\mathbb{N}\), the sets \(\Phi(t)(K),\Phi_{N}(t)(K)\) are compact in \(H^{1}(\mathbb{T}^{3})\)._
Proof.: Fix \(A>0,T>0\) and \(K\subset B^{H^{1}}_{A/2}(0):=\{\phi:\ \|\phi\|_{H^{1}}\leq A/2\}\) a compact set of \(H^{1}(\mathbb{T}^{3})\). Note that for any \(\phi\in B^{H^{1}}_{A/2}\), \(H[\phi]\leq C_{0}A^{2}\). We divide the proof into several steps. Set \(A_{1}=4\sqrt{C_{0}}A\).
Step 1: Existence of a uniform \((A_{1},\delta)\)-partition:
Thanks to Theorem A.1, for any \(\phi\in K\), \(\|\Phi(t)\phi\|_{Z([-T,T])}\leq\Lambda(C_{0}A^{2},T)<\infty\). In particular, there exists an \((\frac{A_{1}}{2},\frac{\delta}{2})\)-partition \(([\tau_{j-1},\tau_{j}])_{j=1}^{m}\) with respect to \(\phi\). By stability (Proposition A.11), there exists \(\epsilon_{0}=\epsilon_{0}(m,A_{1},\delta)>0\) such that \(([\tau_{j-1},\tau_{j}])_{j=1}^{m}\) is an \((A_{1},\delta)\)-partition with respect to all \(\phi_{1}\in B^{H^{1}}_{\epsilon_{0}}(\phi)\). Since \(K\) is compact, there exist finitely many \(\phi_{1},\cdots,\phi_{n}\in K\), \(\epsilon_{i}>0,i=1,\cdots,n\), and \((A_{1},\delta)\)-partitions \(([\tau_{j-1}^{(i)},\tau_{j}^{(i)}])_{j=1}^{m_{i}},i=1,\cdots,n\), such that
1. \(K\subset\bigcup_{i=1}^{n}B_{\epsilon_{i}}^{H^{1}}(\phi_{i})\);
2. \(([\tau_{j-1}^{(i)},\tau_{j}^{(i)}])_{j=1}^{m_{i}}\) is an \((A_{1},\delta)\)-partition for all \(\phi\in B_{\epsilon_{i}}^{H^{1}}(\phi_{i})\), \(i=1,\cdots,n\).
Consider a common refinement \(([\tau_{j-1},\tau_{j}])_{j=1}^{m}\) of the partitions \(([\tau_{j-1}^{(i)},\tau_{j}^{(i)}])_{j=1}^{m_{i}}\). By Proposition A.9, \(([\tau_{j-1},\tau_{j}])_{j=1}^{m}\) is a uniform \((A_{1},\delta)\)-partition with respect to all \(\phi\in K\).
Step 2: Long-time convergence:
Now we are able to iterate Proposition A.4 (with the small parameter \(2\delta\) instead of \(\delta\) in the statement) from \(I_{1}=[\tau_{0},\tau_{1}]\) to \(I_{m}=[\tau_{m-1},\tau_{m}]\). Thanks to Proposition A.10 and the energy conservation law, we have
\[\|\mathrm{e}^{i(t-\tau_{j-1})\Delta}\Phi(\tau_{j-1})\phi\|_{Z(I_{j})}<2\delta, \;\|\Phi(\tau_{j-1})\phi\|_{H^{1}_{x}}\leq A_{1}.\]
In order to apply Proposition A.4 on each interval \(I_{j}\) with initial data \(\Phi_{N}(\tau_{j-1})\phi\) and \(\Phi(\tau_{j-1})\phi\), we have to ensure that
\[\|\Phi_{N}(\tau_{j-1})\phi-\Phi(\tau_{j-1})\phi\|_{H^{1}_{x}}<2\delta.\] (A.15)
First, since \(\delta_{N}(A_{1},2\delta,\phi)\to 0\) uniformly on the compact set \(K\), we can choose \(N_{0}\) large enough, such that for all \(N\geq N_{0},\phi\in K\),
\[(C_{0}+C_{0}^{2}+\cdots+C_{0}^{m})\delta_{N}(A_{1},2\delta,\phi)<\delta.\]
Now we argue by induction that
\[\|\Phi_{N}(t)\phi-\Phi(t)\phi\|_{C(\overline{I_{j}};H^{1}_{x})}\leq(C_{0}+ \cdots+C_{0}^{j})\delta_{N}(A_{1},2\delta,\phi).\] (A.16)
When \(j=1\), \(\Phi_{N}(\tau_{0})\phi=\Phi(\tau_{0})\phi\), and from the last assertion of Proposition A.4, we have
\[\|\Phi_{N}(t)\phi-\Phi(t)\phi\|_{C(\overline{I_{1}};H^{1}_{x})}\leq C_{0} \delta_{N}(A_{1},2\delta,\phi).\]
Assume that (A.16) holds for some \(j\geq 1\); in particular, (A.15) holds thanks to our choice. Then we are able to apply Proposition A.4 on the time interval \(I_{j+1}\) to obtain that
\[\|\Phi_{N}(t-\tau_{j})\Phi_{N}(\tau_{j})\phi-\Phi(t-\tau_{j})\Phi(\tau_{j})\phi\|_{C(\overline{I_{j+1}};H^{1}_{x})}\] \[\leq C_{0}\|\Phi_{N}(\tau_{j})\phi-\Phi(\tau_{j})\phi\|_{H^{1}_{x}}+C_{0}\delta_{N}(A_{1},2\delta,\phi)\] \[\leq (C_{0}+C_{0}^{2}+\cdots+C_{0}^{j+1})\delta_{N}(A_{1},2\delta,\phi).\]
Hence (A.16) holds for all \(j=1,2,\cdots,m\). In particular, we have
\[\|\Phi_{N}(t)\phi-\Phi(t)\phi\|_{C([-T,T];H^{1}_{x})}\leq(C_{0}+\cdots+C_{0}^{ m})\delta_{N}(A_{1},2\delta,\phi)\]
which converges to \(0\), as \(N\to\infty\), uniformly in \(\phi\in K\).
This completes the proof of Proposition A.12.
Now we are ready to complete the proofs of Proposition 3.1 and Proposition 3.2:
Proof of Proposition 3.1.: If \(\sigma=1\), the \(H^{1}\)-uniform bound for \(\Phi_{N}(t)\phi\) and \(\Phi(t)\phi\) follows from the defocusing feature of (1.1) and the conservation of energy. Now we assume that \(\sigma>1\). By the compact embedding \(H^{\sigma}(\mathbb{T}^{3})\hookrightarrow H^{1}(\mathbb{T}^{3})\), the ball \(B_{R}^{H^{\sigma}}\) is compact with respect to the \(H^{1}\)-topology. By the same argument as Step 1 in the proof of Proposition A.12, there exists a uniform \((A,\delta)\)-partition \(([\tau_{j-1},\tau_{j}])_{j=1}^{m}\) of \([-T,T]\) (with \(A=R\) here and \(\delta<\frac{1}{\sqrt{2C(R,T)}}\) as in the proof of Corollary A.2), where \(m\) depends only on \(R\), \(T\) and \(\sigma\). Repeating the analysis in the proof of Corollary A.2, we obtain that for all \(|t|\leq T\) and \(N\in\mathbb{N}\),
\[\|\Phi(t)\phi\|_{H^{\sigma}_{x}}+\|\Phi_{N}(t)\phi\|_{H^{\sigma}_{x}}\leq C^{m }\|\phi\|_{H^{\sigma}_{x}}.\]
This completes the proof of Proposition 3.1.
Proof of Proposition 3.2.: We assume that \(\sigma>1\); otherwise, the proof is completed as in Proposition A.12. Let \(K\) be a compact set of \(H^{\sigma}(\mathbb{T}^{3})\). In particular, \(K\) is bounded in \(H^{\sigma}(\mathbb{T}^{3})\) and compact with respect to the \(H^{1}(\mathbb{T}^{3})\)-topology. To prove the uniform convergence on \(K\), we follow the same scheme of analysis as in the proof of Proposition A.12. By Proposition 3.1, there exists a constant \(D(K,T)\), depending only on \(T>0\) and the compact set \(K\) in \(H^{\sigma}\), such that
\[\sup_{t\in[-T,T]}\|\Phi(t)\phi\|_{H^{\sigma}}+\sup_{t\in[-T,T]}\|\Phi_{N}(t) \phi\|_{H^{\sigma}}\leq D(K,T)\] (A.17)
for all \(N\in\mathbb{N}\). At this stage, we are able to repeat the argument in the proof of Proposition A.12 line by line, replacing the norms \(H^{1},X^{1},N^{1}\) everywhere by \(H^{\sigma},X^{\sigma},N^{\sigma}\) and the constant \(A\) by \(D(K,T)\). We omit the details and conclude.
|
2302.00307 | Topological invariant of multilayer Haldane models with irregular
stackings | We study multilayer Haldane models with irregular type of stacking,
considering the nearest interlayer hopping. We prove that the value of the
topological invariant is equal to the number of layers times the value of the
topological invariant of monolayer Haldane model, regardless of stacking type,
and interlayer hoppings do not induce gap closing and phase transitions. | Xi Wu | 2023-02-01T08:21:52Z | http://arxiv.org/abs/2302.00307v1 | # Topological invariant of multilayer Haldane models with irregular stackings
###### Abstract
We study multilayer Haldane models with irregular type of stacking, considering the nearest interlayer hopping. We prove that the value of the topological invariant is equal to the number of layers times the value of the topological invariant of monolayer Haldane model, regardless of stacking type, and interlayer hoppings do not induce gap closing and phase transitions.
###### Contents
* I Introduction
* II Multilayer graphene and Multilayer Haldane model
* III The one-to-one correspondence and the proof
* IV Examples of models that do have phase transitions
* V Discussion
* Acknowledgments
## I Introduction
Multilayer graphene has been studied extensively in the past two decades, and its band structure has been well analyzed for different types of stacking. In nature, stable graphite occurs in the Bernal type, which corresponds to ABAB... stacking, the rhombohedral type, which corresponds to ABCABC... stacking, and the turbostratic type, which corresponds to irregular stacking mixing both [1; 2; 3; 4]. It is well known that the type of stacking affects the band structure and gives rise to different phenomena in, for instance, quantum transport [3; 5; 6] and optical absorption [7; 4].
On the other hand, the Haldane model is constructed on a hexagonal lattice, the same lattice structure as graphene. The Haldane model [8] may be the first example of a Chern insulator, showing that the topological structure of an electronic Hamiltonian can define new phases of matter. The anomalous Hall conductivity is unchanged by any perturbation as long as the band gap remains open [9]. Haldane models can thus be stacked, in a way similar to graphene, to make multilayer Haldane models. Questions naturally arise: Do multilayer Haldane models resemble multilayer graphene in any way? How do the stacking type and the interlayer hopping parameters affect the properties of multilayer Haldane models, such as the anomalous Hall conductivity?
For ABC stacking, it was shown analytically [10] that, when only the nearest-neighbor hopping is taken into account, the Hall conductivity is proportional to the number of layers times the Hall conductivity of the monolayer Haldane model, and the interlayer hopping parameter does not induce band closing. How about other stacking types, such as Bernal or even turbostratic?
Moreover, constructing devices with large Chern numbers has been an interesting topic in the past decade. One approach shows that taking into account distant hoppings in a monolayer Haldane model can give rise to larger Chern numbers [11], with beautiful analytical results. Another way is to consider multilayer models, but these are usually studied with numerical methods [12; 13]. This makes one wonder whether analytical methods can be practical for determining the Chern number of multilayer models.
In this paper, generalizing the method in [10], we provide an analytical approach to study the topological number of multilayer Haldane models. Restricting to nearest-neighbor interlayer hoppings, we find a one-to-one correspondence between the spectra and eigen-wavefunctions of multilayer graphene and those of the multilayer Haldane model. Using this correspondence, we prove that the property established for ABC stacking is valid for all the stacking types mentioned above, and that the values of the individual interlayer hopping parameters do not matter either.
The paper is organized as follows: in Sec. (II) we review the construction of multilayer graphene and propose multilayer Haldane models in a similar way; in Sec. (III) we show the correspondence and give the proof; in Sec. (IV) we give examples which violate some of the conditions of the proof, leading to gap closing and thus phase transitions; in Sec. (V) we discuss the gap closing in the limit when the interlayer hopping parameters go to infinity, the applications of our result, and possible future directions.
## II Multilayer graphene and multilayer Haldane model
In Sec. (II.1) we review the stacking types of multilayer graphene, for the benefit of defining our multilayer Haldane model; in Sec. (II.2) we define multilayer Haldane models with irregular stacking and show an important anti-commutation relation that underlies the proof.
### Review of multilayer graphene
Here we mainly follow the discussion in [3; 5]. Graphene is a two-dimensional set of carbon atoms arranged into a honeycomb structure. The structure can be decomposed into two sublattices, called \(\alpha\) and \(\beta\). Monolayer graphene has a linear energy-momentum (dispersion) relation near the Fermi level: the conduction band touches the valence band at the two Fermi points K and K', with effective Hamiltonian
\[H_{1}^{G}=\left[\begin{array}{cc}0&v(p_{1}-ip_{2})\\ v(p_{1}+ip_{2})&0\end{array}\right], \tag{1}\]
where \(v\) is the Fermi velocity and \(p_{1},p_{2}\) are the momenta.
Considering the multilayer case, because of the \(C_{3}\) symmetry of the honeycomb lattice, there are three distinct positions for the subsequent layers, labelled A-, B-, and C-type. If one takes into account only the coupling between the nearest layers, the effective Hamiltonian of multilayer graphene can be written as
\[\mathcal{H}_{n}^{G}=\left[\begin{array}{cccc}H_{1}^{G}&\mathbf{t}_{12}^{T}& &\\ \mathbf{t}_{12}&H_{1}^{G}&\mathbf{t}_{23}^{T}&\\ &\mathbf{t}_{23}&H_{1}^{G}&\mathbf{t}_{34}^{T}&\\ &&...&\\ &&\mathbf{t}_{n-1,n}&H_{1}^{G}\end{array}\right]_{2n\times 2n}\,. \tag{2}\]
\(\mathbf{t}_{i-1,i}\) is the coupling matrix between layer \(i-1\) and layer \(i\), defining the stacking types.
The stacking patterns are AA..., ABAB..., and ABCABC... and irregular patterns with mixing between ABAB... and ABCABC... The simplest case is AA stacking, defined by
\[\mathbf{t}_{i-1,i}=t\left[\begin{array}{cc}1&0\\ 0&1\end{array}\right]. \tag{3}\]
The \(\alpha/\beta\) sublattices of one layer overlap with the same type of sublattices of the next layer, making this stacking energetically unstable. The spectrum is
\[E_{r}^{\pm}=\pm v|\mathbf{p}|+2t\cos\!\left(\frac{r\pi}{n+1}\right),r=1,...,n\,. \tag{4}\]
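Eq. (4) can be verified directly by diagonalizing the \(2n\times 2n\) Bloch Hamiltonian at fixed momentum; a minimal numerical check (NumPy assumed, arbitrary test values):

```python
import numpy as np

v, t, n = 1.0, 0.3, 5
p1, p2 = 0.4, -0.7
H1 = np.array([[0, v * (p1 - 1j * p2)], [v * (p1 + 1j * p2), 0]])  # Eq. (1)
K = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)       # AA couplings
H = np.kron(np.eye(n), H1) + t * np.kron(K, np.eye(2))             # Eqs. (2), (3)
pred = sorted(s * v * np.hypot(p1, p2) + 2 * t * np.cos(r * np.pi / (n + 1))
              for s in (1, -1) for r in range(1, n + 1))
print(np.allclose(np.sort(np.linalg.eigvalsh(H)), pred))           # True
```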
The most energetically favorable one is ABAB... stacking. It is defined by
\[\mathbf{t}_{2i-1,2i}=t\left[\begin{array}{cc}0&1\\ 0&0\end{array}\right],\ \ \ \mathbf{t}_{2i,2i+1}=t\left[\begin{array}{cc}0&0\\ 1&0\end{array}\right], \tag{5}\]
and the spectrum is
\[E_{r}^{\pm}=\pm\sqrt{v^{2}|\mathbf{p}|^{2}+t^{2}\cos^{2}(\frac{r\pi}{n+1})}+t\cos \!\left(\frac{r\pi}{n+1}\right),r=1,...,n\,. \tag{6}\]
Another common example is ABC stacking with
\[\mathfrak{t}_{i-1,i}=t\left[\begin{array}{cc}0&1\\ 0&0\end{array}\right]. \tag{7}\]
The exact spectrum has not been solved; instead, the low-energy effective Hamiltonian is given by
\[H_{n}^{eff}=-\frac{1}{t^{n-1}}\left(\begin{array}{cc}0&v^{n}(p_{1}-ip_{2})^ {n}\\ v^{n}(p_{1}+ip_{2})^{n}&0\end{array}\right)\,. \tag{8}\]
with dispersion
\[E_{n}^{\pm}=\pm\frac{v^{n}p^{n}}{t^{n-1}}\,. \tag{9}\]
For n-layer graphene with irregular stacking, we apply the chiral decomposition: (i) identify the longest ABC-stacked chain, say of \(J_{1}\) layers, and partition it out; (ii) repeat step (i) until all the layers are exhausted. Then we have \(J_{1}+J_{2}+\cdots+J_{D}=n\), and the effective Hamiltonian of the whole multilayer graphene is
\[\mathcal{H}_{n}^{eff}\approx H_{J_{1}}\oplus H_{J_{2}}\oplus...H_{J_{D}}\,, \tag{10}\]
where each \(H_{J_{i}}\) is of the form of Eq. (8).
### Multilayer Haldane model
In this subsection, we discuss various kinds of multilayer Haldane models corresponding to multilayer graphene models considered in the previous subsection. The monolayer Haldane model has the Hamiltonian
\[H_{1}^{H}=h_{1}\sigma_{1}+h_{2}\sigma_{2}+h_{3}\sigma_{3}\,, \tag{11}\]
here we ignore the term proportional to the identity matrix for simplicity; it can be set to zero in the original Haldane model by setting \(\cos\phi=0\). The n-layer Haldane model then has the Hamiltonian
\[\mathbb{H}_{n}^{H}=\left[\begin{array}{ccccc}H_{1}^{H}&\mathfrak{t}_{12}^{T}&& \\ \mathfrak{t}_{12}&H_{1}^{H}&\mathfrak{t}_{23}^{T}&&\\ &\mathfrak{t}_{23}&H_{1}^{H}&\mathfrak{t}_{34}^{T}&\\ &&...&\\ &&\mathfrak{t}_{n-1,n}&H_{1}^{H}\end{array}\right]_{2n\times 2n} =\,\sum_{i}h_{i}\mathbbm{1}_{n}\otimes\sigma_{i}+T \tag{12}\] \[=\,h_{3}\mathbbm{1}_{n}\otimes\sigma_{3}+\mathbb{H}_{n}^{G}\,,\]
where
\[\mathfrak{t}_{i,i+1}=\left[\begin{array}{cc}0&t_{i,i+1}\\ 0&0\end{array}\right]\,\,\,\text{or}\,\,\left[\begin{array}{cc}0&0\\ t_{i,i+1}&0\end{array}\right]=\frac{t_{i,i+1}}{2}(\sigma_{1}\pm i\sigma_{2}) \tag{13}\]
are the interlayer hopping matrices. \(\mathbb{H}_{n}^{G}\) has the same matrix structure as \(\mathcal{H}_{n}^{G}\), and coincides with it at the K points. In the remainder of this paper, we will refer to \(\mathbb{H}_{n}^{G}\), instead of \(\mathcal{H}_{n}^{G}\), as multilayer graphene.
It is obvious that \(h_{3}\mathbbm{1}_{n}\otimes\sigma_{3}\) anti-commutes with \(h_{1}\mathbbm{1}_{n}\otimes\sigma_{1}\) and \(h_{2}\mathbbm{1}_{n}\otimes\sigma_{2}\). Let's show \(\{T,h_{3}\mathbbm{1}_{n}\otimes\sigma_{3}\}=0\) in the following:
\[\{\mathfrak{t}_{i,i+1},\sigma_{3}\}=0\Leftrightarrow\mathfrak{t}_{i,i+1} \sigma_{3}=-\sigma_{3}\mathfrak{t}_{i,i+1} \tag{14}\]
thus
\[\mathbbm{1}_{n}\otimes\sigma_{3}\,\,T=-T\,\,\mathbbm{1}_{n}\otimes\sigma_{3} \tag{15}\]
and thus
\[\{T,h_{3}\mathbbm{1}_{n}\otimes\sigma_{3}\}=0\,. \tag{16}\]
As a result, we get
\[\{h_{3}\mathbbm{1}_{n}\otimes\sigma_{3},\mathbb{H}_{n}^{G}\}=0\,, \tag{17}\]
and because of this anti-commutation relation, there is a one-to-one correspondence between \(\mathbb{H}_{n}^{G}\) and \(\mathbb{H}_{n}^{H}\).
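The relation (17) is straightforward to confirm numerically for an arbitrary irregular stacking; a small sketch with random parameters (NumPy assumed):

```python
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

rng = np.random.default_rng(0)
n = 4
h1, h2, h3 = rng.normal(size=3)
HG = np.zeros((2 * n, 2 * n), dtype=complex)
for i in range(n):
    HG[2*i:2*i+2, 2*i:2*i+2] = h1 * s1 + h2 * s2
for i in range(n - 1):               # mixed stacking; any choice per Eq. (13) works
    t = rng.normal()
    blk = np.array([[0, t], [0, 0]]) if i % 2 else np.array([[0, 0], [t, 0]])
    HG[2*(i+1):2*(i+1)+2, 2*i:2*i+2] = blk
    HG[2*i:2*i+2, 2*(i+1):2*(i+1)+2] = blk.T
S3 = h3 * np.kron(np.eye(n), s3)
print(np.max(np.abs(S3 @ HG + HG @ S3)))   # ~0, confirming Eq. (17)
```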
## III The one-to-one correspondence and the proof
In this section, we prove that for all the stacking types considered in [5] except AA stacking (treated in Sec. (IV.1)), the topological invariant responsible for the anomalous Hall conductivity equals the number of layers times that of the monolayer Haldane model. The basic logic is the same as in [10]: we compute the topological number in the zero-interlayer-hopping limit and then prove that the gap does not close for any finite values of the interlayer hoppings. This section is divided into four subsections: in Sec. (III.1) we calculate the topological invariant of the multilayer Haldane model when the interlayer hoppings are turned off; in Sec. (III.2) we prove a one-to-one correspondence between multilayer graphene and the multilayer Haldane model, which shows that the gap of the multilayer Haldane model cannot close anywhere in the Brillouin zone except at a gap-closing point of multilayer graphene; in Sec. (III.3) we study the gap closing condition of multilayer graphene and find that interlayer hoppings do not change it; in Sec. (III.4) we summarize the ingredients of the previous subsections and explain that the gap of the multilayer Haldane model is protected by the monolayer Haldane model gap, which together with the result in Sec. (III.1) completes the proof.
### Interlayer hopping zero limit
The result in this subsection is not new, but we include it here for the completeness of the proof. The topological invariant [14; 15; 16; 17] responsible for the conductivity of the anomalous quantum Hall effect is defined as follows:
\[\mathcal{N}[\mathbb{G}]=\frac{1}{3!}\int\frac{d^{3}p}{(2\pi)^{2}}\,\epsilon^{ ijk}\,\mathrm{Tr}\big{(}\mathbb{G}\partial_{i}\mathbb{G}^{-1}\mathbb{G} \partial_{j}\mathbb{G}^{-1}\mathbb{G}\partial_{k}\mathbb{G}^{-1}\big{)}\,. \tag{18}\]
In these expressions \(\mathbb{G}\) is the two-point Green's function of the electrons. In this section, we consider the limit in which the interlayer hopping is zero, namely \(t_{\perp}=0\). \(\mathbb{G}\) is expressed as
\[\mathbb{G}^{-1}=i\omega-\mathbb{H}_{n}^{H}|_{t_{\perp}=0}=\left[\begin{array} []{cccc}Q_{1}&&&\\ &Q_{2}&&\\ &&...&\\ &&&Q_{n}\end{array}\right]_{2n\times 2n}\,. \tag{19}\]
where \(Q_{i}=i\omega-H_{1}^{i}\) and \(i\) is the label of the layer. The matrix \(\mathbb{G}^{-1}\) becomes a direct sum of \(Q_{i}\). If a matrix \(\mathbb{G}\) is a direct sum of the two other matrices \(G_{1}\) and \(G_{2}\), then the topological
invariant, or the winding number, of \(\mathbb{G}\) will be the sum of the topological invariant of \(G_{1}\) and that of \(G_{2}\). Namely, if \(\mathbb{G}=\left[\begin{array}{cc}G_{1}&0\\ 0&G_{2}\end{array}\right]\), then \(N[\mathbb{G}]=N[G_{1}]+N[G_{2}]\) [18], because:
\[\mathcal{N}[\mathbb{G}] = \frac{1}{3!}\int\frac{d^{3}p}{(2\pi)^{2}}\,\epsilon^{ijk}\,\text{ Tr}\big{(}\mathbb{G}\partial_{i}\mathbb{G}^{-1}\mathbb{G}\partial_{j}\mathbb{G}^{-1} \mathbb{G}\partial_{k}\mathbb{G}^{-1}\big{)} \tag{20}\] \[= \frac{1}{3!}\int\frac{d^{3}p}{(2\pi)^{2}}\,\epsilon^{ijk}\,\text{ Tr}\big{(}G_{1}\partial_{i}G_{1}^{-1}G_{1}\partial_{j}G_{1}^{-1}G_{1}\partial_{k}G_{1 }^{-1}+G_{2}\partial_{i}G_{2}^{-1}G_{2}\partial_{j}G_{2}^{-1}G_{2}\partial_{ k}G_{2}^{-1}\big{)}\] \[= \mathcal{N}[G_{1}]+\mathcal{N}[G_{2}]\,.\]
Therefore,
\[\mathcal{N}[\mathbb{G}_{n}]=\sum_{i}\mathcal{N}[G_{i}]\,. \tag{21}\]
One can see that without the interlayer hoppings, the n-layer Haldane model has a topological invariant equal to the sum of the topological invariants of the individual layers. This remains true when we turn on the interlayer hoppings, provided the energy gap does not close, as will be shown in the next subsections.
### One-to-one correspondence
In this subsection we prove a lemma showing that there is a one-to-one correspondence between the energy spectra and eigenfunctions of multilayer graphene and those of the multilayer Haldane model.
We start by considering matrices
\[H_{H}=H_{G}+H_{3}\,, \tag{22}\]
satisfying
\[\{H_{G},H_{3}\}=0, \tag{23}\]
and
\[H_{3}^{2}=h_{3}^{2}\ \mathbb{1}\,, \tag{24}\]
where \(h_{3}\) is some number. These conditions are the ones satisfied by the Hamiltonians of multilayer graphene and multilayer Haldane model, shown in Sec.(II.2).
The lemma is as follows: for each eigen-equation
\[(H_{G}-E_{i}^{G})\psi_{i}^{G}=0\,, \tag{25}\]
there exists an eigen-equation
\[(H_{H}-E_{i}^{H})\psi_{i}^{H}=0\,, \tag{26}\]
satisfying
\[(E_{i}^{H})^{2}=h_{3}^{2}+(E_{i}^{G})^{2}\,, \tag{27}\]
and vice versa.
Proof: From Eq. (25) we get
\[\Big{(}H_{G}^{2}-(E_{i}^{G})^{2}\Big{)}\psi_{i}^{G}=0\,. \tag{28}\]
Eq. (22) (23)and (24) gives
\[H_{H}^{2}=H_{G}^{2}+h_{3}^{2}\ \mathbb{1}\;. \tag{29}\]
Substituting Eq. (29) into Eq. (28), we have
\[\Big{(}H_{H}^{2}-h_{3}^{2}-(E_{i}^{G})^{2}\Big{)}\psi_{i}^{G}=0\,. \tag{30}\]
There are two solutions for Eq. (30)
\[\Big{(}H_{H}\mp\sqrt{h_{3}^{2}+(E_{i}^{G})^{2}}\Big{)}\psi_{i}^{H\pm}=0\,. \tag{31}\]
with
\[\psi_{i}^{H\pm}=\Big{(}H_{H}\pm\sqrt{h_{3}^{2}+(E_{i}^{G})^{2}} \Big{)}\psi_{i}^{G}\,. \tag{32}\]
Define
\[E_{i}^{H} := \mbox{sgn}(E_{i}^{G})\sqrt{h_{3}^{2}+(E_{i}^{G})^{2}} \tag{33}\] \[\psi_{i}^{H} := \Big{(}H_{H}+\mbox{sgn}(E_{i}^{G})\sqrt{h_{3}^{2}+(E_{i}^{G})^{2 }}\Big{)}\psi_{i}^{G}\,, \tag{34}\]
we arrive at Eq. (26) and (27). The sign of \(E_{i}^{H}\) is chosen such that in the limit \(h_{3}\to 0\), Eq. (25) and (26) become identical, which means
\[E_{i}^{H}(h_{3}\to 0)\to E_{i}^{G}\,, \tag{35}\]
and also \(\psi_{i}^{H}\neq 0\). The above procedure can be reversed: starting from Eq. (26) we can derive Eq. (25) and (27), so the lemma is proven.
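The lemma can also be illustrated numerically: building \(\mathbb{H}_{n}^{G}\) for an arbitrary irregular stacking and adding \(h_{3}\mathbb{1}_{n}\otimes\sigma_{3}\), the two spectra match Eq. (33) to machine precision (a sketch, NumPy assumed):

```python
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

def graphene_block(h1, h2, hops):
    """H_n^G with intra-layer h1*s1 + h2*s2 and hoppings per Eq. (13);
    hops is a list of (t, upper?) pairs for the n - 1 nearest-layer couplings."""
    n = len(hops) + 1
    H = np.zeros((2 * n, 2 * n), dtype=complex)
    for i in range(n):
        H[2*i:2*i+2, 2*i:2*i+2] = h1 * s1 + h2 * s2
    for i, (t, upper) in enumerate(hops):
        blk = np.array([[0, t], [0, 0]]) if upper else np.array([[0, 0], [t, 0]])
        H[2*(i+1):2*(i+1)+2, 2*i:2*i+2] = blk
        H[2*i:2*i+2, 2*(i+1):2*(i+1)+2] = blk.T
    return H

h1, h2, h3 = 0.7, -0.4, 0.9
HG = graphene_block(h1, h2, [(1.3, True), (0.8, False), (2.1, True)])  # 4 layers
EG = np.linalg.eigvalsh(HG)
EH = np.linalg.eigvalsh(HG + h3 * np.kron(np.eye(4), s3))
print(np.max(np.abs(np.sort(np.abs(EH)) - np.sort(np.sqrt(h3**2 + EG**2)))))  # ~0
```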
Remark: the condition Eq. (24) makes sure that the correspondence is one-to-one. Without it, there is still a correspondence similar to Eq. (33) and (34) between the two spectra, with \(h_{3}^{2}\) replaced by the eigenvalues of \(H_{3}^{2}\), but it is not one-to-one.
### Gap closing condition of multilayer graphene
Eq. (33) tells us that the gap closing condition of the multilayer Haldane model, namely \(E_{i}^{H}=0\) for some \(i\), is that \(h_{3}=E_{i}^{G}=0\). In this subsection, we look for the gap closing condition of multilayer graphene, \(E_{i}^{G}=0\). The secular determinant of multilayer graphene is
\[\det\bigl{(}\mathbb{H}_{n}^{G}-E^{G}\mathbb{1}_{n}\bigr{)}=\det\bigl{(}h_{1} \mathbb{1}_{n}\otimes\sigma_{1}+h_{2}\mathbb{1}_{n}\otimes\sigma_{2}+T-E^{G} \mathbb{1}\bigr{)} \tag{36}\]
and the gap closes at half-filling when
\[\det\bigl{(}\mathbb{H}_{n}^{G}-E^{G}\mathbb{1}_{n}\bigr{)}\Big{|}_{E^{G}=0}= \det\bigl{(}\mathbb{H}_{n}^{G}\bigr{)}=0\,. \tag{37}\]
The choices of \(h_{1}\) and \(h_{2}\) that satisfy Eq. (37) determine the gap closing points. Though the stacking type and the values of \(t_{i,i+1}\) in general determine the energy eigenvalues of multilayer graphene, and it is very hard or maybe impossible to find the energy spectrum, evaluating Eq. (37) is doable.
Next we show that
\[\det\bigl{(}\mathbb{H}_{n}^{G}\bigr{)}=(-h_{1}^{2}-h_{2}^{2})^{n} \tag{38}\]
for \(\mathfrak{t}_{i,i+1}\) in Eq. (13) and any value of \(t_{i,i+1}\,,i=1,...,n-1\). When \(t_{i,i+1}=0\) for all \(i=1,...,n-1\) it is easy to see that
\[\det\bigl{(}\mathbb{H}_{n}^{G}\bigr{)}=(-h_{1}^{2}-h_{2}^{2})^{n}\,, \tag{39}\]
and we just need to prove that the terms involving \(t_{i,i+1}\) do not contribute to the determinant. We use mathematical induction. From here on, we write \(\mathfrak{t}_{i,i+1}\) as
\(\left[\begin{array}{cc}0&u_{i}\\ v_{i}&0\end{array}\right]\) for convenience. For \(n=2\) we have
\[\det\!\left(\mathbb{H}_{2}^{G}\right)\,=\,\left|\begin{array}{cccc}0&h_{1}-ih_{2}&0&v\\ h_{1}+ih_{2}&0&u&0\\ 0&u&0&h_{1}-ih_{2}\\ v&0&h_{1}+ih_{2}&0\end{array}\right|=(-h_{1}^{2}-h_{2}^{2})^{2}\,, \tag{40}\]
for either \(u=0\) or \(v=0\). Suppose for \(n=k\)
\[\det\!\left(\mathbb{H}_{k}^{G}\right)=(-h_{1}^{2}-h_{2}^{2})^{k}\,, \tag{41}\]
we calculate \(\det\!\left(\mathbb{H}_{k+1}^{G}\right)\). It is expressed as follows:
\[\det\!\left(\mathbb{H}_{k+1}^{G}\right) \tag{42}\] \[=\,\left|\begin{array}{cccccc}0&h_{1}-ih_{2}&0&v_{1}&0&0&{\bf 0 }\\ h_{1}+ih_{2}&0&u_{1}&0&0&0&{\bf 0}\\ 0&u_{1}&0&h_{1}-ih_{2}&0&v_{2}&{\bf 0}\\ v_{1}&0&h_{1}+ih_{2}&0&u_{2}&0&{\bf 0}\\ 0&0&0&u_{2}&0&h_{1}-ih_{2}&...\\ 0&0&v_{2}&0&h_{1}+ih_{2}&0&...\\ {\bf 0}&{\bf 0}&{\bf 0}&{\bf 0}&...&...&...\end{array}\right|\,.\]
Recall from Eq. (13) that for each \(i\) either \(u_{i}=0\) or \(v_{i}=0\). Let us consider the cases \(u_{1}=0\) and \(v_{1}=0\) separately; the remaining nonzero entries can contribute to the determinant only if their cofactors are nonzero. If \(u_{1}=0\), we can show that the cofactors of the two \(v_{1}\)'s are equal to zero:
\[A_{4,1}=-\left|\begin{array}{cccccc}h_{1}+ih_{2}&0&0&0&0&{\bf 0}\\ 0&0&0&0&v_{2}&{\bf 0}\\ v_{1}&0&h_{1}+ih_{2}&u_{2}&0&{\bf 0}\\ 0&0&0&0&h_{1}-ih_{2}&...\\ 0&0&v_{2}&h_{1}+ih_{2}&0&...\\ {\bf 0}&{\bf 0}&{\bf 0}&...&...&...\end{array}\right|=0 \tag{43}\]
because the second column is zero and similarly \(A_{1,4}\) is zero because its second row is zero.
If \(v_{1}=0\), the cofactors of the two \(u_{1}\)s are zero:
\[A_{2,3}=-\left|\begin{array}{cccccc}0&h_{1}-ih_{2}&0&0&0&\mathbf{0}\\ 0&u_{1}&h_{1}-ih_{2}&0&v_{2}&\mathbf{0}\\ 0&0&0&u_{2}&0&\mathbf{0}\\ 0&0&u_{2}&0&h_{1}-ih_{2}&...\\ 0&0&0&h_{1}+ih_{2}&0&...\\ \mathbf{0}&\mathbf{0}&\mathbf{0}&...&...&...\\ \end{array}\right|. \tag{44}\]
which vanishes because the first column is zero; similarly \(A_{3,2}\) is zero because its first row is zero. Therefore there is no contribution from \(u_{1}\) or \(v_{1}\). As a result,
\[\det\bigl{(}\mathbb{H}_{k+1}^{G}\bigr{)}=(-h_{1}^{2}-h_{2}^{2})\det\bigl{(} \mathbb{H}_{k}^{G}\bigr{)}=(-h_{1}^{2}-h_{2}^{2})^{k+1}\,. \tag{45}\]
Therefore, the gap closing condition is still \(h_{1}=h_{2}=0\), unchanged by the interlayer hoppings. Remark: this result can be generalized: the \(h_{1}\) and \(h_{2}\) in each layer can be different, \(h_{1}^{i}\) and \(h_{2}^{i}\); then we have \(\det\bigl{(}\mathbb{H}_{n}^{G}\bigr{)}=\prod_{i=1}^{n}(-(h_{1}^{i})^{2}-(h_{2}^{i})^{2})\), and the gap closing condition is \(h_{1}^{i}=h_{2}^{i}=0\) for one of the \(i\)'s.
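The identity Eq. (38), together with its layer-dependent generalization, is easy to test numerically (a sketch, NumPy assumed; the stacking pattern is drawn at random, with one of \(u_{i},v_{i}\) zero per coupling):

```python
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])

rng = np.random.default_rng(1)
n = 6
h1, h2 = rng.normal(size=2)
H = np.zeros((2 * n, 2 * n), dtype=complex)
for i in range(n):
    H[2*i:2*i+2, 2*i:2*i+2] = h1 * s1 + h2 * s2
for i in range(n - 1):
    t = rng.normal()
    # per Eq. (13): for each coupling, either u_i = 0 or v_i = 0
    blk = np.array([[0, t], [0, 0]]) if rng.random() < 0.5 else np.array([[0, 0], [t, 0]])
    H[2*(i+1):2*(i+1)+2, 2*i:2*i+2] = blk
    H[2*i:2*i+2, 2*(i+1):2*(i+1)+2] = blk.T
print(np.linalg.det(H).real, (-h1**2 - h2**2) ** n)  # equal up to rounding
```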
### The final proof
Let us summarize the conclusions of the previous subsections: from Sec. (II.2), the Hamiltonian of the multilayer Haldane model is composed of \(h_{3}\mathbb{1}_{n}\otimes\sigma_{3}\) and the Hamiltonian of multilayer graphene, which anti-commute with each other, and from Sec. (III.2) this leads to a one-to-one correspondence between their spectra, Eq. (33). As long as \(AA\) stacking is excluded, we showed in Sec. (III.3) that the multilayer graphene band gap closes, namely \(E_{i}^{G}=0\), only when \(h_{1}=h_{2}=0\); but then \(h_{3}\neq 0\), as is required by the gap of the monolayer Haldane model:
\[h_{1}^{2}+h_{2}^{2}+h_{3}^{2}>0\,. \tag{46}\]
Therefore \(h_{3}=0\) and \(E_{i}^{G}=0\) can never be satisfied simultaneously, except possibly in the limit \(t_{i,i+1}\rightarrow\infty\). So the topological invariant of the multilayer Haldane model is not modified by the introduction of nearest-neighbor interlayer hoppings, even for irregular stacking types. Sec. (III.1) shows that in the limit of zero interlayer hoppings, the topological invariant of the multilayer Haldane model is the sum of the topological invariants of the individual monolayer Haldane
models. This completes the proof. In a sense, our proof is not even limited to the Haldane model, because we do not need detailed information about \(h_{1}\), \(h_{2}\) and \(h_{3}\).
Remark: This conclusion can be generalized to the situation where, for each layer with label \(i\), \(h_{1}^{i}\) and \(h_{2}^{i}\) need not be the same as in the other layers. The bottom line is that \(h_{3}\) has to be the same in every layer in order to anti-commute with the \(T\) matrix. Since each layer is an insulator, \((h_{1}^{i})^{2}+(h_{2}^{i})^{2}+h_{3}^{2}>0\). Therefore there is no phase transition when turning on any of the interlayer hoppings, and the topological invariant of the whole layered system is the sum of the topological invariants of each layer.
## IV Examples of models that do have phase transitions
To show the non-triviality of the result of the previous section, we consider two examples that do have phase transitions, each violating one of the requirements above. For simplicity we only consider bilayer models. The models are: AA stacking, in which Eq. (17) is not obeyed, and \(\alpha\beta\&\beta\alpha\) stacking, in which \(h_{3}=0\) and \(E_{i}^{G}=0\) can be simultaneously satisfied.
### AA stacking
The AA stacking model is defined as follows:
\[H_{AA}^{2} = \left[\begin{array}{cccc}h_{3}&h_{1}-ih_{2}&t&&\\ h_{1}+ih_{2}&-h_{3}&&t\\ t&&h_{3}&h_{1}-ih_{2}\\ &t&h_{1}+ih_{2}&-h_{3}\end{array}\right] \tag{47}\] \[=: \sum_{i=1}^{3}h_{i}\mathbbm{1}\otimes\sigma_{i}+t\sigma_{1}\otimes 1\:.\]
\(H_{AA}^{2}\) is block-diagonalized by a transformation \(S=\frac{1}{\sqrt{2}}(\mathbbm{1}-i\sigma_{2})\otimes\mathbbm{1}\):
\[SH_{AA}^{2}S^{-1}=\left[\begin{array}{cccc}h_{3}+t&h_{1}-ih_{2}&\\ h_{1}+ih_{2}&-h_{3}+t&\\ &&h_{3}-t&h_{1}-ih_{2}\\ &&h_{1}+ih_{2}&-h_{3}-t\end{array}\right] \tag{48}\]
therefore the spectrum is
\[E_{AA}=\pm t\pm\sqrt{\sum_{i}h_{i}^{2}}=:\pm t\pm h\,. \tag{49}\]
The band-closing points are
\[|t|=h_{\min}\quad\text{or}\quad|t|=h_{\max}\,, \tag{50}\]
where \(h_{\min}\) and \(h_{\max}\) are the extrema of \(h\) over the Brillouin zone; if we tune the value of \(t\) from zero to infinity, the bilayer model transitions from a topological/normal insulator (depending on the monolayer model) to a metal and then to a normal insulator.
### \(\alpha\beta\&\beta\alpha\) stacking
We define \(\alpha\beta\&\beta\alpha\) stacking as follows: there are two types of interlayer hoppings, from the \(\alpha\) sublattice degrees of freedom to the \(\beta\) ones, and from the \(\beta\) sublattice degrees of freedom to the \(\alpha\) ones. The continuum models were introduced in [19; 20]. The Hamiltonian is
\[H^{2}_{\alpha\beta\&\beta\alpha} = \left[\begin{array}{cccc}h_{3}&h_{1}-ih_{2}&&v\\ h_{1}+ih_{2}&-h_{3}&u&&\\ &u&h_{3}&h_{1}-ih_{2}\\ v&&h_{1}+ih_{2}&-h_{3}\end{array}\right] \tag{51}\] \[= \sum_{i=1}^{3}h_{i}\mathbb{1}\otimes\sigma_{i}+\frac{u+v}{2} \sigma_{1}\otimes\sigma_{1}+\frac{u-v}{2}\sigma_{2}\otimes\sigma_{2}\] \[=: \sum_{i=1}^{3}h_{i}\mathbb{1}\otimes\sigma_{i}+t_{1}\sigma_{1} \otimes\sigma_{1}+t_{2}\sigma_{2}\otimes\sigma_{2}\,.\]
When \(u=0\) or \(v=0\), this Hamiltonian reduces to the bilayer case of the model in Eq. (12). Although the gap does not close for the model in Eq. (12), for general \(u\) and \(v\), as we will see, the gap does close. The spectrum of this Hamiltonian satisfies
\[E^{2}_{\alpha\beta\&\beta\alpha}=h_{3}^{2}+h_{1}^{2}+h_{2}^{2}+t_{1}^{2}+t_{2}^{2}\pm 2\sqrt{h_{1}^{2}t_{1}^{2}+h_{2}^{2}t_{2}^{2}+t_{1}^{2}t_{2}^{2}}\,. \tag{52}\]
Now we define the following function
\[f := (h_{1}^{2}+h_{2}^{2}+t_{1}^{2}+t_{2}^{2})^{2}-4(h_{1}^{2}t_{1}^{2}+h_{2}^ {2}t_{2}^{2}+t_{1}^{2}t_{2}^{2})\,, \tag{53}\]
and then the gap closing condition can be reinterpreted as
\[\left\{\begin{array}{c}f=0\\ h_{3}=0\,.\end{array}\right. \tag{54}\]
\(f\) is simplified as
\[f=(h_{1}^{2}-h_{2}^{2}-t_{1}^{2}+t_{2}^{2})^{2}+4h_{1}^{2}h_{2}^{2}\,. \tag{55}\]
So the gap closing condition becomes
\[\left\{\begin{array}{c}t_{1}^{2}-t_{2}^{2}=h_{1}^{2}-h_{2}^{2}\ \mbox{or}\ uv=h_{1}^{2}-h_{2}^{2}\\ h_{1}h_{2}=0\\ h_{3}=0\,.\end{array}\right. \tag{56}\]
In the case of the bilayer model of Eq. (12), all three equations cannot be simultaneously satisfied because \(uv=0\). So the \(uv=0\) case is topologically nontrivial, if the monolayer model is topological. There is a phase transition from the topologically nontrivial phase into the topologically trivial phase when tuning \(|uv|\) from zero to infinity: from Eq. (56) we can see that when increasing \(|uv|\) there are always some values of \(|uv|\), say \(|uv|_{0}\), that close the gap, and after all the gap-closing values the phase becomes trivial, because it is smoothly connected to the case \(u\rightarrow\pm\infty\) and \(v\rightarrow\pm\infty\).
Figure 1: The phase diagram of \(H^{2}_{\alpha\beta\&\beta\alpha}\). The colored regions are topologically nontrivial if the monolayer Hamiltonian is so.
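Returning to Eq. (52): it can be sanity-checked numerically for random parameters (a sketch, NumPy assumed; the Kronecker ordering, layer \(\otimes\) sublattice, is a convention that does not affect the spectrum):

```python
import numpy as np

s0 = np.eye(2)
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

rng = np.random.default_rng(0)
h1, h2, h3, t1, t2 = rng.normal(size=5)
H = (h1 * np.kron(s0, s1) + h2 * np.kron(s0, s2) + h3 * np.kron(s0, s3)
     + t1 * np.kron(s1, s1) + t2 * np.kron(s2, s2))               # Eq. (51)

root = np.sqrt(h1**2 * t1**2 + h2**2 * t2**2 + t1**2 * t2**2)
E2 = h1**2 + h2**2 + h3**2 + t1**2 + t2**2 + np.array([-2, 2]) * root
pred = np.sort(np.concatenate([-np.sqrt(E2), np.sqrt(E2)]))       # Eq. (52)
print(np.allclose(np.sort(np.linalg.eigvalsh(H)), pred))          # True
```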
Next, for simplicity, let us consider a Wilson fermion [21] for each layer of Eq. (51) instead of the Haldane model, and study the phase transition. This model was also proposed by Qi, Wu and Zhang to be realized as a spin Hall effect in two-dimensional paramagnetic semiconductors [22]. The Hamiltonian of the Wilson fermion is
\[H_{W}=(m+\cos p_{1}+\cos p_{2})\sigma_{3}+\sin p_{1}\sigma_{1}+\sin p_{2}\sigma _{2}\,, \tag{57}\]
From Eq. (56), the gap closing condition is
\[\mbox{If }uv>0\left\{\begin{array}{ll}&uv=\sin^{2}p_{1}\\ &\sin p_{2}=0\\ &m\pm 1+\cos p_{1}=0\,.\end{array}\right. \tag{58}\] \[\mbox{If }uv<0\left\{\begin{array}{ll}&uv=-\sin^{2}p_{2}\\ &\sin p_{1}=0\\ &m\pm 1+\cos p_{2}=0\,.\end{array}\right. \tag{59}\]
in which \(+\) or \(-\) is taken such that \((1\pm m)^{2}\leq 1\). Eq. (58) and (59) can be combined into one condition
\[(uv)^{2}=(1-(1\pm m)^{2})^{2}\,. \tag{60}\]
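As a check of Eq. (60): for \(m=-1\) the condition predicts gap closing at \((uv)^{2}=1\), e.g. at \(u=v=1\), where the gap closes at \((p_{1},p_{2})=(\pi/2,0)\). A brute-force scan over the Brillouin zone confirms this (a sketch, NumPy assumed):

```python
import numpy as np

s0 = np.eye(2)
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

def bilayer_gap(m, u, v, n=201):
    """Minimal |E| over a BZ grid for Eq. (51) with Wilson-fermion layers, Eq. (57)."""
    t1, t2 = (u + v) / 2, (u - v) / 2
    ps = np.linspace(-np.pi, np.pi, n)
    gap = np.inf
    for p1 in ps:
        for p2 in ps:
            h1, h2, h3 = np.sin(p1), np.sin(p2), m + np.cos(p1) + np.cos(p2)
            H = (h1 * np.kron(s0, s1) + h2 * np.kron(s0, s2) + h3 * np.kron(s0, s3)
                 + t1 * np.kron(s1, s1) + t2 * np.kron(s2, s2))
            gap = min(gap, np.min(np.abs(np.linalg.eigvalsh(H))))
    return gap

print(bilayer_gap(-1.0, 1.0, 1.0))   # ~0: uv = 1 hits the critical value of Eq. (60)
print(bilayer_gap(-1.0, 0.5, 1.0))   # finite: uv = 0.5 is off the critical set
```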
## V Discussion
Though we proved that the gap does not close for any finite value of the interlayer hopping \(t_{i,i+1}\), the gap does get smaller and smaller as we increase the value of \(t_{i,i+1}\), and it closes at \(t_{i,i+1}=\infty\). The reason is that at \(t_{i,i+1}=\infty\), there is one \(E^{G}\) that goes to zero for all values of \(h_{1}\) and \(h_{2}\), so the gap closing point of the multilayer Haldane model is determined solely by \(h_{3}=0\). In [10] we showed this for the bilayer model. Now we show that it is also true for the n-layer model, at least when all \(t_{i,i+1}\) have the same value \(t_{\perp}\). When we increase the value of \(t_{\perp}\) such that \(t_{\perp}\gg\sqrt{h_{1}^{2}+h_{2}^{2}}\), the low-energy effective theory near the K points in [3] becomes a good approximation throughout the Brillouin zone. And we have
\[{\cal H}_{n}^{G}\approx H_{J_{1}}\oplus H_{J_{2}}\oplus...H_{J_{D}}\,, \tag{61}\]
where
\[H_{J_{i}}=-\frac{1}{t_{\perp}^{J_{i}-1}}\left(\begin{array}{cc}0&(h_{1}-ih_ {2})^{J_{i}}\\ (h_{1}+ih_{2})^{J_{i}}&0\end{array}\right) \tag{62}\]
where the \(J_{i}\) are integers corresponding to the chiral decomposition and \(J_{1}+J_{2}+\cdots+J_{D}=n\). The energy eigenvalues
\[E_{J_{i}}=\pm\frac{\sqrt{h_{1}^{2}+h_{2}^{2}}^{\,J_{i}}}{t_{\perp}^{J_{i}-1}}\to 0\text{ as }t_{\perp}\to\infty \tag{63}\]
for \(J_{i}\geq 2\). But it is impossible for all the \(J_{i}\) to equal 1, because of the mechanism of the chiral decomposition. Therefore, if \(t_{\perp}\) becomes too big, though it is impossible to transition into another topological/trivial insulator phase, the material practically becomes a semiconductor.
From the proof we can see that multilayer Haldane models, even with irregular stacking, can be used to build devices with large Chern numbers, since the phase is understood. Moreover, from Sec. (III.4), as the behavior is predictable when changing \(h_{1}^{i}\) and \(h_{2}^{i}\) in each layer, we can change the Chern number not only by adding or removing layers but also by modifying each layer. One way to do this is to consider distant-neighbor intra-layer hoppings: as shown in [11], distant-neighbor hoppings can also create large Chern numbers, so the combination of both methods can give even more structures.
In this paper we have found the one-to-one correspondence between multilayer graphene and multilayer Haldane models; there may be more interesting phenomena of multilayer Haldane models to be discovered. We only considered nearest-neighbor interlayer hoppings. As shown for multilayer graphene [23], distant interlayer hoppings can induce trigonal warping in the band structure. It would be interesting to understand what happens to the multilayer Haldane model if we allow distant interlayer hoppings; the one-to-one correspondence may be modified or even ruined. We leave this as a future direction.
###### Acknowledgements.
X. Wu is grateful for valuable discussions with C.X.Zhang.
|
2308.15037 | Is it an i or an l: Test-time Adaptation of Text Line Recognition Models | Recognizing text lines from images is a challenging problem, especially for
handwritten documents due to large variations in writing styles. While text
line recognition models are generally trained on large corpora of real and
synthetic data, such models can still make frequent mistakes if the handwriting
is inscrutable or the image acquisition process adds corruptions, such as
noise, blur, compression, etc. Writing style is generally quite consistent for
an individual, which can be leveraged to correct mistakes made by such models.
Motivated by this, we introduce the problem of adapting text line recognition
models during test time. We focus on a challenging and realistic setting where,
given only a single test image consisting of multiple text lines, the task is
to adapt the model such that it performs better on the image, without any
labels. We propose an iterative self-training approach that uses feedback from
the language model to update the optical model, with confident self-labels in
each iteration. The confidence measure is based on an augmentation mechanism
that evaluates the divergence of the prediction of the model in a local region.
We perform rigorous evaluation of our method on several benchmark datasets as
well as their corrupted versions. Experimental results on multiple datasets
spanning multiple scripts show that the proposed adaptation method offers an
absolute improvement of up to 8% in character error rate with just a few
iterations of self-training at test time. | Debapriya Tula, Sujoy Paul, Gagan Madan, Peter Garst, Reeve Ingle, Gaurav Aggarwal | 2023-08-29T05:44:00Z | http://arxiv.org/abs/2308.15037v1 | # Is it an i or an l: Test-time Adaptation of Text Line Recognition Models
###### Abstract
Recognizing text lines from images is a challenging problem, especially for handwritten documents due to large variations in writing styles. While text line recognition models are generally trained on large corpora of real and synthetic data, such models can still make frequent mistakes if the handwriting is inscrutable or the image acquisition process adds corruptions, such as noise, blur, compression, etc. Writing style is generally quite consistent for an individual, which can be leveraged to correct mistakes made by such models. Motivated by this, we introduce the problem of adapting text line recognition models during test time. We focus on a challenging and realistic setting where, given only a single test image consisting of multiple text lines, the task is to adapt the model such that it performs better on the image, without any labels. We propose an iterative self-training approach that uses feedback from the language model to update the optical model, with confident self-labels in each iteration. The confidence measure is based on an augmentation mechanism that evaluates the divergence of the model's prediction in a local region. We perform rigorous evaluation of our method on several benchmark datasets as well as their corrupted versions. Experimental results on multiple datasets spanning multiple scripts show that the proposed adaptation method offers an absolute improvement of up to 8% in character error rate with just a few iterations of self-training at test time.
## 1 Introduction
Text line recognition [14, 15] has been a challenging problem in the field of computer vision and machine learning for several decades. The task of recognizing handwritten text involves understanding and interpreting human handwriting, which has a free-flowing nature [1], in various languages and styles, making it a complex and multifaceted problem. Over the years, various sophisticated models [1, 13, 14, 15] have been developed, which are trained on large corpora of labeled and synthetic data. However, recognizing handwriting still remains a challenge due to large variations in styles across individuals. Additionally, the image acquisition process often adds corruptions such as blur, noise, and compression, making recognition even more challenging. Deep learning models are known to have generalization issues, leading to a significant drop in performance under distribution shifts [16, 17], corruptions [16], etc. This is no different for text line recognition. To solve this problem, we develop an algorithm to adapt an existing text line recognition model to specific writing styles during test time, given only a single test image.
Unsupervised domain adaptation [16, 17] is a well-studied problem, where the task is to adapt a model trained on a labeled source dataset to a new target dataset using only unlabeled samples from the target. Such a paradigm may not align with real-world settings where we are given only a single test instance for adaptation, without access to any training data. Recently, there has been a pragmatic direction of work, namely Test-Time Adaptation (TTA) [23, 24, 15], where a model needs to be adapted on the fly, using only a few test samples, with no access to the training data. While these methods need multiple test instances to show performance gains, we may not have multiple pages of handwriting for every test writer or style. Hence, we specifically look into the problem of single-image test-time adaptation for text line recognition models.
Writing style generally varies significantly across individuals, and is often considered the signature of an individual. For example, one writer's "i" may look like an "l", their "n" may look like an "r", and so on (Figure 1). However, instead of manually listing all such idiosyncrasies in handwriting across individuals, we would like to develop an algorithm that can learn them automatically on the fly. Typically, one individual's idiosyncrasies in writing style do not carry over to other individuals. Thus, we do not adopt the online paradigm of updating models, but rather reset the model to the source model after every adaptation. This not only keeps the model from diverging, but also avoids potential privacy concerns [22].

Figure 1: Sample image from the GNHK dataset [14] showing how the letter "o" might look like "q" or "a" in handwritten text, which is the writer's style. We want our algorithm to adapt to such styles on the fly during test time.
TTA for image classification and semantic segmentation has been of recent interest in the literature. A few TTA methods [23, 1] require access to training data to learn from additional self-supervised losses, which can then be optimized on the test instances. Other methods like [22, 23] do not assume access to source data, but adapt to test data by minimising entropy over a batch, where each batch has several data samples. On the contrary, we look into the problem of adapting to one writer style at a time, using only a single test instance. We also do not modify the training strategy, which would otherwise need access to the training data. Moreover, some of the metrics used in these works, such as entropy, are non-trivial to compute for text line recognition models, specifically for CTC-decoder-based models, as there can be many mappings to the same output string [1].
Existing works on adaptation of text line recognition models require access to large amounts of unlabeled target data along with labeled source data [23, 24, 25, 26]. But this may not always be possible due to the privacy/storage concerns entailed by the source data. Other methods like [10, 11] need a few labeled samples of the new writer to adapt the source model. Our method does not need any access to the source data, can be applied to any off-the-shelf text line recognizer, and does not need any labeled data during test time for adaptation. To the best of our knowledge, TTA from a single handwritten image of a writer, without access to any source data, and without using any writer identification information during source model training, is a novel and more realistic setting which has not been explored yet. Table 1 shows the comparison of different related works in the literature.
Our TTA setting takes as input a single handwritten image of a writer, containing a few lines of handwritten text. We look into a decoupled text line recognition model consisting of an encoder (optical part) and a decoder (language model). Such a model allows us to plug and play the language model (LM) based on the domain at hand. Our algorithm exploits both the encoder and the decoder for adaptation. As shown in Figure 2, we first progressively update the optical model using the output from the LM decoder via a weighted CTC loss. The weights are obtained by judging the model's confidence in a local region around the input image. The iterative nature helps the algorithm self-improve beyond the original model. We use a computationally efficient character n-gram language model within the loop to get pseudo-labels of higher quality than the optical model's prediction alone, which acts as an additional supervisory source. After updating the optical model, which mostly looks at local context, we exploit longer-context information via a Large Language Model (LLM). Specifically, we get the top-k predictions from the updated model using the beam search algorithm, and then re-rank them based on the log-likelihood score of an LLM, which looks at the entire line. We show that these two steps are complementary to each other and offer a significant improvement in performance.
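A schematic sketch of this loop is given below. Everything named here (`lm_decode`, `ctc_loss`, `augment`, `llm_score`, `beam_search`, the optimizer) is a hypothetical placeholder rather than our actual implementation, and the agreement-based confidence shown is just one simple instantiation of the local divergence measure described above:

```python
import copy

def adapt_to_image(source_model, make_optimizer, lines, steps=5, k=8, views=4):
    """Single-image test-time adaptation with an LM in the loop (sketch only).

    All callables used here (lm_decode, ctc_loss, augment, llm_score,
    model.beam_search) are hypothetical placeholders, not a real API.
    """
    model = copy.deepcopy(source_model)       # reset to source for every image
    optimizer = make_optimizer(model)
    for _ in range(steps):                    # iterative self-training
        loss = 0.0
        for img in lines:
            pseudo = lm_decode(model(img))    # character n-gram LM in the loop
            # Confidence: stability of the prediction in a local region
            # around the input, probed with small augmentations.
            agree = sum(lm_decode(model(augment(img))) == pseudo
                        for _ in range(views)) / views
            loss = loss + agree * ctc_loss(model(img), pseudo)
        optimizer.zero_grad(); loss.backward(); optimizer.step()
    # Longer-context pass: re-rank top-k beam hypotheses with a line-level LLM.
    return [max(model.beam_search(img, k), key=llm_score) for img in lines]
```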
We perform experiments on five benchmark datasets: ICDAR2015-HTR [27], GNHK [13], IAM [12], CVL [14] and KOHTD [15]. We also create corrupted versions of these datasets similar to [1] for image classification, to simulate corruptions which may occur during image acquisition. We perform rigorous qualitative and quantitative experiments, and ablation studies to show the efficacy of the proposed method. The key contributions of this paper are:
1. We introduce the problem of single image test time adaptation for the novel objective of adapting text line recognition to individual styles.
2. We develop an algorithm that uses LM in the loop to update the optical model, followed by refining the predictions by looking at longer context using an LLM.
3. Our algorithm shows significant performance improvements over baselines [1] on multiple datasets.
## 2 Related Work
**Test-Time Adaptation.** Test-Time Adaptation (TTA) has been recently studied in the literature for classification [22, 13] and segmentation tasks [23, 14]. These methods can be grouped into two broader categories. First, methods in which the training algorithm includes additional heads for self-supervised tasks [23, 13, 12]. These additional losses are also optimized on the test images during test time. Self-supervised losses include rotation prediction [23], self-reconstruction [1], student-teacher feature prediction
\begin{table}
[Table body not recoverable from extraction: each row marks, with checkmarks, whether a method requires access to source data, target labels, few or only one target sample, and writer-specific information; the final row (Ours) requires only a single unlabeled target image.]
\end{table}
Table 1: Comparison of different methods of adaptation. Our TTA setting considers a very challenging realistic scenario where the model has to be adapted to a single handwriting image, without access to any source data, target data or target labels.
(Bartler et al., 2022), etc. The hypothesis is that such losses not only regularize the model during training, but also improve the performance when optimized for test samples, given the gradients for such losses align with the gradients computed w.r.t. the actual labels of the test samples. The second group of methods, on the other hand, do not modify the training strategy and only update the source model using some pseudo-losses such as entropy (Zhang et al., 2021; Wang et al., 2021), self-label cross-entropy (Goyal et al., 2022; Chen et al., 2022), or some specific parameters of the network, such as batch-normalization (Khurana et al., 2021; Hu et al., 2021; Schneider et al., 2020; Nado et al., 2020). Our method falls into the second category where we do not modify the training strategy and do not add any additional head to the network, thus making it applicable to any off-the-shelf text line recognition model.
As these methods are designed for tasks such as classification, it is non-trivial to extend them to sequence learning tasks, which is the focus of this paper. We still adapt some of these methods to the problem at hand, and compare with them in Section 4. Most of these methods do not use any additional source of information beyond the network's prediction to improve. In this work, we leverage a language model to improve beyond the optical model's performance.
**Domain Adaptation for Text Line Recognition.** Compared to TTA for classification and segmentation tasks where the notion of style is limited, in text line recognition, the notion of style is quite prevalent, as it can vary widely across writers. There have been some works on adaptation of text line recognition models, which can be classified into three broader categories. First, adapting to an entire dataset with a lot of unlabeled samples (Kang et al., 2020; Tang et al., 2022; Fogel et al., 2020; Zhang et al., 2019). Most of the losses and techniques used in such works are analogous to those used in classification and segmentation tasks in literature. The second category of work (Wang and Du, 2022; Kohut et al., 2023) assumes access to writer identification information while training the source model, and trains a separate module to extract writer-specific information. Finally, the third category of work operates in the few shot adaptation technique, assuming access to a few labeled instances for new test writers (Bhunia et al., 2021).
Contrary to the aforementioned works, our method needs no access to the source dataset, no information about writers during training, and no labeled data for the test writers, and it adapts to only one image of handwriting at test time. This makes our method useful for real-world scenarios and applicable to any off-the-shelf model, without making any changes to the model training.
## 3 Methodology
We first formally define the problem statement, then explain the architecture of the model we use, and finally describe the proposed test-time adaptation approach.
### Problem Statement
Consider that we are given a text line recognition model which, given an image, predicts the text in it as a sequence of characters, i.e., \(f_{\theta}:\mathbf{x}\rightarrow\mathbf{y}\), where \(\mathbf{x}\in\mathbb{R}^{h\times w\times 3}\) and \(\mathbf{y}\in\mathbb{V}^{n}\), where \(\mathbb{V}\) is the vocabulary of characters and \(n\) varies with the model's prediction.
We consider the problem of test-time adaptation given an image of a page consisting of multiple text lines. We can apply off-the-shelf text line detectors to get a list of text lines \(\{\mathbf{x}_{i}\}_{i=1}^{m}\). We can then apply the text line recognizer \(f_{\theta}\) on the individual lines to get the predicted text. This line recognizer model would generally perform well on handwriting styles that it has seen before, or are similar to such styles. But when encountered with new styles of handwriting, or corruptions on such handwritten images, the model's performance may degrade. Our objective is to adapt the model \(f_{\theta}\) so that it performs better on the test image than the source model. In our setting, after adapting to a test image, we reset the model back to the source model before adapting to a new image.
### Model Architecture
Text line recognition models generally consist of two parts - an encoder, which takes as input a raw image and outputs
Figure 2: **Framework Overview. Given an input image, we extract the lines from it. We then pass the original and an augmented version of each line through the line recognition model, which comprises an optical model and a language model. We then compare the outputs for the original and augmented versions of the lines to get a confidence measure for each line. Based on this measure, we progressively choose lines and use their self-labels to compute the CTC loss and update the optical model. After updating the optical model in this way, we extract the top-k predictions for every line along with their scores. We then pass these top-k predictions through a large language model to obtain likelihood scores, combine them with the line recognition scores to re-score the top-k predictions, and finally keep only the top-1.**
features or logits, and a decoder, which produces a sequence of output characters from the features generated by the encoder. The decoder can utilize an explicit language model to encode knowledge about the domain and correct the errors made by the optical model. There are two broad categories of models - end-to-end encoder-decoder models Li et al. (2021), and de-coupled encoder-decoder models trained separately Diaz et al. (2021). In the latter, the optical model is trained on image data using the CTC loss, and the language model is trained on text data only. Typically, the two are combined during inference using scores from both models. In this work, we use this de-coupled model as the source model, as it is often lightweight, and the de-coupled language model decoder allows a plug-and-play approach for new domains. We follow the self-attention based model architecture of Diaz et al. (2021). The composite text line recognizer is represented as \(f_{\theta}=h\circ g_{\theta}\).
**Optical Encoder** (\(g_{\theta}\)) comprises of a convolutional neural network (CNN) like MobileNet Howard et al. (2017), followed by multiple transformer-like multi-headed self-attention layers Vaswani et al. (2017) to capture longer range context. Finally, a linear classifier is used to predict the symbols for every frame. The input to this block is \(\mathbf{x}\in\mathbb{R}^{h\times w\times 3}\), and the output is logits \(g_{\theta}(\mathbf{x})\in\mathbb{R}^{w^{\prime}\times C}\), where \(w^{\prime}\) is a down-sampled version of \(w\), and \(C\) is the number of character symbols in the vocabulary. This model is learned using CTC loss Graves et al. (2006) between the ground-truth label string and the logit.
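To make the shape bookkeeping concrete, the following is a minimal PyTorch sketch of such an encoder; the backbone, layer sizes, and downsampling scheme are illustrative assumptions rather than the exact configuration of Diaz et al. (2021).

```python
import torch
import torch.nn as nn

class OpticalEncoder(nn.Module):
    """Sketch of the optical model g_theta: CNN backbone + self-attention +
    per-frame classifier. Layer sizes and the backbone are assumptions."""
    def __init__(self, num_symbols: int, d_model: int = 256,
                 n_layers: int = 4, n_heads: int = 4):
        super().__init__()
        self.backbone = nn.Sequential(            # stand-in for MobileNet
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, d_model, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((1, None)),      # collapse height -> (B, d, 1, w')
        )
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.attention = nn.TransformerEncoder(layer, n_layers)
        self.classifier = nn.Linear(d_model, num_symbols)  # C includes the CTC blank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = self.backbone(x).squeeze(2).transpose(1, 2)  # (B, w', d_model)
        return self.classifier(self.attention(feats))        # logits (B, w', C)
```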
**Language Model Decoder** (\(h\)) takes in the encoder network's logits and combines it with the language model to decode text content from it. The decoded string \(\mathbf{y}^{*}\) can be obtained by optimizing the following -
\[\mathbf{y}^{*}=\operatorname*{arg\,max}_{\mathbf{y}}p(\mathbf{y}|\mathbf{x})p (\mathbf{y})^{\alpha} \tag{1}\]
where \(\alpha\) is the weight of the language model. It is interesting to note that the formulation of the CTC algorithm Graves et al. (2006) is such that multiple strings can map to the same final decoded output. An approximate solution to the above problem is obtained by using beam decoding which combines the language model scores, \(p(\mathbf{y})\) and optical model scores, \(p(\mathbf{y}|\mathbf{x})\) at every step of decoding.
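In log space, the fused quantity being maximized can be sketched as below; the candidate strings, scores, and the value of \(\alpha\) are purely illustrative.

```python
def fused_score(log_p_optical: float, log_p_lm: float, alpha: float) -> float:
    """Log-space version of Eq. (1): log[p(y|x) p(y)^alpha]; in beam decoding
    this quantity is accumulated at every expansion step."""
    return log_p_optical + alpha * log_p_lm

# toy ranking of two finished beams (numbers and alpha are illustrative)
beams = {"hello world": (-3.2, -8.1), "hella world": (-3.0, -14.5)}
best = max(beams, key=lambda y: fused_score(*beams[y], alpha=0.5))
```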
### Test-Time Adaptation
Given an image consisting of multiple lines as input, we want to adapt the model and refine its predictions such that it performs better than when we apply the original source model only. Our adaptation process consists of two parts - updating the optical model, with a computationally efficient language model in the loop (local context), and finally tuning the predictions for every line using a large language model (global context). We next discuss these in detail.
**Adapting the Optical Model** As discussed before, there are often idiosyncrasies in an individual's handwriting which are typically consistent. Understanding them would help the model to adapt and personalize the predictions. We develop a self-training mechanism to automatically identify such idiosyncrasies and update the optical model. It is interesting to note that the optical model alone can make certain errors, which the language model corrects. This acts as a feedback signal to the optical model and improves its performance. However, the confidence of the model can vary across lines, and we would want the model to self-improve by progressively learning from confident lines. We next define the measure of confidence we use in our algorithm.
**Confidence of the Optical Model.** The predictions of deep neural networks have been often found to be incorrect with high confidence Nguyen et al. (2015). Because of the highly non-linear nature of neural networks, small perturbations in inputs have been shown to have huge differences in outputs Goodfellow et al. (2014). We hypothesize that smoother transitions in the model's outputs owing to changes in the input, leads to better correlation between confidence and correctness. In other words, if we perturb the image by a small amount, the predictions should not be perturbed significantly. We use this idea and formalize a measure to assign confidence to every line prediction in the image. This can be formally represented as follows,
\[c(f_{\theta}(\mathbf{x}))=1-d(f_{\theta}(\mathbf{x}),f_{\theta}(\mathbf{\hat{ x}})) \tag{2}\]
where \(\mathbf{\hat{x}}\) is an augmented version of the image \(\mathbf{x}\), and \(d()\) can be any distance measure. In our algorithm, we use the normalized edit distance (NED) (\(=\textsc{EditDis}(f_{\theta}(\mathbf{x}),f_{\theta}(\mathbf{\hat{x}}))/|f_{\theta}(\mathbf{x})|\)) between the two strings as the distance measure. We use very light augmentations to obtain \(\mathbf{\hat{x}}\), as we want to judge the local smoothness of the function. More details are discussed in Section 4.
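A minimal Python sketch of this confidence measure, assuming string predictions and a plain Levenshtein distance (the clamping to \([0,1]\) is our assumption):

```python
def edit_distance(a: str, b: str) -> int:
    # standard dynamic-programming Levenshtein distance
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[-1] + 1,                # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def confidence(pred: str, pred_aug: str) -> float:
    # c = 1 - NED, with NED = EditDist(pred, pred_aug) / |pred|  (Eqn 2)
    if not pred:
        return 0.0
    ned = edit_distance(pred, pred_aug) / len(pred)
    return max(0.0, 1.0 - ned)  # clamping to [0, 1] is our assumption
```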
**Self-Training Loss.** The CTC loss used to train a text line recognizer is at a line level, rather than at a frame level (analogous to pixels in segmentation). Hence, we can only gather self-labels at a line level. Given an image, we first extract the lines from it using any off-the-shelf text line detector Diaz et al. (2021) to obtain a list of lines \(\{\mathbf{x}_{i}\}_{i=1}^{m}\). We pass these lines through the encoder and the decoder to obtain self-labels \(\{\mathbf{\hat{y}}_{i}=f_{\theta}(\mathbf{x}_{i})\}_{i=1}^{m}\). As we only train the optical model using self-training, we can then compute the CTC loss, \(\mathcal{L}_{CTC}\), between the self-labels (output of the decoder) and the output of the encoder, i.e., the optical model. Now, as the model may not be equally confident for all lines, we use the confidence of the model in Eqn 2 to weight the losses. The optimization problem we solve can be represented as follows:
\[\theta^{*}=\operatorname*{arg\,min}_{\theta}\sum_{i=1}^{m}c(f_{\theta}( \mathbf{x}_{i}))\mathcal{L}_{CTC}\big{(}g_{\theta}(\mathbf{x}_{i}),\mathbf{ \hat{y}}_{i}\big{)} \tag{3}\]
Note that all learnable parameters of the network pertain to \(g_{\theta}\), and the above equation optimizes it only. One can also compute the self-labels using just \(g_{\theta}\), i.e., from the optical model's output itself instead of using the language model incorporated decoder \(h\circ g_{\theta}\). But, we hypothesize that the language model acts as an additional supervisory signal, which helps to improve the performance. We also show that ablation in Section 4. We do not compute any gradients through the confidence function \(c()\), but they are updated in every step through forward propagation, as is discussed next.
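A possible PyTorch sketch of the weighted objective of Eqn 3, with the confidences detached so that no gradients flow through \(c()\); the tensor layouts are assumptions:

```python
import torch
import torch.nn.functional as F

def weighted_ctc_loss(logits, targets, target_lengths, weights, blank=0):
    """Confidence-weighted CTC objective of Eqn 3 (sketch).

    logits:         (B, T, C) raw per-frame outputs of the optical model
    targets:        1-D tensor of concatenated label ids for the B self-labels
    target_lengths: (B,) lengths of the individual self-labels
    weights:        (B,) confidences c(f_theta(x_i)); treated as constants
    """
    log_probs = F.log_softmax(logits, dim=-1).transpose(0, 1)   # (T, B, C)
    B, T = logits.shape[0], logits.shape[1]
    input_lengths = torch.full((B,), T, dtype=torch.long)
    per_line = F.ctc_loss(log_probs, targets, input_lengths, target_lengths,
                          blank=blank, reduction="none")        # (B,)
    return (weights.detach() * per_line).sum()
```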
**Progressive Updates.** Although we use a confidence measure \(c()\) to weight the CTC loss for every line, the confidence metric as well as the self-labels get better as we adapt. Hence, we progressively update the model starting with high confidence predictions, while updating the self-labels as well as the confidence measure in each iteration. This allows the model to identify the writer's style and self-improve, compared to fixing the self-labels and the confidence measure once. We start from the most confident lines, and in each iteration, we keep adding the next most confident ones progressively. Thus, considering we back-propagate for \(K\) iterations in total, in iteration \(k\leq K\) we progressively add the \(m_{k}=(k/K)m\) samples which have the highest values of the confidence metric, to back-propagate and optimize Eqn 3.
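The selection of the top-\(m_{k}\) lines can be sketched as follows (the tie-breaking and the floor of at least one line are our assumptions):

```python
def progressive_subset(confidences, k, K):
    """Indices of the top-m_k most confident lines at iteration k,
    with m_k = (k/K) * m as in the progressive schedule."""
    m = len(confidences)
    m_k = max(1, (k * m) // K)
    order = sorted(range(m), key=lambda i: confidences[i], reverse=True)
    return order[:m_k]
```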
Our method to adapt the optical model is shown in Algorithm 1. In each progressive iteration, we first make a forward pass to compute the self-labels \(\mathbf{\hat{y}}_{i}\) and the confidence measure \(c(f_{\theta}(\mathbf{x}_{i}))\), and then backpropagate only through the fraction of the samples selected by the confidence measure. Note that our model learns from self-labels throughout all iterations, so there is a possibility of divergence if the initial predictions turn out to be largely incorrect. Hence, after adaptation, we compare the final prediction with the initial prediction, and if the edit distance between the two is greater than a certain threshold (=\(0.75\) used in all experiments), then we keep the initial prediction itself.
The above process only updates the encoder, i.e., the optical model. We next discuss how we can tune the predictions using a large language model, instead of the computationally cheap LM used in the decoder \(h\).
```
Require: Source model \(f_{\theta}=h\circ g_{\theta}\); list of lines \(\{\mathbf{x}_{i}\}_{i=1}^{m}\)
Ensure: Updated model \(f_{\theta^{*}}=h\circ g_{\theta^{*}}\)
 1: \(\theta_{0}\leftarrow\theta\)
 2: for \(k=1\dots K\) do
 3:    \(\mathbf{\hat{x}}_{i}\leftarrow\text{Augment}(\mathbf{x}_{i}),\forall i\in[1,m]\)
 4:    // Forward propagate
 5:    Obtain \(f_{\theta}(\mathbf{x}_{i})\), \(f_{\theta}(\mathbf{\hat{x}}_{i})\) and self-labels \(\mathbf{\hat{y}}_{i}=f_{\theta}(\mathbf{x}_{i})\), \(\forall i\in[1,m]\)
 6:    \(c(f_{\theta}(\mathbf{x}_{i}))\leftarrow 1-d(f_{\theta}(\mathbf{x}_{i}),f_{\theta}(\mathbf{\hat{x}}_{i})),\forall i\in[1,m]\)
 7:    // Back propagate over the top-\(m_{k}\) most confident lines
 8:    \(\theta_{k}\leftarrow\theta_{k-1}-\eta\sum_{i\in\text{top-}m_{k}}c(f_{\theta}(\mathbf{x}_{i}))\nabla_{\theta}\mathcal{L}_{CTC}\big{(}g_{\theta}(\mathbf{x}_{i}),\mathbf{\hat{y}}_{i}\big{)}\)
 9: end for
10: \(\theta^{*}\leftarrow\theta_{K}\)
```
**Algorithm 1** Adapting the Optical Model
**Adaptation using a Large Language Model.** The optical model looks at local context to make a prediction. However, longer context often helps to correct some of the errors. Large Language Models (LLMs), which use word tokens instead of characters, offer a good mechanism to extract the longer-context information present in natural text. While an LLM could be used in Eqn 1 itself, that would be computationally expensive. To avoid this, we extract the top-k predictions from the CTC decoder, \(\{\mathbf{y}^{\mathbf{i}}\}_{i=1}^{k}\), and then re-score the lines using an LLM. We use a pre-trained FlanT5-XL model Chung et al. (2022) as the language model for re-scoring. The log-likelihood score of the LLM can be computed as follows:
\[\mathcal{L}_{LLM}=-\sum_{j=1}^{W}\log P(\mathbf{w}_{j}|\mathbf{w}_{1},..., \mathbf{w}_{j-1}) \tag{5}\]
where \(\mathbf{w}_{j}\) represents the \(j^{th}\) word token in a sentence \(\mathbf{y}\). Using only the LLM loss may cause hallucination, so to remain grounded in the actual text in the image, we add the optical score \(\mathcal{L}_{opt}\), which is a combination of the scores from the optical logits and the n-gram LM used in the decoder. Finally, we take a weighted sum of these two scores to pick the best candidate amongst the generated top-k predictions as follows -
\[\text{best\_candidate}=\operatorname*{arg\,min}_{i}\mathcal{L}_{opt}^{i}+w* \mathcal{L}_{LLM}^{i} \tag{6}\]
where \(w\) is the weight given to the LLM score, which is a hyperparameter (set to \(0.5\) for all our experiments).
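A minimal sketch of the re-scoring step of Eqns 5-6; GPT-2 is used here purely as a small stand-in for the FlanT5-XL scorer of the paper, and the summed negative log-likelihood is only approximate:

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tok = AutoTokenizer.from_pretrained("gpt2")            # stand-in scorer
lm = AutoModelForCausalLM.from_pretrained("gpt2").eval()

@torch.no_grad()
def llm_negloglik(text: str) -> float:
    # Eqn 5: L_LLM = -sum_j log P(w_j | w_1, ..., w_{j-1})
    ids = tok(text, return_tensors="pt").input_ids
    loss = lm(ids, labels=ids).loss          # mean NLL per predicted token
    return loss.item() * (ids.size(1) - 1)   # approximate summed NLL

def rescore(candidates, optical_scores, w=0.5):
    # Eqn 6: best candidate minimizes L_opt + w * L_LLM over the top-k list
    totals = [s + w * llm_negloglik(c) for c, s in zip(candidates, optical_scores)]
    return candidates[totals.index(min(totals))]
```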
## 4 Experiments
To showcase the effectiveness of our method, we perform thorough experimentation and qualitative analysis on five benchmark datasets.
**Datasets**:
**ICDAR2015-HTR**Sanchez et al. (2015) consists of handwritten historical documents in English. This is a particularly challenging dataset due to the variance in writing styles and quality of images. We use the entire dataset of \(433\) images (train, test, and validation) for testing. We use the split of the dataset which contains line-level annotations.
**GoodNotes Handwriting Collection (GNHK)**Lee et al. (2021) is a handwritten English text dataset, sourced from different regions in the world. It contains various types of texts, like diary notes, shopping lists, etc. captured
Figure 3: Exemplars from the datasets used in this paper.
through a mobile phone. This dataset has only word-level annotations, with line ids. We convert these to line annotations by concatenating the word annotations using spaces, and then joining the word images. We use the train and test splits, consisting of \(687\) images, for evaluation. We omit lines which are printed or contain math symbols, as there are no transcriptions for the latter.
**IAM Handwriting Database**[14] consists of English handwritten text: \(1539\) pages from \(657\) unique writers. The images in this dataset are much clearer than those in the other datasets, and it also has line-level annotations. We use the entire dataset for evaluation.
**CVL Database**[11] consists of handwritten text from \(7\) source texts (1 German and 6 English), totalling \(1598\) pages. \(310\) writers participated in the dataset: 27 wrote all 7 texts and 283 wrote 5 texts. We use lines from both train and test splits to evaluate the efficacy of our approach.
**KOHTD**[11] consists of a large collection of exam papers filled in by students in the Kazakh (99%) and Russian (1%) languages. It contains a total of 1891 images, with word-level annotations for each page. A sample image from each of the datasets is shown in Figure 3.
**Corruptions**: In real world scenarios, image acquisition generally adds corruptions to the image such as blur, noise, compression, etc. Following [1], we create \(19\) different corrupted versions for each of the five datasets (Figure 4).
**Implementation Details**: The source model is trained in the same way and on the same datasets as in [1]. This model is strong, as it is trained on a large amount of labeled and synthetically generated text lines. To calculate the confidence measure for all lines, we augment them using three augmentations, viz., mean filtering, median filtering and sharpening. Our n-gram model is also similar to [1] (\(n=9\)). For test-time adaptation, we use SGD with a momentum of 0.9 and a learning rate of \(10^{-3}\). All lines in an image form a single batch of data, to which the model is adapted. We follow the same model architecture and configurations as in [1]. Only the optical encoder \(g_{\theta}\) is adapted to a writer's handwriting.
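A sketch of the three light augmentations using PIL; the kernel sizes and the sharpness factor are assumptions, not the paper's exact settings:

```python
import random
from PIL import Image, ImageEnhance, ImageFilter

def light_augment(line_img: Image.Image) -> Image.Image:
    """One of the three light augmentations used for the confidence measure."""
    kind = random.choice(["mean", "median", "sharp"])
    if kind == "mean":
        return line_img.filter(ImageFilter.BoxBlur(1))        # mean filter
    if kind == "median":
        return line_img.filter(ImageFilter.MedianFilter(3))   # median filter
    return ImageEnhance.Sharpness(line_img).enhance(2.0)      # sharpening
```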
**Baselines**: To the best of our knowledge, this is the first work on test-time adaptation of text line recognition models from only a single test image, without accessing any source data to adapt or without learning any writer identification models while training the source model. As there is no direct baseline in the literature for this use case, we consider three strong baselines from the TTA literature for classification and segmentation, namely BatchNormalization Adaptation (BN) [10], TENT [20], and Prediction Time Normalization (PTN) [15]. These are suited for our problem statement as they do not need any additional changes to the source model training pipeline. While BN and PTN are trivial to extend to this use case, the same is not the case for TENT, which was designed for classification tasks. TENT minimizes the entropy of the predictions by modifying only the affine parameters of the batch normalization layer. As the problem at hand involves a many-to-one mapping from input frames to final sequence, computing the entropy over unique mappings is non-trivial and computationally intractable. Thus, we minimize the mean entropy of all the frames over all lines in the page. We do it for the same number of iterations as in our algorithm.
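The frame-level entropy objective used for our TENT variant can be sketched as follows (the tensor layout is an assumption):

```python
import torch
import torch.nn.functional as F

def mean_frame_entropy(logits: torch.Tensor) -> torch.Tensor:
    """Mean Shannon entropy over all frames of all lines in the page;
    logits has shape (B, T, C). In the TENT variant, only the affine
    parameters of the normalization layers receive gradients from this loss."""
    log_p = F.log_softmax(logits, dim=-1)
    entropy = -(log_p.exp() * log_p).sum(dim=-1)   # (B, T)
    return entropy.mean()
```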
**Comparison with baselines:** Table 2 shows the performance of our approach compared against the three baselines and the source model. For all five datasets, the proposed approach outperforms all others. The magnitude of improvement is further established when the experiment is repeated on the corrupted versions of the datasets, as shown in Figure 4. The relative improvements across the \(19\) corruptions
\begin{table}
\begin{tabular}{l l c c c c c} \hline\hline
Dataset & & Source & BN & PTN & TENT & Ours \\ \hline
IC15-HTR & Original & 14.4 & 17.0 & 46.8 & 14.3 & **13.2** \\
 & Corrupted & 27.6 & 30.6 & 61.6 & 27.6 & **25.3** \\ \hline
GNHK & Original & 14.0 & 14.2 & 21.2 & 14.3 & **13.3** \\
 & Corrupted & 24.4 & 24.2 & 38.6 & 25.5 & **23.0** \\ \hline
IAM & Original & 6.0 & 6.1 & 12.5 & 6.1 & **5.4** \\
 & Corrupted & 15.4 & 15.1 & 28.7 & 15.5 & **13.8** \\ \hline
CVL & Original & 8.6 & 8.7 & 14.6 & 8.6 & **7.7** \\
 & Corrupted & 26.4 & **25.8** & 41.8 & 26.4 & 25.9 \\ \hline
KOHTD & Original & 23.3 & 23.3 & 29.4 & 23.3 & **16.8** \\
 & Corrupted & 32.0 & 31.9 & 40.8 & 32.0 & **25.2** \\ \hline\hline
\end{tabular}
\end{table}
Table 2: Performance (CER - lower better) comparison of the proposed method with baselines. “Original” denotes the performance on the non-corrupted dataset and “Corrupted” denotes the average performance over all corruptions. The proposed approach outperforms all, both on original and corrupted versions.
Figure 4: An example original image line and \(19\) corruptions used from [1].
for two Latin and one non-Latin dataset are shown in Figure 5. Please refer to the appendix for corruption-wise improvements. Please note that the original algorithm for TENT uses a batch of more than 100 images, with online updates, i.e., without resetting the model back to the source model after every update, whereas in this work we reset after every adaptation. Moreover, we observe that changing the BatchNorm parameters of the network has a significant impact on the performance. We can see this from the performance of BN, which combines the target statistics (mean and variance) with the source statistics using a convex combination. PTN takes this to the extreme by completely replacing the source statistics with the target's.
**Ablation of different choices in self-training**: There are several design choices in the proposed self-training algorithm, namely, a) whether to update the self-labels in every learning iteration, b) whether to include the self-labels in the training set progressively instead of taking all of the lines at once, c) whether to weight the loss using a confidence metric, and finally d) the confidence measure we use to choose the lines for progressive updates. We conduct an ablation over these choices to observe their individual importance. Table 3 shows this analysis for ICDAR2015-HTR (please refer to the appendix for the other datasets). The augmentation-based NED metric we use in our algorithm performs better (\(0.8\%\)) than just using the CTC loss itself as the confidence metric. Moreover, progressive updates lead to a **1.6\(\%\)** improvement over using all of the lines in back-propagation for all iterations.
**Ablation of Optical and LLM updates**: Our test time adaptation algorithm consists of two parts - adapting the optical model and re-scoring using a large language model. We show an ablation by switching on/off these two blocks in Table 5. The first row represents the source model. As we can see, the optical model updates and LLM re-scoring on their own bring about \(2.0\%\) and \(0.4\%\) improvement on average over all corruptions, and the composite model outperforms the source model by about **2.2\(\%\)**.
**Ablation of number of iterations**: In our approach, if a page has \(m\) lines, we progressively add \(\nicefrac{{m}}{{K}}\) lines based on the confidence metric rank in each iteration, where \(K\) is the total number of progressive update iterations. In all our experiments over all datasets, we use \(K=4\). Figure 6 shows the variations in performance for different values of \(K\). The performance variations are more apparent for corruptions where the performance is low. If too few updates are used, then there is less scope for improvement through self-labeling. On the other hand, if too many rounds of updates are performed, model outputs tend to diverge, effectively over-fitting to inaccurate self-labeled data.
**What are the changes that the model makes?**: We analyze the changes in character predictions the model makes that lead to improved performance. We create a list of all replacement edits between the ground-truth and the source model predictions, as well as between the ground-truth and the TTA-adapted model. We then look at the replacements which appear in the first list but not in the second. We analyze this on the ICDAR2015-HTR dataset and present the most frequent replacements in
Figure 5: Absolute improvement (in \(\%\)) obtained with our approach for various corruptions. Apart from minor regressions for a couple of corruption types, the proposed approach shows significant improvement across the board.
\begin{table}
[Table body not recoverable from extraction: the rows toggle, with checkmarks, the four design choices (updating self-labels each iteration, progressive inclusion, confidence weighting, and CTC vs. NED as the confidence measure), with CER values 26.9, 26.6, 27.4, 25.5 and **24.7**; the best result (**24.7**) corresponds to the full method with the NED confidence.]
\end{table}
Table 3: Ablation for self-training algorithm to update the optical model. All methods are trained for the same number of iterations. “NED” here denotes the normalized edit distance, which is the distance function used to compute the confidence measure. All performances are average over \(20\) datasets (original + 19 corruptions).
Table 4. Most replacements occur among similar looking characters and our TTA algorithm is able to figure them out on the fly.
**The role of LM in adapting the optical model**: As shown in Figure 2, our TTA algorithm includes the language model in the loop to update the optical model. This plays an interesting role, compared to other works in TTA which do not use any extra information. As the model is updated using self-training, it can often diverge because of incorrect self-labels. However, the LM acts as a correction module, preventing accidental divergence. We performed an experiment without the LM in our TTA algorithm and observed a CER of \(29.5\%\) averaged over all corruptions on GNHK, compared to \(23.9\%\) when the LM is used in TTA. We also plot the performance improvement of the source model with and without using the LM in the decoder vs. the performance difference with and without using the LM in TTA (Figure 7). It shows that for corruptions where using the LM in decoding offers a higher improvement for the source model's inference, using the LM in TTA shows an even greater improvement compared to TTA without the LM. This further highlights the importance of using the LM in TTA, particularly when the optical model is more confused.
## 5 Conclusion
We introduce the problem of adapting text line recognition models to specific writers using a single test image, without accessing any labeled source data. We develop a method that first progressively adapts the optical model, using an augmentation based confidence function (local context). We further tune the predictions using an LLM which looks at the context of the entire line. Through rigorous experiments and ablation studies on five benchmark datasets, we establish the efficacy of our method.
\begin{table}
\begin{tabular}{l|cccccccccccccccccccc} \hline\hline
Before & \(o\) & \(l\) & \(a\) & \(a\) & \(n\) & \(e\) & \(i\) & \(m\) & \(e\) & \(e\) & \(T\) & \(a\) & \(n\) & \(u\) & \(a\) & \(s\) & \(o\) & \(d\) & \(e\) & \(r\) \\
After & \(e\) & \(t\) & \(o\) & \(e\) & \(r\) & \(o\) & \(e\) & \(n\) & \(i\) & \(a\) & \(t\) & \(u\) & \(e\) & \(e\) & \(r\) & \(e\) & \(a\) & \(t\) & \(s\) & \(n\) \\ \hline
Count & 182 & 155 & 151 & 150 & 149 & 135 & 122 & 108 & 103 & 93 & 92 & 91 & 87 & 87 & 84 & 82 & 80 & 80 & 80 & 78 \\ \hline\hline
\end{tabular}
\end{table}
Table 4: Most frequent character replacements made by our TTA algorithm that lead to a better performance.
Figure 6: **Performance variation for different number of updates.** Each line shows the performance for one corruption. The average performance is highlighted in the darker shade.
Figure 7: **The role of LM in adapting the optical model.** Each point denotes the relative improvement the LM introduces for one corrupted version of the GNHK dataset. The x-axis shows the CER improvement of the LM decoder over the greedy decoder when evaluating the source model; the y-axis shows the CER improvement of the LM decoder over the greedy decoder when our proposed method is used.
2303.17552 | Structure Formation in Non-local Bouncing Models | In this study, we investigate the growth of structures within the
Deser-Woodard nonlocal theory and extend it to various bouncing cosmology
scenarios. Our findings show that the observable structure growth rate,
$f\sigma_8$, in a vacuum-dominated universe is finite within the redshift range
of $0<z<2$, contrary to previous literature. Although $f\sigma_8$ exhibits no
divergences, we observe a slight difference between the evolution of the
$\Lambda$CDM and the non-local DW II models. Regarding structure formation in
bouncing cosmologies, we evaluate the evolution of $f\sigma_8$ near the
bouncing point. Among the different bouncing cases we explore, the oscillatory
bounce and pre-inflationary asymmetrical bounce demonstrate a physical profile
where the growth rate begins as a small perturbation in the early epoch and
increases with inflation, which can be regarded as the seeds of large-scale
structures. These findings are significant because they shed light on the
growth of seed fluctuations into cosmic structures resulting from non-local
effects. | D. Jackson, R. Bufalo | 2023-03-30T17:20:56Z | http://arxiv.org/abs/2303.17552v1 | # Structure Formation in Non-local Bouncing Models
###### Abstract
In this study, we investigate the growth of structures within the Deser-Woodard nonlocal theory and extend it to various bouncing cosmology scenarios. Our findings show that the observable structure growth rate, \(f\sigma_{8}\), in a vacuum-dominated universe is finite within the redshift range of \(0<z<2\), contrary to previous literature. Although \(f\sigma_{8}\) exhibits no divergences, we observe a slight difference between the evolution of the \(\Lambda\)CDM and the non-local DW II models. Regarding structure formation in bouncing cosmologies, we evaluate the evolution of \(f\sigma_{8}\) near the bouncing point. Among the different bouncing cases we explore, the oscillatory bounce and pre-inflationary asymmetrical bounce demonstrate a physical profile where the growth rate begins as a small perturbation in the early epoch and increases with inflation, which can be regarded as the seeds of large-scale structures. These findings are significant because they shed light on the growth of seed fluctuations into cosmic structures resulting from non-local effects.
Keywords: Non-local effects, Structure Formation, Modified Gravity, Bouncing Cosmology.

D. Jackson,\({}^{a}\) R. Bufalo\({}^{b,1}\)

\({}^{a}\)Instituto de Fisica Teorica, IFT-UNESP, R. Dr. Bento Teobaldo Ferraz, 271, Varzea da Barra Funda, Sao Paulo - SP, 01140-070, Brazil

\({}^{b}\)Departamento de Fisica, Universidade Federal de Lavras, Caixa Postal 3037, 37200-900 Lavras, MG, Brazil

E-mail: [email protected], [email protected]
Footnote 1: Corresponding author.
## 1 Introduction
Despite the fact that more than two decades have passed since the seminal discovery of the accelerated expansion of the Universe [1; 2], and that it dominates the Universe's energy budget and pushes galaxies away at an accelerated pace, the physical mechanism behind it is unclear and still under debate. The minimal modification of Einstein gravity able to handle the accelerated expansion of the current universe is known as the standard cosmological model, or \(\Lambda\)CDM. This model does not change the geometric terms of the Einstein field equations; rather, it introduces an extra, assumed component of matter, called dark energy, in the form of a cosmological constant \(\Lambda\), which is ultimately interpreted as the energy density of the vacuum.
Although the \(\Lambda\)CDM model possesses a simple structure, and is a formally and observationally consistent model, it carries some unsolved puzzles. In the context of the accelerated expansion of the Universe, we have the coincidence problem: \(\Lambda\)CDM cannot explain why the accelerated phase of the expansion began only recently in cosmological time. Consequently, in order to address some of these unsolved puzzles, a wealth of alternative, more elaborate cosmological models is continuously being developed and proposed, either by changing the matter content (dark energy models) or by modifying gravity (modifying the Einstein-Hilbert action to provide extra geometric terms in the field equations).
Several modified gravitational theories were proposed as attempts to generalize Einstein's gravitational theory, usually involving the addition of new degrees of freedom. This can be achieved by the insertion of new fields, by considering a different geometrical framework, or even by enforcing a symmetry principle. Typically, these new models are required to emulate the background expansion history of the universe given by \(\Lambda\)CDM, which is well supported by the data. The imposition of this condition is called the reconstruction problem [3; 4]. Once this step is fulfilled, one can observationally distinguish among models by looking at their predictions beyond the background, such as solar system tests and the structure formation in the universe [5; 6; 7]. It is precisely within the implications of modified gravity models that our interest lies, in particular examining whether bouncing cosmologies [8; 9; 10; 11; 12; 13] produce physically well-behaved patterns within the context of formation of large scale structures.
An approach to modifying GR inspired by infrared (IR) quantum corrections is given by the _non-local theories_, initially proposed in [14; 15]. In ref. [14] the effective equations for the gravitational fields were obtained using a non-local approximation for the quantum effective action, and quantum corrections to the Newtonian potential were derived. In contrast, ref. [15] proposed the addition of a term proportional to \(R\square^{-1}R\) to the Einstein-Hilbert action, following a purely phenomenological approach. This kind of non-local term, involving inverse powers of the d'Alembertian, appears in the IR limit of the quantum effective action [16; 17; 18; 19]. The issues of causality, domain of validity and boundary conditions in non-local classical and quantum field theories have been discussed [23; 25], and in both cases physically viable models can be constructed. In recent years, we have seen a great interest in phenomenological aspects of nonlocal gravity models [20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30].
Our point of interest is the improved Deser-Woodard model [31], inspired by quantum effective action corrections, which refines their previous model [20] in order to fully satisfy the screening mechanism and also to reproduce the late time accelerated expansion of \(\Lambda\)CDM via the reconstruction procedure, without the necessity of a cosmological constant. 1 This improved model, which we will call DW II, minimally modifies the Einstein-Hilbert action by the presence of an algebraic function of a non-local operator
Footnote 1: The original DW I model, based on the Lagrangian \(\mathcal{L}=Rf\left(\square^{-1}R\right)\) where \(f\) is an algebraic function, has been shown to be inconsistent, since it failed a decisive test by not satisfying the screening mechanism that avoids non-local effects at the Solar System scale, thus violating observational constraints [23].
\[\mathcal{L}=\frac{1}{16\pi G}R\left[1+f(Y)\right], \tag{1}\]
where
\[Y =\square^{-1}g^{\mu\nu}\partial_{\mu}X\partial_{\nu}X\,, \tag{2a}\] \[X =\square^{-1}R\,. \tag{2b}\]
Here \(\square=g^{\mu\nu}D_{\mu}D_{\nu}\) is the covariant d'Alembertian operator and \(R\) is the curvature scalar. Notably, the term \(g^{\mu\nu}\partial_{\mu}X\partial_{\nu}X\) is negative at the Solar System scale and positive at the cosmological scale, which makes a screening effect viable.
The two Deser-Woodard models have already been examined in the context of structure formation [32; 33; 34; 35]. The authors studied the growth rate \(f\sigma_{8}\) predicted by the DW model in a \(\Lambda\)CDM background, and found that the models lead to a good agreement with the Redshift-space distortions (RSD) observations, which are known to provide a big database for testing modified gravity models. However, some discrepancies have been found among these analyses.
We therefore revisit the analysis of the structural growth rate of the Universe for the DW II model in order to shed some light on its issues, and show how our results disagree with those in [35]. Our analysis shows a growth rate \(f\sigma_{8}\) that is continuous for \(0<z<2\), see Fig. 2, while \(f\sigma_{8}\) has a prominent discontinuity in [35]. Furthermore, we also extend the bouncing solutions examined at the level of background cosmology in the DW II model [30] to the early-time perturbations, by discussing bouncing models in the context of structure formation. The most interesting result is that in some bouncing universes the growth of seed fluctuations into cosmic (large scale) structure can be ascribed to non-local effects.
The main interest of the present work is to analyze the perturbative growth of structures in the \(\Lambda\)CDM for the DW II model and extend it to five different bouncing cosmology models: symmetric bounce [36; 37; 38], oscillatory bounce [9; 36; 39], matter bounce [40; 41], finite time singularity model [42; 43; 44; 45] and pre-inflationary asymmetric bounce [46]. The paper is organized as follows: in Sec. 2 we review the main features of the DW II model, in particular the reconstruction process to determine the distortion function \(f(Y)\) that emulates the \(\Lambda\)CDM cosmology. In Section 3 we work out the cosmological (time) perturbation, considering scalar perturbations over a flat FLRW geometry background, and evaluate numerically the solution for the matter density contrast and its physical observable, the structural growth rate \(f\sigma_{8}\). We discuss these results in order to highlight some possible causes of the disagreement with those in [35]. In Section 4 we examine the evolution of the growth rate \(f\sigma_{8}\) for the aforementioned bounce models. For the cases of oscillatory and asymmetrical bouncing universes, we find that they render physically acceptable patterns for \(f\sigma_{8}\), which allows us to conclude that the growth of seed fluctuations into cosmological structures can be ascribed to non-local effects. At last, we present our final remarks and perspectives in Sec. 5.
## 2 Reconstruction procedure and background equations
In this section we shall describe the main aspects of the analysis regarding the zeroth order perturbative field equations (based on the action (1)), which rests on the reconstruction process used to obtain the solution for the distortion function \(f(Y)\). The first step in the reconstruction process is to localize the action, which can be achieved by the introduction of two auxiliary scalar fields \(U\) and \(V\) as Lagrange multipliers in equation (1), resulting in
\[\mathcal{L}=\frac{1}{16\pi G}\left[R\left(1+f(Y)+U\right)+g^{\mu\nu}E_{\mu\nu }\right], \tag{2}\]
in which we have introduced \(E_{\mu\nu}=\partial_{\mu}U\partial_{\nu}X+\partial_{\mu}V\partial_{\nu}Y+V\partial_{\mu}X\partial_{\nu}X\) as a notational shorthand. Hence, by considering \(X,Y,U\) and \(V\) as four independent scalar fields, the action \(S=\int d^{4}x\mathcal{L}\) is regarded as local.
One can observe from (2), obtained after the localization procedure, that non-local terms manifest as effective scalar fields. From a phenomenological point of view, this means that non-local corrections give rise to effective lengths and masses which could alleviate several shortcomings of General Relativity at UV and IR scales, intrinsically related with the regularization and renormalization of the gravitational effective action.
Another important aspect of the model is that the fields \(X,Y,U\) and \(V\) in equation (2) are subject to retarded boundary conditions [31], which require that all the fields and their first time derivatives vanish in an initial value surface. In summary, unless these boundary conditions are satisfied by all the auxiliary scalars fields, unwanted new degrees of freedom would arise, these are known as ghosts (since they have negative kinetic terms) [31]. Hence, retarded boundary conditions will be used throughout our analysis.
Varying the action with respect to each of these fields results in the following set of (constraint) equations
\[R-\Box X =0\,, \tag{3a}\] \[g^{\mu\nu}\partial_{\mu}X\partial_{\nu}X-\Box Y =0\,,\] (3b) \[2D^{\mu}\left(VD_{\mu}X\right)+\Box U =0\,,\] (3c) \[R\frac{\partial f(Y)}{\partial Y}-\Box V =0\,. \tag{3d}\]
The gravitational field equations are obtained by varying the action (2) with respect to the metric \(g^{\mu\nu}\).\({}^{2}\)
Footnote 2: The indices in parenthesis denote the symmetric part \(E_{(\mu\nu)}=\frac{1}{2}\left(E_{\mu\nu}+E_{\nu\mu}\right)\).
\[\left(G_{\mu\nu}-D_{\mu}D_{\nu}+g_{\mu\nu}\Box\right)\left(1+U+f(Y)\right)+E_{ (\mu\nu)}-\frac{1}{2}g_{\mu\nu}g^{\rho\sigma}E_{\rho\sigma}=8\pi GT_{\mu\nu}\,, \tag{4}\]
where the energy-momentum tensor \(T_{\mu\nu}=\left(\rho+p\right)u_{\mu}u_{\nu}+pg_{\mu\nu}\) corresponds to the usual baryonic matter and does not include a dark energy source term.
This non-local model is known to reproduce the current accelerated expansion of the universe without cosmological constant when the _non-local distortion function_\(f(Y)\) satisfies
\[f(Y)\sim e^{1.1(Y+16.7)}\,. \tag{4}\]
This expression is an exponential fit to the numerical solution obtained through the reconstruction process [31], which consists of requiring that the Friedmann equations of General Relativity be satisfied by the DW II model.
Since we wish to analyze some bouncing universes within the DW II model at the perturbative level, which corresponds to examining the behavior of the distortion function \(f(Y)\) through the reconstruction process under the influence of bouncing universes, we shall review aspects of the reconstruction process regarding the (zeroth order perturbative) field equations necessary to obtain (4).
We start the reconstruction procedure by expanding the field equations (3) over the Friedmann-Lemaitre-Robertson-Walker (FLRW) background:
\[ds^{2}=dt^{2}-a^{2}(t)dx_{i}dx^{i}\,. \tag{5}\]
This metric can also be seen as the zeroth order perturbative metric in the newtonian gauge. The d'Alembertian operator acting on a scalar function \(W(t)\), which depends only on time, is written as
\[\Box W(t)=d_{t}^{2}W(t)+3Hd_{t}W(t)\,, \tag{6}\]
where \(H=\frac{\dot{a}}{a}\) is the Hubble parameter. Therefore, the zeroth order field equations (00) and (ij) components (3) are, respectively, given by
\[\left(3H^{2}+3Hd_{t}\right)\left(1+U+f(Y)\right)+\frac{1}{2} \left(\dot{X}\dot{U}+\dot{Y}\dot{V}+V\dot{X}^{2}\right) =8\pi G\rho\,, \tag{7a}\] \[-\left[2\dot{H}+3H^{2}+d_{t}^{2}+2Hd_{t}\right]\left(1+U+f(Y) \right)+\frac{1}{2}\left(\dot{X}\dot{U}+\dot{Y}\dot{V}+V\dot{X}^{2}\right) =8\pi Gp\,. \tag{7b}\]
Furthermore, subtracting the equations (7a) and (7b) we find a differential equation for the function \(F(t)\equiv 1+U(t)+f[Y(t)]\), which is cast as
\[\left[2\dot{H}+6H^{2}+d_{t}^{2}+5Hd_{t}\right]F(t)=8\pi G\left( \rho-p\right)\,. \tag{8}\]
For the reconstruction process analysis, it is convenient to parametrize the (time dependence of the) field equations in terms of the \(e\)-folding time, \(N=\ln a_{0}/a\) (with \(a_{0}=1\)), so that \(f(Y)\) can be solved independently of a particular form of the scale factor 3. Since we want to reconstruct the accelerated expansion, the Friedmann equations from \(\Lambda\)CDM are used as source
Footnote 3: In this case is required only that the universe remains expanding, increasing in size by a factor of \(e\), \(N\) times. In Section 4, on the other hand, we will discuss some bouncing universes, i.e. the collapse and re-expansion.
\[H^{2}=H_{0}^{2}\left(\Omega_{M}e^{3N}+\Omega_{R}e^{4N}+\Omega_{ \Lambda}\right)\,, \tag{9a}\]
\[8\pi G\rho=3H_{0}^{2}\left(\Omega_{R}e^{4N}+\Omega_{M}e^{3N}\right)\,, \tag{9b}\] \[8\pi Gp=3H_{0}^{2}\frac{\Omega_{R}}{3}e^{4N}\,. \tag{9c}\]
The parameters \(\Omega_{M},\Omega_{R}\) and \(\Omega_{\Lambda}\) are respectively the matter, radiation and dark energy fractions of energy density at the present day. Hence, applying the above changes, equation (8) becomes
\[\left[\partial_{N}^{2}+(\epsilon-5)\partial_{N}+(6-2\epsilon)\right]F(N)=\frac {H_{0}^{2}}{H^{2}}\left(3e^{3N}\Omega_{M}+2e^{4N}\Omega_{R}\right)\,, \tag{10}\]
where \(\epsilon=\partial_{N}H/H=-\dot{H}/H^{2}\).
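As an illustration, equation (10) can be integrated numerically as a first-order system; the sketch below uses SciPy, with assumed fiducial density fractions and an assumed starting depth \(N=16\) in the radiation era.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Fiducial densities are our assumption, not values quoted in the text
Om, Or, Ol = 0.31, 9.0e-5, 0.69

def H2(N):   # Eq. (9a) in units of H0^2, with N = ln(a0/a)
    return Om*np.exp(3*N) + Or*np.exp(4*N) + Ol

def eps(N):  # eps = d_N H / H = d_N(H^2) / (2 H^2)
    return (3*Om*np.exp(3*N) + 4*Or*np.exp(4*N)) / (2*H2(N))

def rhs(N, y):  # Eq. (10) rewritten as a first-order system y = (F, F')
    Fv, Fp = y
    src = (3*Om*np.exp(3*N) + 2*Or*np.exp(4*N)) / H2(N)
    return [Fp, src - (eps(N) - 5)*Fp - (6 - 2*eps(N))*Fv]

# integrate from deep in the radiation era (large N) down to today (N = 0);
# retarded boundary conditions set U = f = 0 initially, i.e. F = 1, F' = 0
sol = solve_ivp(rhs, (16.0, 0.0), [1.0, 0.0], dense_output=True, rtol=1e-8)
```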
At last, we turn our attention to the auxiliary field equations (2a)-(2d): we expand them over the background (5) and also write them in the \(e\)-folding time \(N\), yielding
\[12\left(1-\frac{\epsilon}{2}\right)\frac{H}{H_{0}}e^{-3N}+ \partial_{N}\left(\frac{H}{H_{0}}e^{-3N}\partial_{N}X\right) =0\,, \tag{11a}\] \[-\left(\partial_{N}X\right)^{2}\frac{H}{H_{0}}e^{-3N}+ \partial_{N}\left(\frac{H}{H_{0}}e^{-3N}\partial_{N}Y\right) =0\,,\] (11b) \[\partial_{N}U+2V\partial_{N}X =0\,,\] (11c) \[2\partial_{N}^{2}V+2\left(\epsilon-3\right)\partial_{N}V+12\left(2- \epsilon\right)\left(2\frac{\partial_{N}X}{\partial_{N}Y}V+\frac{\partial_{N} F}{\partial_{N}Y}\right) =0\,, \tag{11d}\]
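The same machinery extends to the auxiliary fields; a sketch for \(X\) and \(Y\) (equations (11a)-(11b)), continuing the code above, is given below, with \(U\) and \(V\) following analogously from (11c)-(11d).

```python
# continuing the sketch above: X and Y from Eqs. (11a)-(11b) written as a
# first-order system, after dividing both equations by (H/H0) e^{-3N}
def aux_rhs(N, y):
    X, Xp, Y, Yp = y
    e = eps(N)
    Xpp = (3 - e)*Xp - 6*(2 - e)   # from Eq. (11a)
    Ypp = (3 - e)*Yp + Xp**2       # from Eq. (11b)
    return [Xp, Xpp, Yp, Ypp]

# retarded boundary conditions: all auxiliary fields and their first
# derivatives vanish on the initial surface
aux = solve_ivp(aux_rhs, (16.0, 0.0), [0.0, 0.0, 0.0, 0.0], dense_output=True)
```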
Therefore, having in hand the solutions for \(F\) and \(U\), the non-local distortion function can be numerically obtained through the relation \(f=F-U-1\). The exact solution, which emulates the \(\Lambda\)CDM cosmology, reads
\[f(Y)\approx e^{0.73(Y+16.13)}\,. \tag{12}\]
and the curve fitted to the solution is presented in Figure 1. One can note that our result (12) is not precisely the same as that of the original authors (4) (which may result from the small difference in the initial conditions), but our distortion function also presents the desired exponential growth at the recent epoch.
One last remark is that equations (7a) and (7b) are the zeroth order perturbative field equations (which ultimately led to our eq. (12)); following the analysis developed above, we will calculate the first order perturbative equations in the Newtonian gauge in the next section.
Figure 1: Reconstruction process for the distortion function \(f(Y)\) for the \(\Lambda\)CDM universe.
## 3 Cosmological Perturbation Theory
In this section we will calculate the perturbative field equations for the DW II model, laying the ground for the application of the reconstruction process to bouncing universes. We will consider scalar cosmological perturbations, starting from the perturbative metric in the Newtonian gauge, which is given by
\[ds^{2}=\left(1+2\Psi(\vec{r},t)\right)dt^{2}-\left(1+2\Phi(\vec{r},t)\right)a^{2 }(t)dx^{i}dx_{i}. \tag{10}\]
Here \(\Psi\) and \(\Phi\) are the two gauge invariant perturbation degrees of freedom, also called the Bardeen potentials [6, 7, 47].
The validity of the metric (10) in the description of large scale structures has been extensively discussed in the literature: analytical arguments and computational simulations support that this metric provides a good approximation to the actual metric of the Universe (in the scalar-perturbation sector), encompassing the cosmological FLRW solution and the static Schwarzschild solution; for further details see [47; 24] and references therein.
Let us now present some important results and remarks needed to write down the perturbative field equations, as well as some key aspects related to the non-local contributions. Keeping terms up to first order in the perturbation, one finds the \(00\) and \((ij)\) components of the Ricci tensor
\[R_{00} =a^{-2}\nabla^{2}\Psi+3H\left(\partial_{t}\Psi-2\partial_{t}\Phi \right)-3\dot{H}-3\partial_{t}^{2}\Phi-3H^{2} \tag{11a}\] \[R_{ij} =a^{2}\left[3H^{2}+\dot{H}-H\partial_{t}\left(\Psi-6\Phi\right)+ \partial_{t}^{2}\Phi+2\left(3H^{2}+\dot{H}\right)\left(\Phi-\Psi\right)\right] \delta_{ij}\] \[\quad-\partial_{i}\partial_{j}\left(\Phi+\Psi\right)-\nabla^{2} \Phi\delta_{ij}. \tag{11b}\]
The perturbed Einstein tensor can be cast as
\[G_{00} =3H^{2}+6H\partial_{t}\Phi-2a^{-2}\nabla^{2}\Phi \tag{12a}\] \[G_{ij} =a^{2}\left[-\left(3H^{2}+2\dot{H}\right)\left(1+2\Phi-2\Psi \right)-H\partial_{t}\left(2\Psi+6\Phi\right)-2\partial_{t}^{2}\Phi\right] \delta_{ij}\] \[\quad+\nabla^{2}\left(\Phi+\Psi\right)\delta_{ij}-\partial_{i} \partial_{j}\left(\Phi+\Psi\right), \tag{12b}\]
which can be identified as \(G_{\mu\nu}=\bar{G}_{\mu\nu}+\delta G_{\mu\nu}\), where \(\bar{G}_{\mu\nu}\) is the zero-order part and \(\delta G_{\mu\nu}\) the first-order perturbation. On the other hand, the field equations of the DW II model (3) can be written as \(G_{\mu\nu}+\Delta G_{\mu\nu}=8\pi GT_{\mu\nu}\), in which the symbol \(\Delta\) denotes the non-local correction and must not be confused with the perturbative correction, represented by \(\delta\). Therefore, we find the non-local contribution of the DW II model
\[\Delta G_{\mu\nu}=\left(U+f\left(Y\right)\right)G_{\mu\nu}+\left(g_{\mu\nu} \square-D_{\mu}D_{\nu}\right)\left(U+f\left(Y\right)\right)+E_{\left(\mu\nu \right)}-\frac{1}{2}g_{\mu\nu}g^{\rho\sigma}E_{\rho\sigma}. \tag{13}\]
In order to obtain the perturbative field equations, we decompose the perturbed auxiliary fields \(\left(X,Y,U,V\right)\) into the background term and the perturbation,
\[U(\vec{r},t)=U_{c}\left(t\right)+\delta U\left(\vec{r},t\right) \,,\quad X(\vec{r},t)=X_{c}\left(t\right)+\delta X\left(\vec{r},t\right)\,, \tag{14a}\] \[V(\vec{r},t)=V_{c}\left(t\right)+\delta V\left(\vec{r},t\right) \,,\quad Y(\vec{r},t)=Y_{c}\left(t\right)+\delta Y\left(\vec{r},t\right)\,, \tag{14b}\]
where the subscript \(c\) denotes that the fields are evaluated in the time dependent cosmological background. The spatial dependence of the (perturbed) fields \(X,Y,U\) and \(V\) can be readily
understood from the fact that the perturbative potentials introduced in the metric (10) depend on \(\vec{r}\). Moreover, in our analysis of the perturbed field equations, we will also need the perturbative expression of the d'Alembertian operator, which to first order reads
\[\square X(\vec{r},t) =(1+2\Psi)^{-1}\,\partial_{t}^{2}X-a^{-2}\left(1+2\Phi\right)^{-1} \nabla^{2}X\] \[\quad-\partial_{t}\Psi\partial_{t}X+\left(3H+3\partial_{t}\Phi-6H \Psi\right)\partial_{t}X\] \[\quad+a^{-2}\left[\partial_{x}\left(\Psi+\Phi\right)\partial_{x}X +\partial_{y}\left(\Psi+\Phi\right)\partial_{y}X+\partial_{z}\left(\Psi+\Phi \right)\partial_{z}X\right]. \tag{14}\]
Finally, by replacing the results (14) and (15b) in (13), and after some algebraic manipulations, one can find the perturbed non-local correction of the \(00\) Einstein equation
\[\delta\Delta G_{00} =\left[3\partial_{t}\Phi\partial_{t}+6H\partial_{t}\Phi-\partial_{t}^{2}-2\frac{\nabla^{2}}{a^{2}}\Phi\right]\left(U_{c}+f\right)\] \[\quad+\left[-\frac{\nabla^{2}}{a^{2}}+3H\partial_{t}+3H^{2}\right]\left(\delta U+\frac{df}{dY}\delta Y\right)\] \[\quad+\frac{1}{2}\left(\dot{X}_{c}\dot{\delta U}+\dot{U}_{c}\dot{\delta X}+\dot{Y}_{c}\dot{\delta V}+\dot{V}_{c}\dot{\delta Y}+2V_{c}\dot{X}_{c}\dot{\delta X}+\delta V\dot{X}_{c}^{2}\right). \tag{15}\]
To complete the perturbative field equations we also expand the stress-energy tensor
\[\delta T_{00}=\rho_{c}\frac{\delta\rho}{\rho_{c}}\equiv\rho_{c}\delta \tag{16}\]
where the matter density parameter (also called density contrast) is defined by \(\delta\equiv\frac{\delta\rho}{\rho_{c}}\).
Our perturbative analysis of the reconstruction process (for bouncing universes) takes place in the Fourier space: we consider (spatial) plane wave solutions for the perturbative modes; this implies that the spatial Laplacian operator is rewritten as \(-\nabla^{2}\to k^{2}\). In this approach, we will restrict ourselves to the sub-horizon limit (\(k\gg\dot{a}\) or \(\frac{k}{aH}\gg 1\)), i.e., in which the spatial derivatives are more relevant than the time derivatives. Physically speaking, this means that we are only considering perturbative modes with wavelength \(k^{-1}\) much less than the Hubble distance \((aH)^{-1}\). Hence, in the sub-horizon limit 4, the first-order part of the \(00\) field equation assumes a reduced form
Footnote 4: All expressions henceforth are computed in the sub-horizon limit.
\[2\Phi\left(1+U_{c}+f\right)+\delta U+\frac{df}{dY}\delta Y=8\pi G\frac{a^{2}} {k^{2}}\rho\delta. \tag{17}\]
For the \((ij)\) components, we obtain from (14b) the perturbed Einstein tensor
\[\delta G_{ij}=\left(-a^{2}k^{2}\delta_{ij}+k_{i}k_{j}\right)\left(\Phi+\Psi \right). \tag{18}\]
Furthermore, the expression (18) can be rewritten in a more convenient form by acting with the projection operator \(\left(\frac{k^{i}k^{j}}{k^{2}}-\frac{1}{3}\delta^{ij}\right)\)[47], which yields
\[\left(\frac{k^{i}k^{j}}{k^{2}}-\frac{1}{3}\delta^{ij}\right)\delta G_{ij}= \frac{2}{3}k^{2}\left(\Phi+\Psi\right). \tag{3.11}\]
Therefore, the perturbative expansion of the non-local part of the \((ij)\) components is written as
\[\left(\frac{k^{i}k^{j}}{k^{2}}-\frac{1}{3}\delta^{ij}\right)\delta\Delta G_{ij}= \frac{2}{3}k^{2}\left[\left(U_{c}+f\right)\left(\Phi+\Psi\right)-\left(\delta U+ \frac{df}{dY}\delta Y\right)\right]. \tag{3.12}\]
With the result (3.12) we have concluded the perturbative analysis of the metric part of the \((ij)\) field equations; we now turn our attention to the source term. In our metric signature, the variation of the stress-energy tensor is
\[T^{i}_{\ j}=-p\delta^{i}_{\ j}+\Sigma^{i}_{\ j}, \tag{3.13}\]
where \(\Sigma^{i}_{\ j}\equiv T^{i}_{\ j}-\frac{\delta^{i}_{\ j}}{3}T^{k}_{\ k}\) is the traceless part of the tensor \(T_{ij}\). It is worth mentioning that in the case where the source consists of radiation and non-relativistic matter, we have a vanishing anisotropic stress tensor \(\Sigma^{i}_{\ j}\simeq 0\). Moreover, defining an anisotropic stress \(\sigma\) such that,
\[\left(\rho+p\right)\sigma\equiv\left(\frac{k^{i}k^{j}}{k^{2}}-\frac{1}{3} \delta^{ij}\right)\Sigma_{ij}, \tag{3.14}\]
we find
\[\left(\frac{k^{i}k^{j}}{k^{2}}-\frac{1}{3}\delta^{ij}\right) \delta T_{ij}=\left(\rho+p\right)\sigma. \tag{3.15}\]
With these results, eqs. (3.11), (3.12) and (3.15), one can calculate the longitudinal component of the \((ij)\) field equations, which at leading order in the limit \(k\gg aH\) is given by
\[\frac{2}{3}k^{2}\left(\Phi+\Psi\right)+\frac{2}{3}k^{2}\left[\left(U_{c}+f \right)\left(\Phi+\Psi\right)-\left(\delta U+\frac{df}{dY}\delta Y\right) \right]=8\pi G\left(\rho+p\right)\sigma. \tag{3.16}\]
Finally, in the (late time) epoch when the relativistic contribution is small, we can neglect the contribution coming from the anisotropic stress \(\sigma\approx 0\). Thus, we get
\[\left(\Phi+\Psi\right)\left(1+U_{c}+f\right)-\left(\delta U+\frac{df}{dY} \delta Y\right)=0. \tag{3.17}\]
The equations (3.9) and (3.17) comprise the perturbative (metric) field equations of the DW II non-local gravity, in the sub-horizon limit. However, in order to fully determine the potentials introduced in the metric (3.1) and complete our reconstruction process, it is necessary to obtain the perturbative expansion of the auxiliary scalar fields introduced in the action (2.1).
### Perturbative expansion of the auxiliary fields
We shall now solve the perturbative equations for the auxiliary fields eq. (2.2), which together with the metric field equations (3.9) and (3.17), form the set of six equations for the six undetermined variables \(\left(\delta X,\delta Y,\delta U,\delta V,\Phi,\Psi\right)\). Hence, the full set of first-order perturbed
equations is explicitly written as:
\[8\pi G\frac{a^{2}}{k^{2}}\rho\delta =2\Phi\left(1+U_{c}+f\right)+\delta U+\frac{df}{dY}\delta Y, \tag{3.18a}\] \[\delta U+\frac{df}{dY}\delta Y =\left(\Phi+\Psi\right)\left(1+U_{c}+f\right),\] (3.18b) \[\delta X =-2\left(2\Phi+\Psi\right),\] (3.18c) \[\delta Y =0,\] (3.18d) \[\delta U =2V_{c}\delta X,\] (3.18e) \[\delta V =-2\left(2\Phi+\Psi\right)\frac{\partial f}{\partial Y}. \tag{3.18f}\]
Eliminating \(\delta Y\) and \(\delta U\) in the first two equations results in
\[8\pi G\frac{a^{2}}{k^{2}}\rho\delta =2\Phi\left(1+U_{c}+f\right)-4V_{c}\left(2\Phi+\Psi\right) \tag{3.19a}\] \[-4V_{c}\left(2\Phi+\Psi\right) =\left(\Phi+\Psi\right)\left(1+U_{c}+f\right). \tag{3.19b}\]
Solving algebraically for the potentials \(\Phi\) and \(\Psi\) yields
\[\Phi =\frac{4\pi Ga^{2}\rho\delta}{k^{2}\left(1+f_{c}+U_{c}\right)} \frac{\left(1+f_{c}+U_{c}+4V_{c}\right)}{\left(1+f_{c}+U_{c}+2V_{c}\right)}, \tag{3.20a}\] \[\Psi =-\frac{4\pi Ga^{2}\rho\delta}{k^{2}\left(1+f_{c}+U_{c}\right)} \frac{\left(1+f_{c}+U_{c}+8V_{c}\right)}{\left(1+f_{c}+U_{c}+2V_{c}\right)}, \tag{3.20b}\]
where \(f_{c}\equiv f(Y_{c})\). Hence, we observe that the potentials \(\Phi\) and \(\Psi\) are fully determined in terms of the auxiliary fields evaluated in the cosmological background.
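As a quick consistency check (ours, not part of the original derivation), the linear system (3.19) can be solved symbolically. Writing \(D=1+f_{c}+U_{c}\) and \(S=8\pi Ga^{2}\rho\delta/k^{2}\), the following SymPy sketch reproduces (3.20a) and (3.20b):

```python
import sympy as sp

# Symbols: D = 1 + f_c + U_c, V = V_c, S = 8*pi*G*a^2*rho*delta/k^2.
Phi, Psi, D, V, S = sp.symbols('Phi Psi D V S')

# Eqs. (3.19a) and (3.19b) as a linear system in (Phi, Psi).
eq1 = sp.Eq(S, 2*Phi*D - 4*V*(2*Phi + Psi))
eq2 = sp.Eq(-4*V*(2*Phi + Psi), (Phi + Psi)*D)

sol = sp.solve([eq1, eq2], [Phi, Psi])
print(sp.simplify(sol[Phi]))  # S*(D + 4*V)/(2*D*(D + 2*V))  -> eq. (3.20a)
print(sp.simplify(sol[Psi]))  # -S*(D + 8*V)/(2*D*(D + 2*V)) -> eq. (3.20b)
```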
For the analysis of the matter density perturbation (discussed below), it is convenient to separate the matter and the radiation contributions to the energy density, i.e. \(\rho=\rho_{R}+\rho_{M}\), where
\[\rho_{R} =\rho_{0R}a^{-4} \tag{3.21a}\] \[\rho_{M} =\rho_{0M}a^{-3}. \tag{3.21b}\]
Thus, we can rewrite equations (3.20a) and (3.20b) in terms of the density parameter \(\Omega^{0}\equiv\frac{8\pi G}{3H_{0}^{2}}\rho_{0}\),
\[\Phi =\frac{3H_{0}^{2}\left(\Omega_{R}^{0}a^{-2}\delta_{R}+ \Omega_{M}^{0}a^{-1}\delta_{M}\right)}{2k^{2}\left(1+f_{c}+U_{c} \right)}\frac{\left(1+f_{c}+U_{c}+4V_{c}\right)}{\left(1+f_{c}+U_{c}+2V_{c} \right)}, \tag{3.22a}\] \[\Psi =-\frac{3H_{0}^{2}\left(\Omega_{R}^{0}a^{-2}\delta_{R}+ \Omega_{M}^{0}a^{-1}\delta_{M}\right)}{2k^{2}\left(1+f_{c}+U_{c} \right)}\frac{\left(1+f_{c}+U_{c}+8V_{c}\right)}{\left(1+f_{c}+U_{c}+2V_{c} \right)}. \tag{3.22b}\]
As discussed above, in the sub-horizon limit \(k\gg aH\) the non-relativistic matter contribution is more relevant than that of radiation. Therefore, the potentials are expressed solely in terms of the matter density perturbation
\[\Phi =\frac{3H_{0}^{2}}{2ak^{2}}\frac{\left(1+f_{c}+U_{c}+4V_{c} \right)}{\left(1+f_{c}+U_{c}+2V_{c}\right)}\frac{\Omega_{M}^{0} \delta_{M}}{\left(1+f_{c}+U_{c}\right)}, \tag{3.23a}\] \[\Psi =-\frac{3H_{0}^{2}}{2ak^{2}}\frac{\left(1+f_{c}+U_{c}+8V_{c} \right)}{\left(1+f_{c}+U_{c}+2V_{c}\right)}\frac{\Omega_{M}^{0} \delta_{M}}{\left(1+f_{c}+U_{c}\right)}. \tag{3.23b}\]
Some remarks about our results for the potentials (3.23a) and (3.23b) are in order: once the matter density contrast \(\delta_{M}\) is obtained, the potentials \(\Phi\) and \(\Psi\) are determined. In addition, we shall use our result for \(\delta_{M}\) to compare with \(f\sigma_{8}\) data. Finally, we will discuss the effects of bouncing universes on the profile of the matter density contrast \(\delta_{M}\), and consequently on the potentials. These aspects are analyzed in the next sections.
One last remark is that our expressions are similar to those found in [35], except for a small difference: the term \(2V_{c}\) in the denominator appears as \(6V_{c}\) in equations (35) and (37) of [35].
### Structural Growth in Non-local Expanding Universe
In order to examine the implications of bouncing universes in non-local gravity models for structure formation, we must first determine the solution for the density contrast \(\delta_{M}\). With this motivation, we establish here the differential equation for the matter density contrast \(\delta_{M}\) and obtain its numerical solution.
The differential equation for the matter density contrast \(\delta_{M}\) can be obtained by using the conservation law for the stress-energy tensor, \(D_{\mu}T^{\mu}_{\ \nu}=0\). Consider the perturbation of the perfect fluid stress-energy tensor [48],
\[T^{0}_{\ 0} =\rho_{c}+\delta\rho, \tag{3.24a}\] \[T^{0}_{\ i} =\left(\rho_{c}+p_{c}\right)v_{i}=-a^{2}T^{i}_{\ 0},\] (3.24b) \[T^{j}_{\ i} =-\left(p_{c}+\delta p\right)\delta^{j}_{\ i}+\Sigma^{j}_{\ i}, \tag{3.24c}\]
where \(v_{i}\equiv dx_{i}/d\tau\) is the coordinate velocity. The \(\nu=0\) component of the conservation law, at first-order approximation, provides
\[\dot{\delta\rho}-a^{-2}\partial_{j}\left(\rho_{c}+p_{c}\right)v^{j}+3H\left( \delta\rho+\delta p\right)+3\dot{\Phi}\left(\rho_{c}+p_{c}\right)=0. \tag{3.25}\]
Moreover, using the equation of state \(p_{c}=w\rho_{c}\), as well as the fluid sound speed \(\dfrac{\delta p}{\delta\rho}=c_{s}^{2}\), we obtain
\[\dot{\delta}+\dfrac{\dot{\rho_{c}}}{\rho_{c}}\delta+\left(3\dot{\Phi}-a^{-2} \vec{\nabla}\cdot\vec{v}\right)\left(1+w\right)+3H\left(1+c_{s}^{2}\right) \delta=0. \tag{3.26}\]
We can also use the zeroth order equation, \(\dot{\rho_{c}}=-3H\left(\rho_{c}+p_{c}\right)=-3H\left(1+w\right)\rho_{c}\), to simplify the relation (3.26) as
\[\dot{\delta}+\left(3\dot{\Phi}-a^{-2}\vec{\nabla}\cdot\vec{v}\right)\left(1+w \right)+3H\left(c_{s}^{2}-w\right)\delta=0. \tag{3.27}\]
On the other hand, for the spatial components \(\nu=i\) it reads
\[\dot{v}_{i}=3H\dfrac{\dot{p}_{c}}{\dot{\rho}_{c}}v_{i}+\dfrac{\partial_{i} \delta p}{\left(p_{c}+\rho_{c}\right)}+\partial_{i}\Psi. \tag{3.28}\]
This is the Euler equation for an ideal fluid in comoving coordinates.
Some important remarks about (3.28) are in order: since we wish to analyze the matter perturbation of the universe, the term \(\dfrac{\dot{p}_{c}}{\dot{\rho}_{c}}=c_{s}^{2}\), which corresponds to relativistic corrections to the fluid velocity, is negligible. Furthermore, the gradient of the pressure fluctuations, \(\vec{\nabla}\delta p\), does not contribute to the matter content in the sub-horizon limit [6, 7]. This model also considers null pressure in the absence of perturbations, characterized by \(w=0\). Taking these considerations into account, the only relevant terms of equations (3.27) and (3.28) are
\[\dot{\delta}_{M} =a^{-2}\vec{\nabla}\cdot\vec{v}-3\dot{\Phi}, \tag{3.29a}\] \[\dot{\vec{v}} =\vec{\nabla}\Psi. \tag{3.29b}\]
We can rewrite the above expressions in a more suitable form by differentiating the first equation with respect to the time and applying the divergence into the second one, so that
\[\ddot{\delta}_{M} =a^{-2}\vec{\nabla}\cdot\dot{\vec{v}}-2Ha^{-2}\vec{\nabla}\cdot \vec{v}-3\ddot{\Phi}, \tag{3.30a}\] \[\vec{\nabla}\cdot\dot{\vec{v}} =\nabla^{2}\Psi. \tag{3.30b}\]
At last, at the sub-horizon scale (\(k\gg aH\)) it is known that \(\Phi\sim\cos\left(\frac{k}{aH}\right)\), so \(\dot{\Phi}\approx\ddot{\Phi}\approx 0\)[6; 7]. In this limit, we can eliminate \(\vec{v}\) in (3.30a) and find a differential equation for the matter density contrast. Therefore, in Fourier space we have
\[\ddot{\delta}_{M}+2H\dot{\delta}_{M}=-\frac{k^{2}}{a^{2}}\Psi. \tag{3.31}\]
In order to conclude the current analysis, it is convenient to write equation (3.31) in terms of the \(e\)-folding time \(N=\ln(a_{0}/a)\) and to substitute \(\Psi\) by the expression (3.23b), which results in
\[\delta^{\prime\prime}_{M}+\left(2+\epsilon\right)\delta^{\prime}_{M}-\frac{3H _{0}^{2}e^{3N}}{2H^{2}}\Omega^{0}_{M}\frac{\left(1+f+U_{c}+8V_{c}\right)}{ \left(1+f+U_{c}\right)\left(1+f+U_{c}+2V_{c}\right)}\delta_{M}=0, \tag{3.32}\]
in which \(\epsilon=H^{\prime}/H\) and the prime denotes differentiation with respect to \(N\). An interesting point is that this equation is \(k\)-independent, depending only on the cosmological \(e\)-folding time \(N\).
The product of the structural growth rate \(f=\partial_{N}\ln[\delta_{M}]\)5 and the amplitude of matter fluctuations in spheres of \(8h^{-1}\)Mpc, \(\sigma_{8}=\sigma_{8}^{0}\frac{\delta_{M}(N)}{\delta_{M}(0)}\), is a physical observable related to the density contrast [6; 7]. The value of the constant \(\sigma_{8}^{0}=0.811\) has recently been constrained by the Planck satellite observations [49]. The numerical solution of \(f\sigma_{8}\) in terms of the redshift \(z\), for the DW II and \(\Lambda\)CDM models, is presented in Figure 2 and compared with observational data [35]. We find that the \(\Lambda\)CDM solution is in good agreement with the data [49], and that the linear perturbation theory of the DW II model behaves regularly: it is reliable and self-consistent as a whole. Furthermore, unlike the result reported in [35], there is no divergence in this observable for \(0<z<2\). Although the \(\Lambda\)CDM model seems to fit the data better than DW II, the non-local model cannot be ruled out by the observation of the growth rate \(f\sigma_{8}\) in this redshift range. It is important to remark that once the background is fixed to reproduce \(\Lambda\)CDM, no free parameters are left to adjust to the RSD data.

Figure 2: Comparison between the DW II and \(\Lambda\)CDM results for the \(f\sigma_{8}\) observable as a function of the cosmological redshift \(z\).
Footnote 5: Not to be confused with the distortion function \(f(Y)\).
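To make the numerical procedure concrete, the following minimal sketch (ours, with simplifying assumptions that are not from the paper) integrates the growth equation for a \(\Lambda\)CDM background with the non-local factor \((1+f+U_{c}+8V_{c})/[(1+f+U_{c})(1+f+U_{c}+2V_{c})]\) set to its GR value of unity; in the full DW II computation that factor is evaluated along the reconstructed background fields. We integrate in \(x=\ln a\) rather than \(N=\ln(a_{0}/a)\) to avoid sign bookkeeping.

```python
import numpy as np
from scipy.integrate import solve_ivp

Om, OL, sigma8_0 = 0.315, 0.685, 0.811   # illustrative LambdaCDM parameters

def E2(a):                               # (H/H0)^2 for this background
    return Om * a**-3 + OL

def rhs(x, y):                           # x = ln(a), y = (delta, d delta/dln a)
    a = np.exp(x)
    dlnH = -1.5 * Om * a**-3 / E2(a)     # dln(H)/dln(a)
    delta, ddelta = y
    # delta'' + (2 + dlnH) delta' = (3/2) Om(a) delta, non-local factor -> 1
    return [ddelta, -(2.0 + dlnH) * ddelta + 1.5 * Om * a**-3 / E2(a) * delta]

x = np.linspace(np.log(1e-3), 0.0, 2000)
sol = solve_ivp(rhs, (x[0], x[-1]), [1e-3, 1e-3], t_eval=x, rtol=1e-8)
delta, ddelta = sol.y
f = ddelta / delta                            # growth rate dln(delta)/dln(a)
fsigma8 = f * sigma8_0 * delta / delta[-1]    # sigma8(z) = sigma8_0 * delta/delta(0)
z = np.exp(-x) - 1.0
```

Replacing the unit factor by the reconstructed non-local combination along the background trajectory would produce the DW II curve of Figure 2 under the same pipeline.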
## 4 Structural Growth in Bouncing Cosmology
We have finally reached our main analysis, which consists in examining early time perturbations for bouncing universes. Our group has recently studied bouncing models in the context of non-local DW II cosmology [30], where we found that the reconstruction procedure generates physically consistent solutions for the distortion function for the following bouncing solutions: symmetric bounce, oscillatory bounce, matter bounce, finite time singularity model and pre-inflationary asymmetric bounce. Since the previous study was performed at the level of background cosmology, we seek to examine these models at the perturbative level in order to further restrict the physically relevant models.
We next present a brief review of each bouncing scenario; the scenarios are depicted in Fig. 3:
1. The symmetric bounce model was initially proposed in the study of \(F(R)\) gravity [36; 37; 38], since it generates a non-singular bounce and can be connected to the late time accelerated expansion. It was also studied in the context of non-local DW I gravity [50] and in \(f(T,B)\) gravity [51].
2. The oscillatory bounce arises from the quasi-steady state cosmology [9; 36; 39], which was proposed as an alternative to standard cosmology. The oscillatory pattern of the scale factor was introduced to reproduce the cyclic interchange between the domination of the cosmological constant and a scalar field with negative energy that creates particles.
3. The matter bounce emerged in the context of loop quantum cosmology (LQC) [40; 41] and its scale factor satisfies the effective equations of LQC in the classical limit, for a dust-dominated universe. These effective equations take into account corrections due to quantum geometry in the usual Friedmann equations of general relativity [40; 41; 51].
4. The bounce that generates finite time singularities is a more general exponential bounce than the symmetric bounce and was originally proposed to discuss the generation of singularities in the evolution of the universe [42; 43; 44; 45; 51]. This model depends on the choice of the parameter \(\alpha\): if \(\alpha\) is chosen equal to \(1\), it corresponds to the symmetric bounce. Moreover, the choice \(\alpha=0\) implies that the scale factor \(a\) grows exponentially in time (de Sitter universe). Here we consider the case \(\alpha>1\), such that the scale factor and the effective energy density remain finite for every \(t\).
5. In the pre-inflationary scenario, recently proposed in the \(f(R)\) modified gravity [46], the universe contracts until it reaches a minimum size and expands slowly entering a quasi de Sitter inflationary era. After that, the universe starts to contract again and the scale factor tends to zero. The motivation of this form of scale factor is that it avoids the cosmic singularity and approximately satisfies the String Theory scale factor duality condition \(a(t)=a^{-1}(-t)\).
The reconstruction procedure to encode the bouncing effects is analogous to the one previously developed in Section 2: first we solve the equations for the auxiliary fields \(X,Y,U,V,F\), then determine \(f(t)\) from \(f=F-1-U\). However, in this analysis we look for a vacuum solution that reconstructs the bouncing evolution. Thus, the differential equation for \(F\) given by (2.8) now becomes
\[\left[2\dot{H}+6H^{2}+d_{t}^{2}+5Hd_{t}\right]F(t)=0, \tag{4.1}\]
which together with the solutions \(f(t)\) and \(Y(t)\) are used to obtain \(f(Y)\).
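For illustration, the reconstruction step (4.1) can be integrated numerically once a bouncing scale factor is chosen. The sketch below (ours) assumes the commonly used symmetric-bounce parametrisation \(a(t)=e^{\alpha t^{2}}\), so that \(H=2\alpha t\) and \(\dot{H}=2\alpha\); the value \(\alpha=1\) and the initial data are arbitrary choices for the example.

```python
from scipy.integrate import solve_ivp

alpha = 1.0                      # illustrative value, not from the paper

def rhs(t, y):                   # y = (F, Fdot); eq. (4.1)
    H, Hdot = 2 * alpha * t, 2 * alpha
    F, Fdot = y
    return [Fdot, -5 * H * Fdot - (2 * Hdot + 6 * H**2) * F]

sol = solve_ivp(rhs, (-5.0, 5.0), [1.0, 0.0], dense_output=True, rtol=1e-8)
# With U(t) solved on the same background, f(t) = F(t) - 1 - U(t), and
# eliminating t against Y(t) gives the distortion function f(Y).
```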
Figure 3: The scale factor \(a(t)\) for five different bouncing models: symmetric bounce, oscillatory bounce, matter bounce, finite time singularity model and pre-inflationary asymmetric bounce, respectively. In (c) the parameter \(\rho_{c}\ll 1\) is the critical density and \(\alpha>1\) in (d).

On the other hand, given our purpose of studying bouncing universes, the differential equation (3.32) for the density contrast \(\delta_{M}\) is recast in terms of the time variable (instead of the \(e\)-folding time) as
\[\ddot{\delta}(t)+2H\dot{\delta}(t)-\frac{3H_{0}^{2}(F_{c}+8V_{c})\Omega_{M}}{2a^{3 }H^{2}(F_{c}(F_{c}+2V_{c}))}\delta(t)=0\,, \tag{4.2}\]
where \(a(t)\) now assumes a different form for each bouncing cosmology (see Fig. 3) and the subscript \(c\) denotes that all the fields are only time dependent, since they are evaluated at the cosmological background.
As before, expression (4.2) can be numerically solved for each bouncing scenario and used to evaluate the observable \(f\sigma_{8}\). We present the solution for each bouncing universe in terms of the cosmological redshift \(z\) in Figure 4. Given the distinct behavior observed in the growth of structures across various bouncing cosmology scenarios, it is important to provide some remarks about our findings.
* Our calculation shows that in the symmetric bounce, matter bounce and finite time singularities, the observable \(f\sigma_{8}\) has a growing pattern near the bouncing point (large \(z\)). This is a physically undesired effect since \(f\sigma_{8}\) is a measure of the growth rate of matter perturbation in the early epoch and should approach a finite value as seen in the observational data depicted in Figure 2.
* On the other hand, in the cases of oscillatory bounce and asymmetrical bounce, the matter fluctuations are very small at the bouncing point, as \(f\sigma_{8}\) approaches zero for large redshift \(z\), and the density contrast begins to grow immediately after the onset of expansion. This behavior may be regarded as the seeds, or fluctuations, that contribute to the formation of large scale structures in the universe. Thus, for an endlessly oscillating universe or a universe that starts with asymmetry, we can understand the origins of these fluctuations as non-local effects.
This analysis showed that the formation of the biggest structures currently observed in the universe cannot be described by the non-local Deser-Woodard II model in the case of the symmetric bounce, the bounce generated by a critical matter density, and the exponential bounce singular at a finite time. In contrast, universes with oscillatory and pre-inflationary bounces may accomplish the formation of clusters of galaxies in the framework of the DW II model. Therefore, eternal universes with contractions and expansions in a non-local gravity model seem to be a better choice for describing structure formation than models with a single minimum point.
## 5 Conclusions
In this paper we presented a comprehensive discussion of the formation and growth of structures in the non-local Deser-Woodard II model in different bouncing cosmology scenarios. Initially, we revised the reconstruction process for the DW II model as well as the perturbation theory of the field equations. Next, we analyzed the perturbed non-local DW II model and its implications for some bouncing cosmologies, searching for physically acceptable solutions of the \(f\sigma_{8}\) observable.
We began by revising the reconstruction process of the distortion function \(f(Y)\) for the case of an accelerating expanding universe. During this analysis, we identified a small difference in the parameters of the exponential fit compared to those reported by the original authors [31]. This small difference is due to our choice of the right-hand side of equation (2.10) being derived directly from equation (2.9b) (compare it with equation (29) of [31]). Although the parameters are different, the desired exponential growth is ensured in our solution for \(f(Y)\); see equation (2.12).
Furthermore, we analyzed the growth of structures in the universe by considering cosmological perturbations of the DW II model in the Newtonian gauge. All field equations were expanded over a spatially flat FLRW cosmological background (time dependent only), with small spatially dependent scalar perturbations. Naturally, the perfect fluid deviations of the stress-energy tensor were also included. The field equations were evaluated in the sub-horizon limit, which provides a suitable way to study the matter density fluctuations. Our analysis shows that the structure growth rate \(f\sigma_{8}\) is finite in the redshift range \(0<z<2\), showing that the linear perturbation theory of the DW II model behaves regularly and is reliable and self-consistent as a whole. As we can see in Figure 2, the experimental data favor the \(\Lambda\)CDM model while differing from the non-local DW II model curve.

Figure 4: The evolution of \(f\sigma_{8}\) in terms of the redshift for five different bouncing models: symmetric bounce, oscillatory bounce, matter bounce, finite time singularity model and pre-inflationary asymmetric bounce, respectively.
As a complementary analysis, we examined the formation of large scale structures through early time perturbations for different bouncing universes. This interest was motivated by our previous results on the physical viability of some bouncing cosmologies in the DW II model [30]. Our analysis shows that structure formation cannot be described by the DW II model in the case of the symmetric bounce, the matter bounce and the finite time singularity universe, as the observable \(f\sigma_{8}\) presents an undesirable growing pattern near the bouncing point. On the other hand, the bouncing models with oscillations and the pre-inflationary bounce presented physically acceptable behavior for the observable \(f\sigma_{8}\). Thus, universes with successive contractions and expansions allow a description of the formation of large structures in terms of non-local phenomena, instead of models with a single and finite bounce, at least in the particular framework that we have discussed.
## Acknowledgments

This study was financed in part by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - Brasil (CAPES) - Finance Code 001. R.B. acknowledges partial support from Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq Project No. 306769/2022-0).
|
2309.02662 | Subsethood Measures of Spatial Granules | Subsethood, which is to measure the degree of set inclusion relation, is
predominant in fuzzy set theory. This paper introduces some basic concepts of
spatial granules, coarse-fine relation, and operations like meet, join,
quotient meet and quotient join. All the atomic granules can be hierarchized by
set-inclusion relation and all the granules can be hierarchized by coarse-fine
relation. Viewing an information system from the micro and the macro
perspectives, we can get a micro knowledge space and a macro knowledge space,
from which a rough set model and a spatial rough granule model are respectively
obtained. The classical rough set model is the special case of the rough set
model induced from the micro knowledge space, while the spatial rough granule
model will play a pivotal role in the problem-solving of structures. These
discuss twelve axioms of monotone increasing subsethood and twelve
corresponding axioms of monotone decreasing supsethood, and generalize
subsethood and supsethood to conditional granularity and conditional fineness
respectively. We develop five conditional granularity measures and five
conditional fineness measures and prove that each conditional granularity or
fineness measure satisfies its corresponding twelve axioms although its
subsethood or supsethood measure only holds one of the two boundary conditions.
We further define five conditional granularity entropies and five conditional
fineness entropies respectively, and each entropy only satisfies part of the
boundary conditions but all the ten monotone conditions. | Liquan Zhao, Yiyu Yao | 2023-09-06T02:14:53Z | http://arxiv.org/abs/2309.02662v1 | # Subsethood Measures of Spatial Granules
###### Abstract
Subsethood, which measures the degree of the set-inclusion relation, is predominant in fuzzy set theory. This paper introduces some basic concepts of spatial granules, the coarse-fine relation, and operations like meet, join, quotient meet and quotient join. All the atomic granules can be hierarchized by the set-inclusion relation and all the granules can be hierarchized by the coarse-fine relation. Viewing an information system from the micro and the macro perspectives, we can get a micro knowledge space and a macro knowledge space, from which a rough set model and a spatial rough granule model are respectively obtained. The classical rough set model is a special case of the rough set model induced from the micro knowledge space, while the spatial rough granule model will play a pivotal role in the problem-solving of structures. We discuss twelve axioms of monotone increasing subsethood and twelve corresponding axioms of monotone decreasing supsethood, and generalize subsethood and supsethood to conditional granularity and conditional fineness respectively. We develop five conditional granularity measures and five conditional fineness measures and prove that each conditional granularity or fineness measure satisfies its corresponding twelve axioms although its subsethood or supsethood measure only holds one of the two boundary conditions. We further define five conditional granularity entropies and five conditional fineness entropies respectively, and each entropy only satisfies part of the boundary conditions but all the ten monotone conditions.
Subsethood, supsethood, fuzzy set, rough set, granularity, fineness, conditional granularity, conditional fineness, conditional granularity entropy, conditional fineness entropy.
## I Introduction
Subsethood was first used to measure fuzzy sets; it is denoted by a bivalent function that shows the degree to which a fuzzy set is a subset of another fuzzy set [1, 2, 3, 4, 5]. Kosko [5, 6, 7, 8] generalized this concept and defined a multivalent subsethood measure. Subsethood has drawn the attention of many scholars, who related subsethood to entropy [9, 10, 11, 5], distance measures [12, 13, 14], similarity measures [15, 16, 17, 14] and logical implication [18, 19, 20, 21, 22, 23]. Most subsethood studies focus on fuzzy sets and there are only a few of them in rough sets. What's more, these studies mainly discussed the desired properties of subsethood measures or weak subsethood measures and paid little attention to the construction of specific measures. Yao and Deng [24] constructed subsethood measures of two sets based on two views: one is different equivalent expressions of the condition \(A\subseteq B\), and the other is the grouping of objects based on the two sets \(A\) and \(B\). When applied to rough sets, subsethood shows the graded set-inclusion relation of different sets; such measures are quantitative generalizations of the set-inclusion relation and can be used to distinguish sets of the same size to some degree.
A partition is the simplest granulation scheme and hence measurement of partitions has been proposed and studied. Yao and Zhao [25] divide these measures into two classes: information-theoretic measures and interaction-based measures. Hartley entropy and Shannon entropy are typical representatives of information-theoretic measures. Although Hartley entropy coincides with Shannon entropy in the case of a uniform probability distribution, Klir and Folger [26] pointed out that there are semantic differences between them: Shannon entropy is a measure of information induced by a probability distribution, while Hartley entropy is a measure of the nonspecificity of a finite set. Their uses as measures of the granularity of partitions were suggested and examined in [25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42]. Interaction-based measures count the number of interacting pairs of elements of a universal set under a partition. Each pair in the equivalence relation is counted as one interaction, and the size of the equivalence relation denotes the total number of interactions. Miao and Fan [43] first defined an interaction-based measure of the granularity of a partition, which may be interpreted as a normalized cardinality of an equivalence relation. Many authors studied this measure and extended it [25, 31, 32, 33, 34, 39, 44]. However, the extensions mainly focus on non-equivalence relations.
Granular computing (GrC) is not an algorithm or a process but an idea, and, in fact, this idea has permeated every computing theory since the very beginning. The definition or construction of information granules is one of the basic issues of GrC. According to the Merriam-Webster dictionary, the word "granule" has two meanings: one is a small particle, and the other is one of numerous particles forming a larger unit. People generally choose its first meaning, that is, a granule is defined as a simple crisp or fuzzy set. Zhao [45, 46] first introduced its second meaning as the general definition of granules, and extended partitions to equivalence granules and the finite set to the infinite set as well. He considers a granule to be made up of one or more atomic granules, which are indivisible under the given subdivision rule. However, these atomic granules may be divisible under finer subdivision rules; that is to say, whether an atomic granule is divisible or not is relative. There are structural and non-structural relationships between the atomic granules. This is a structural definition which can show the spatiality of a granule, and granules defined in this way are called spatial granules to distinguish them from granules defined in the previous way.
The contributions and organization of this paper are as follows:
In Section II, we introduce the basic notions of granules and the coarse-fine relation, which generalizes the set-inclusion relation, together with operations like meet, join, quotient meet and quotient join, which generalize intersection and union. All the atomic granules can be hierarchized by the set-inclusion relation, and all the granules can be hierarchized by the coarse-fine relation. Given an information system, performing micro and macro granular analysis on it generates a micro knowledge space and a macro knowledge space, from which a rough set model and a spatial rough granule model are respectively induced. The rough set model can be used for incomplete and complete information systems on any domain, and the classical rough set model is a special case of it. The coarse-fine relation is the key to the success of hierarchical machine learning algorithms, and the spatial rough granule model will play a very important role in structural problem solving. All the atomic granules can be hierarchized in a plane by the set-inclusion relation, and all granules can be hierarchized in an \(n\)-dimensional space by the coarse-fine relation.
In Section III, we discuss twelve properties of monotonically increasing subsethood and twelve corresponding properties of monotonically decreasing supsethood, not only for atomic granules but also for granules; the properties can be divided into two classes: boundary conditions and monotone conditions. The five monotonically increasing subsethood measures satisfy only one of the two boundary conditions but all ten monotone conditions. We construct five monotonically decreasing supsethood measures for atomic granules, and each one satisfies one or both of the boundary conditions and the ten monotonically decreasing conditions. Conditional granularity and conditional fineness are introduced to measure the coarse-fine relation between two granules. Conditional granularity is defined as the expectation of the monotonically increasing subsethood of atomic granules with respect to the probability distribution of the meet of the two granules, and conditional fineness is defined as the expectation of the monotonically decreasing supsethood of atomic granules with respect to the probability distribution of the meet of the two granules. We construct five conditional granularity measures and five conditional fineness measures and prove that each measure satisfies its corresponding twelve properties. Conditional granularity entropy and conditional fineness entropy are defined by their corresponding subsethood and the probability distribution of the meet of the two granules, where the five conditional granularity entropies satisfy part of the boundary conditions and the ten monotonically increasing conditions, and the five conditional fineness entropies satisfy part of the boundary conditions and the ten monotonically decreasing conditions.
## II A Model of Spatial Granules
### _Preliminaries_
Given a universe of discourse \(X=\{x_{1},\cdots,x_{n}\}\), the granules and the binary relations on \(X\) are in one-to-one correspondence, where the granules corresponding to fuzzy equivalence relations are called fuzzy equivalence granules and the granules corresponding to equivalence relations are called equivalence granules. Each equivalence granule is a partition of a subset of \(X\); in particular, a partition of \(X\) is also called a quotient granule on \(X\). For the sake of simplicity, we only discuss equivalence granules in this paper, that is, the atomic granules of a granule are its equivalence classes.
Assume \(A\) and \(B\) are two subsets of \(X\), \(R_{A}\) and \(R_{B}\) are equivalence relations on \(A\) and \(B\) respectively, and the equivalence granules corresponding to \(R_{A}\) and \(R_{B}\) are \(A_{R}=\{a_{1},\cdots,a_{k}\}\) and \(B_{R}=\{b_{1},\cdots,b_{l}\}\) respectively. For convenience, \(A_{R}\) can also be denoted by \(A\), and we say granule \(A\) or set \(A\) to distinguish the two readings and avoid ambiguity; that is, the granule \(A\) is a partition of the set \(A\). The operations of meet, join, quotient meet and quotient join are respectively defined as follows:
**Definition II.1**.:
1. \(A\wedge B\) _is called the meet of_ \(A\) _and_ \(B\)_, which is the granule corresponding to_ \(R_{A}\cap R_{B}\)_;_
2. \(A\lor B\) _is called the join of_ \(A\) _and_ \(B\)_, which is the granule corresponding to_ \(R_{A}\cup R_{B}\)_;_
3. \(A\wedge_{t}B\) _is called the quotient meet of_ \(A\) _and_ \(B\)_, which is the granule corresponding to_ \(t(R_{A}\cap R_{B})\)_, the transitive closure of_ \(R_{A}\cap R_{B}\)_;_
4. \(A\lor_{t}B\) _is called the quotient join of_ \(A\) _and_ \(B\)_, which is the granule corresponding to_ \(t(R_{A}\cup R_{B})\)_, the transitive closure of_ \(R_{A}\cup R_{B}\)_._
Here the quotient meet and quotient join operations are for (fuzzy) equivalence granules, while the meet and join operations are for other granules. Obviously, for equivalence granules, the quotient meet is the same as the meet, but the join and quotient join are different.
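To make these operations concrete, here is a small sketch (our own representation, not from the paper): an equivalence granule is stored as a list of disjoint frozensets, its atomic granules. The meet is the common refinement, and the quotient join merges overlapping blocks, realising the transitive closure of \(R_{A}\cup R_{B}\).

```python
from itertools import product

def meet(A, B):
    """Meet A ^ B: nonempty pairwise intersections of blocks (common refinement)."""
    return [a & b for a, b in product(A, B) if a & b]

def quotient_join(A, B):
    """Quotient join: blocks of the transitive closure of R_A union R_B,
    i.e. repeatedly merge any two blocks of A and B that overlap."""
    blocks = [set(b) for b in A + B]
    merged = True
    while merged:
        merged = False
        for i in range(len(blocks)):
            for j in range(i + 1, len(blocks)):
                if blocks[i] & blocks[j]:
                    blocks[i] |= blocks.pop(j)
                    merged = True
                    break
            if merged:
                break
    return [frozenset(b) for b in blocks]

A = [frozenset({1, 2}), frozenset({3, 4})]
B = [frozenset({2, 3})]
print(meet(A, B))           # [frozenset({2}), frozenset({3})]
print(quotient_join(A, B))  # [frozenset({1, 2, 3, 4})]
```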
### _Rough Set Model in Micro Knowledge Space_
Given an information system \(I=(X,\boldsymbol{R})\), where \(\boldsymbol{R}=\{R_{1},\cdots,R_{m}\}\) is a family of equivalence relations on subsets of \(X=\{x_{1},\cdots,x_{n}\}\). This information system can be viewed from the micro and the macro perspectives respectively. From the micro perspective, we think about all the subsets of \(X\), denoted as \(\sigma(X)\). \((\sigma(X),\supseteq)\) is a complete lattice, and all the elements in \(\sigma(X)\) can be hierarchized under set inclusion relation.
Assume the equivalence granules corresponding to \(R_{i}\) are \(P_{i}(i=1,\cdots,m)\), respectively, \(R\) is the intersection of all \(R_{i}(i=1,\cdots,m)\), and \(P\) is the quotient meet of all \(P_{i}(i=1,\cdots,m)\). For any \(A\in\sigma(X)\), \(A\) is called \(R\)-definable if it is one of the equivalence classes in \(P\) or a union of two or more equivalence classes in \(P\). Assume \(d(\sigma(X))\) is the family of all definable sets in \(\sigma(X)\) and \(d_{0}(\sigma(X))\) is the family of the empty set and all definable sets. Then \((d_{0}(\sigma(X)),\supseteq)\) is a complete bounded sublattice of \((\sigma(X),\supseteq)\), and \(d_{0}(\sigma(X))\), which is closed under union and intersection operations, is called the micro knowledge space generated from \(I=(X,\boldsymbol{R})\). Therefore, \(\sigma(X)\) can be divided into two categories: \(d(\sigma(X))\) and \(\widetilde{d}(\sigma(X))\), i.e., the family of all undefinable sets. By rough set theory, \(\widetilde{d}(\sigma(X))\) can be further divided into \(d_{r}(\sigma(X))\), i.e., the set of roughly definable sets, and \(\widetilde{d}_{r}(\sigma(X))\), i.e., the set of roughly or totally undefinable sets.
**Definition II.2**.: _For any \(A\in\sigma(X)\), the lower and upper approximations of \(A\) with respect to \(R\) can be defined as: for
every \(B\in d(\sigma(X))\),_
\[\underline{R}(A) =\bigcup\{A\cap B\mid A\supseteq B\},\] \[\overline{R}(A) =\bigcap\{A\cup B\mid B\supseteq A\}. \tag{1}\]
Obviously, for any \(A\in\sigma(X)\), its upper approximation is to find its least upper bound in \(d_{0}(\sigma(X))\), and its lower approximation is to find the greatest lower bound in \(d_{0}(\sigma(X))\). \((\underline{R}(A),\overline{R}(A))\) is called an approximation space of \(A\).
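A minimal sketch of Definition II.2 in the same toy representation (function names are ours); \(d_{0}\) is the family of definable sets, each a frozenset, together with the empty set:

```python
def lower_approx(A, d0):
    """Greatest definable lower bound: union of all definable B contained in A."""
    return frozenset().union(*[B for B in d0 if B <= A])

def upper_approx(A, d0):
    """Least definable upper bound: intersection of all A | B with B containing A."""
    caps = [A | B for B in d0 if B >= A]
    if not caps:
        return frozenset(A)   # no definable superset: fall back to A,
                              # cf. the extreme cases discussed below
    out = caps[0]
    for c in caps[1:]:
        out &= c
    return out
```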
When \(I\) is complete, i.e., all \(R_{i}(i=1,\cdots,n)\) are equivalence relations on \(X\), every atomic granule in \(d(\sigma(X))\) can be obtained from the atomic granules of \(P\), and then we can replace \(d(\sigma(X))\) with \(P\). However, we should examine two extreme cases: \(\forall B\in P,A\supset B\) and \(\forall B\in P,B\supset A\). We can define the upper approximation of \(A\) as \(A\) in the first case and define its lower approximation as \(A\) in the second case. When \(I\) is incomplete, not all of the atomic granules in \(d(\sigma(X))\) can be obtained from the atomic granules of \(P\); therefore, we cannot replace \(d(\sigma(X))\) with \(P\). It can be seen that the classical rough set model is only for complete information systems, while the above model works not only for complete information systems but also for incomplete ones. When \(X\) is a domain, we can divide it into \(n\) subdomains, which can be regarded as \(n\) objects, and the above model is also applicable. All the extended models developed from the classical rough set model can be accordingly redefined by \(d(\sigma(X))\) so as to be applicable to any information system, which will be discussed in another paper.
### _Rough Granule Model in Macro Knowledge Space_
Assume \(\Pi(\sigma(X))\) is the family of all equivalence granules on \(X\) and \(\Pi_{0}(\sigma(X))\) is the family of the empty granule and all equivalence granules on \(X\). Then, viewing \(I\) from the macro perspective, the whole space is \(\Pi_{0}(\sigma(X))\). There is no set-inclusion relation between two granules, and we must define a new relation.
**Definition II.3**.: _For any two equivalence relations \(R_{A},R_{B}\) over subsets of \(X\), assume that their corresponding equivalence granules are \(A\) and \(B\), respectively._
1. _If_ \(x,y\in X,xR_{A}y\to xR_{B}y,\) _then_ \(B\) _is coarser than_ \(A\) _(or_ \(A\) _is finer than_ \(B\)_), denoted by_ \(B\succeq A\) _(or_ \(A\preceq B\)_);_
2. _If_ \(B\succeq A\) _and_ \(R_{A}\subset R_{B}\)_, then_ \(B\) _is strictly coarser than_ \(A\) _(or_ \(A\) _is strictly finer than_ \(B\)_), denoted by_ \(B\succ A\) _(or_ \(A\prec B\)_);_
3. _If_ \(B\succeq A\) _and_ \(A\succeq B\)_, then two granules_ \(A\) _and_ \(B\) _are equal, denoted by_ \(A=B\)_._
\((\Pi_{0}(\sigma(X)),\succeq)\) is a complete bounded lattice [45], the elements of \(\Pi_{0}(\sigma(X))\) are in one-to-one correspondence with the vertices of the unit \(n\)-dimensional hypercube, and \(\Pi_{0}(\sigma(X))\) can be hierarchized by the coarse-fine relation. Any granule \(A\in\Pi(\sigma(X))\) is called \(R\)-definable under this information system if \(A\succ P\). Assume \(d(\Pi(\sigma(X)))\) is the family of all definable granules in \(\Pi(\sigma(X))\) and \(d_{0}(\Pi(\sigma(X)))\) is the family of \(P\) and all definable granules. Then \((d_{0}(\Pi(\sigma(X))),\succeq)\) is a complete bounded sublattice of \((\Pi_{0}(\sigma(X)),\succeq)\), and \(d_{0}(\Pi(\sigma(X)))\), which is closed under quotient meet and quotient join operations, is called the macro knowledge space generated from \(I\). Therefore, \(\Pi_{0}(\sigma(X))\) can be divided into two categories: \(d(\Pi(\sigma(X)))\) and \(\tilde{d}(\Pi_{0}(\sigma(X)))\), i.e., the family of all undefinable granules. \(\tilde{d}(\Pi_{0}(\sigma(X)))\) can be further divided into \(d_{r}(\Pi(\sigma(X)))\), i.e., the set of roughly definable granules, and \(\tilde{d}_{r}(\Pi_{0}(\sigma(X)))\), i.e., the set of roughly or totally undefinable granules.
For any granule \(A\) in \(\Pi(\sigma(X))\), its upper approximation is to find its least upper bound in \(d_{0}(\Pi(\sigma(X)))\), and its lower approximation is to find its greatest lower bound in it.
**Definition II.4**.: _The upper and lower approximations of granule \(A\) with respect to \(R\) can be defined as follows: for every \(B\in d(\Pi(\sigma(X)))\)_
\[\underline{R}(A) =\bigvee_{t}\{A\wedge_{t}B\mid A\succeq B\},\] \[\overline{R}(A) =\bigwedge_{t}\{A\vee_{t}B\mid B\succeq A\}. \tag{2}\]
The upper and lower approximations in the above model are not obtained from one of its tangent planes but from the \(n\)-dimensional space. Therefore, the model is also called the spatial rough granule model, which can be applied to any structural information system and to non-structural information systems as well. In particular, we have \(\underline{R}(A)=A\wedge_{t}P\) and \(\overline{R}(A)=A\vee_{t}P\) when \(I\) is complete.
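For a complete system, these approximations can therefore be computed directly with the helpers sketched in Section II-A; the example below (ours) illustrates this.

```python
# Reusing meet() and quotient_join() from the sketch in Section II-A:
# lower = A meet P, upper = A quotient-join P when I is complete.
A = [frozenset({1, 2, 3}), frozenset({4})]
P = [frozenset({1, 2}), frozenset({3, 4})]
print(meet(A, P))           # lower approximation: [{1, 2}, {3}, {4}]
print(quotient_join(A, P))  # upper approximation: [{1, 2, 3, 4}]
```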
## III Subsethood Measures of Two Granules
Measurement is the most important foundation of all computational theories, and measurement of information granules is naturally the keystone of granular computing. Many measures of information granules have been discussed in different areas in isolation, and most of them focus on measures of sets. We divide the measures into two classes: granularity (or coarseness) and fineness, where granularity measures the coarseness degree of a granule and fineness measures the fineness degree of a granule [45, 47]. People mainly discuss granularity, to the extent that many people confuse the concepts of granularity and granule; in fact, entropy is a kind of fineness measure. Measurement of granules is not just to know the granularity or the fineness of each granule, but to know the coarse-fine relation, similarity and difference between two granules. The conditional granularity and conditional fineness defined in [45, 47] show the degree of the coarse-fine relation between two granules; conditional granularity clearly reflects monotone increase and conditional fineness monotone decrease, while subsethood, in general, discusses monotone increase. In [47], we also showed that conditional granularity is a generalization of the subsethood measure, and that it holds the axiomatic properties of subsethood measures that Yao and Deng discussed in [24]. Conditional granularity and conditional fineness are named from the point of view of probability distributions, while subsethood is named from the point of view of set inclusion. We can extend the subsethood function to discuss monotone decrease so that it generalizes to conditional fineness. Either can be used to express the coarse-fine relation.
### _Subsethood of Two Atomic Granules_
Subsethood measures should satisfy certain axioms to be meaningful. Sinha and Dougherty [48] presented nine axioms for subsethood, of which the last five further restrict subsethood measures, and Young [12] mainly discussed the first four. Different scholars may define different axioms in different fields [9, 11, 15, 24, 49]. However, we can divide these axioms into two classes: basic axioms and extended axioms. The basic axioms are similar across studies, while the extended axioms may differ according to the properties of the empirical objects.
In many situations, it is more convenient to consider a normalized measure for which the maximum value is \(1\) and the minimum is \(0\). For any two atomic granules \(a,b\in\sigma(X)\), a subsethood measure should satisfy the following basic axioms: it reaches the maximum value if and only if \(a\subseteq b\), it reaches the minimum value if and only if \(a\cap b=\emptyset\), and its values belong to \([0,1]\); it should also be monotone, because set inclusion is a partial order relation.
**Definition III.1**.: _For any atomic granules \(a,b\in\sigma(X)\), a function \(sh:\sigma(X)\times\sigma(X)\longrightarrow[0,1]\) is called a normalized measure of subsethood if it satisfies the following two axioms (boundary conditions):_
1. \(sh(b,a)=1\Longleftrightarrow a\subseteq b\)_;_
2. \(sh(b,a)=0\Longleftrightarrow a\cap b=\emptyset\)_,_
_where the value \(sh(b,a)\) is the degree of \(a\) being a subset of \(b\)._
For the classical set inclusion, a set \(a\) is either a subset of another set \(b\) or not, i.e., \(sh(b,a)\) is either 1 or 0, and the conditions (A1) and (A2) are dual to each other. Some authors [50, 51] used a single implication:
\[a\subseteq b\Longrightarrow sh(b,a)=1.\]
That is, \(sh(b,a)\) reaches the maximum value if \(a\subseteq b\). However we may still have \(sh(b,a)=1\) even though \(\neg(a\subseteq b)\). Gomolinska [52, 53] considered the other single implication:
\[sh(b,a)=1\Longrightarrow a\subseteq b.\]
In this case, we can get \(a\subseteq b\) from \(sh(b,a)=1\), but not the other way around. Neither of the two single implications can faithfully reflect whether a set is a subset of another; only the double implication can.
For the general set inclusion, one set can be a subset of another to some degree, that is, \(sh(b,a)\), the degree of inclusion, can be any value between 0 and 1. When studying subsethood measures, (A1) is often taken as the only boundary condition for a normalized measure, which extends the subsethood function. If our purpose is to measure the degree of the coarse-fine relation of two granules, then the boundary conditions defined in Definition III.1 are the minimum requirements for subsethood measures to truthfully reflect the basic properties of the inclusion degree or coarse-fine degree, unless we do not consider the special case \(a\cap b=\emptyset\). If our purpose is only to judge whether a granule is coarser or finer than another granule, then the axiom (A1) alone is enough as the boundary condition for a normalized measure, and the focus is on monotonicity.
**Definition III.2**.: _For any three atomic granules \(a,b,c\in\sigma(X)\) on a universe \(X\), a measure of subsethood \(sh:\sigma(X)\times\sigma(X)\longrightarrow[0,1]\) is called a monotonically increasing measure if it satisfies the following monotone properties:_
1. \(b\subseteq c\Rightarrow sh(b,a)\leq sh(c,a)\)_;_
2. \(b\subseteq c\Rightarrow sh(a,c)\leq sh(a,b)\)_._
In [24], Yao and Deng discussed four monotone properties of subsethood measures among three sets \(a,b,c\in\sigma(X)\) as follows.

(M1) \(b\subseteq c\Rightarrow sh(b,a)\leq sh(c,a)\);

(M2) \(b\subseteq c\land(b\cap a=c\cap a)\Rightarrow sh(a,c)\leq sh(a,b)\);

(M3) \(b\subseteq c\Rightarrow sh(a,c)\leq sh(a,b)\);

(M4) \(a\subseteq b\subseteq c\Rightarrow sh(a,c)\leq sh(a,b)\).
Comparing the conditions (M1) and (M3), we see that the monotonicity of the function \(sh(a,b)\) is reversed with respect to that of the function \(sh(b,a)\), and we have (A3) \(\Rightarrow\) (A4) and (A4) \(\Rightarrow\) (A3). Therefore, (A3) or (A4) alone can be taken as the monotonically increasing condition of subsethood. In condition (M2), \(b\cap a=c\cap a\) is the greatest lower bound of \(a,b\) and \(c\), which reminds us to consider its dual question, that is, the corresponding least upper bound \(b\cup a=c\cup a\). Therefore, we have the following monotone properties.
(A5) \(b\subseteq c\land(b\cap a=c\cap a)\Rightarrow sh(b,a)\leq sh(c,a)\);

(A6) \(b\subseteq c\land(b\cap a=c\cap a)\Rightarrow sh(a,c)\leq sh(a,b)\);

(A7) \(b\subseteq c\land(b\cup a=c\cup a)\Rightarrow sh(b,a)\leq sh(c,a)\);

(A8) \(b\subseteq c\land(b\cup a=c\cup a)\Rightarrow sh(a,c)\leq sh(a,b)\);

(A9) \(a\subseteq b\subseteq c\Rightarrow sh(b,a)\leq sh(c,a)\);

(A10) \(a\subseteq b\subseteq c\Rightarrow sh(a,c)\leq sh(a,b)\);

(A11) \(b\subseteq c\subseteq a\Rightarrow sh(b,a)\leq sh(c,a)\);

(A12) \(b\subseteq c\subseteq a\Rightarrow sh(a,c)\leq sh(a,b)\).
The axioms (A5), (A7), (A9) and (A11) are weaker versions of (A3), i.e., (A3) \(\Rightarrow\) (A5), (A7), (A9) and (A11); the axioms (A6), (A8), (A10) and (A12) are weaker versions of (A4), i.e., (A4) \(\Rightarrow\) (A6), (A8), (A10) and (A12). Therefore, we need only discuss the axioms (A1), (A2), (A3) and (A4). The axioms (A5) and (A6) are the dual questions of (A7) and (A8) respectively, and the axioms (A9) and (A10) are the dual questions of (A11) and (A12) respectively.
Yao and Deng [24] reviewed existing subsethood measures, including \(sh_{l}\)[1, 2, 3, 5, 6, 9, 12, 15, 18, 19, 20, 52, 54, 55], \(sh_{\cap}\)[52, 56, 57, 58], \(sh_{\cup}\)[5, 57, 58], and \(sh_{\cap}^{c}\)[15, 51, 58], which have been considered in many studies. Most of them focus on fuzzy sets rather than crisp sets. Yao and Deng give the five subsethood measures of two crisp sets, with the corresponding probabilistic interpretations, as follows.
\[sh_{1}(b,a)=sh_{l}(b,a)=\frac{|a^{c}\cup b|}{|X|}=Pr(a^{c}\cup b);\]
\[sh_{2}(b,a)=sh_{\cap}(b,a)=\frac{|a\cap b|}{|a|}=Pr(b|a);\]
\[sh_{3}(b,a)=sh_{\cup}(b,a)=\frac{|b|}{|a\cup b|}=Pr(b|a\cup b);\]
\[sh_{4}(b,a)=sh_{\cup}^{c}(b,a)=\frac{|a^{c}|}{|a^{c}\cup b^{c}|}=Pr(a^{c}|a^{c} \cup b^{c});\]
\[sh_{5}(b,a)=sh_{\cap}^{c}(b,a)=\frac{|a^{c}\cap b^{c}|}{|b^{c}|}=Pr(a^{c}|b^{c}).\]
If the value of any of these subsethood measures equals 1, we can judge that the atomic granule \(a\) is a subset of \(b\). It can be seen that only \(sh_{\cap}\) satisfies both (A1) and (A2).
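For reference, the five crisp measures translate directly into code. The sketch below (ours) assumes \(a\) and \(b\) are nonempty subsets of \(X\) and that no denominator vanishes (e.g. \(b\neq X\) for \(sh_{5}\)).

```python
def sh1(b, a, X): return len((X - a) | b) / len(X)            # Pr(a^c U b)
def sh2(b, a, X): return len(a & b) / len(a)                  # Pr(b | a)
def sh3(b, a, X): return len(b) / len(a | b)                  # Pr(b | a U b)
def sh4(b, a, X): return len(X - a) / len((X - a) | (X - b))  # Pr(a^c | a^c U b^c)
def sh5(b, a, X): return len((X - a) & (X - b)) / len(X - b)  # Pr(a^c | b^c)
```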
**Definition III.3**.: _For any three atomic granules \(a,b,c\in\sigma(X)\), a measure of subsethood \(sh:\sigma(X)\times\sigma(X)\longrightarrow[0,1]\) is called a monotonically decreasing measure if it satisfies the following monotone properties:_
* \(b\subseteq c\Rightarrow sh(c,a)\leq sh(b,a)\)_;_
* \(b\subseteq c\Rightarrow sh(a,b)\leq sh(a,c)\)_._
Then these \(sh^{\prime}_{i}(\cdot,\cdot)=1-sh_{i}(\cdot,\cdot)(i=1,\cdots,5)\), which can be called supsethood, are the monotonically decreasing measures corresponding to \(sh_{i}(b,a)(i=1,\cdots,5)\), respectively, and every \(sh^{\prime}_{i}(b,a)(i=1,\cdots,5)\) can be used to define conditional fineness. For these \(sh^{\prime}_{i}(i=1,\cdots,5)\), we have
* \(sh^{\prime}_{i}(b,a)=0\Longleftrightarrow a\subseteq b\).
For \(sh^{\prime}_{2}\), we also have
* \(sh^{\prime}_{2}(b,a)=1\Longleftrightarrow a\cap b=\emptyset\).
### _Subsethood of Two Equivalence Granules_
A subsethood measure of two sets is a quantitative generalization of the set inclusion relation, and a subsethood measure of two granules should be a quantitative generalization of the coarse-fine relation.
**Definition III.4**.: _For any two equivalence granules \(A,B\) on \(X\),_

1. _a function_ \(sh(B,A)\rightarrow[0,1]\) _is called a normalized measure of conditional granularity or subsethood if it satisfies the following two axioms:_ (A1) \(sh(B,A)=\frac{m}{n}\Longleftrightarrow B\succeq A\); (A2) \(sh(B,A)=0\Longleftrightarrow A\wedge B=\emptyset\);

2. _a function_ \(sh(B,A)\rightarrow[0,1]\) _is called a normalized measure of conditional fineness or supsethood if it satisfies the following two axioms:_ (A1') \(sh(B,A)=0\Longleftrightarrow B\succeq A\); (A2') \(sh(B,A)=\frac{m}{n}\Longleftrightarrow A\wedge B=\emptyset\),

_where \(n\) is the cardinality of \(X\) and \(m\) is the smaller one of the cardinalities of the sets \(A\) and \(B\)._
The monotonically increasing and monotonically decreasing measures corresponding to conditional granularity and conditional fineness respectively can be defined as follows.
**Definition III.5**.: _For any three equivalence granules \(A,B,C\) on \(X\),_

1. _a measure of subsethood_ \(sh:\Pi(\sigma(X))\times\Pi(\sigma(X))\longrightarrow[0,1]\) _is called a monotonically increasing measure if it satisfies the following monotone properties:_ (A3) \(C\succeq B\Rightarrow sh(B,A)\leq sh(C,A)\); (A4) \(C\succeq B\Rightarrow sh(A,C)\leq sh(A,B)\);

2. _a measure of supsethood_ \(sh:\Pi(\sigma(X))\times\Pi(\sigma(X))\longrightarrow[0,1]\) _is called a monotonically decreasing measure if it satisfies the following monotone properties:_ (A3') \(C\succeq B\Rightarrow sh(C,A)\leq sh(B,A)\); (A4') \(C\succeq B\Rightarrow sh(A,B)\leq sh(A,C)\).
We also have (A3) \(\Rightarrow\) (A4) and (A4) \(\Rightarrow\) (A3), and (A3\({}^{\prime}\)) \(\Rightarrow\) (A4\({}^{\prime}\)) and (A4\({}^{\prime}\)) \(\Rightarrow\) (A3\({}^{\prime}\)). Therefore, (A3) or (A4) alone can be the monotonically increasing condition, and (A3\({}^{\prime}\)) or (A4\({}^{\prime}\)) alone can be the monotonically decreasing condition.
For any equivalence granules \(A,B,C\) on \(X\), the conditions (A5), \(\cdots\), (A12) and the conditions (A5'), \(\cdots\), (A12') are as follows.

(A5) \(C\succeq B\wedge(B\wedge A=C\wedge A)\Rightarrow sh(B,A)\leq sh(C,A)\);

(A6) \(C\succeq B\wedge(B\wedge A=C\wedge A)\Rightarrow sh(A,C)\leq sh(A,B)\);

(A7) \(C\succeq B\wedge(B\lor A=C\lor A)\Rightarrow sh(B,A)\leq sh(C,A)\);

(A8) \(C\succeq B\wedge(B\lor A=C\lor A)\Rightarrow sh(A,C)\leq sh(A,B)\);

(A9) \(C\succeq B\succeq A\Rightarrow sh(B,A)\leq sh(C,A)\);

(A10) \(C\succeq B\succeq A\Rightarrow sh(A,C)\leq sh(A,B)\);

(A11) \(A\succeq C\succeq B\Rightarrow sh(B,A)\leq sh(C,A)\);

(A12) \(A\succeq C\succeq B\Rightarrow sh(A,C)\leq sh(A,B)\);

(A5') \(C\succeq B\wedge(B\wedge A=C\wedge A)\Rightarrow sh(C,A)\leq sh(B,A)\);

(A6') \(C\succeq B\wedge(B\wedge A=C\wedge A)\Rightarrow sh(A,B)\leq sh(A,C)\);

(A7') \(C\succeq B\wedge(B\lor A=C\lor A)\Rightarrow sh(C,A)\leq sh(B,A)\);

(A8') \(C\succeq B\wedge(B\lor A=C\lor A)\Rightarrow sh(A,B)\leq sh(A,C)\);

(A9') \(C\succeq B\succeq A\Rightarrow sh(C,A)\leq sh(B,A)\);

(A10') \(C\succeq B\succeq A\Rightarrow sh(A,B)\leq sh(A,C)\);

(A11') \(A\succeq C\succeq B\Rightarrow sh(C,A)\leq sh(B,A)\);

(A12') \(A\succeq C\succeq B\Rightarrow sh(A,B)\leq sh(A,C)\).
The conditions (A5), (A7), (A9) and (A11) are weaker versions of (A3), i.e., (A3) \(\Rightarrow\) (A5), (A7), (A9) and (A11); the conditions (A6), (A8), (A10) and (A12) are weaker versions of (A4), i.e., (A4) \(\Rightarrow\) (A6), (A8), (A10) and (A12). The conditions (A5), (A6), (A7) and (A8) are special cases of (A9), (A10), (A11) and (A12), respectively. The conditions (A5) and (A6) are the dual questions of (A7) and (A8) respectively, and the conditions (A9) and (A10) are the dual questions of (A11) and (A12) respectively. The axiom (Ai) is the reverse of (Ai') \((i=1,\cdots,12)\). The first four are the basic properties.
Given two equivalence granules \(A=\{a_{1},\cdots,a_{k}\}\) and \(B=\{b_{1},\cdots,b_{l}\}\) on \(X\), the atomic granules of \(A\wedge B\) are the nonempty intersections \(a_{i}\cap b_{j}\) \((i=1,\cdots,k,\ j=1,\cdots,l)\), where \(|a_{i}\cap b_{j}|\) denotes the cardinality of \(a_{i}\cap b_{j}\). We can normalize these \(|a_{i}\cap b_{j}|(i=1,\cdots,k,j=1,\cdots,l)\) and get a probability distribution, which is called the probability distribution of the granule \(A\wedge B\) and denoted by \(P_{A\wedge B}\).
\[P_{A\wedge B} =(p(a_{1}\cap b_{1}),\cdots,p(a_{i}\cap b_{j}),\cdots,p(a_{k}\cap b _{l}))\] \[=\left(\frac{|a_{1}\cap b_{1}|}{|X|},\cdots,\frac{|a_{i}\cap b_{j }|}{|X|},\cdots,\frac{|a_{k}\cap b_{l}|}{|X|}\right), \tag{3}\]
where \(p(a_{i}\cap b_{j})\) indicates the proportion of \(X\) contained in \(a_{i}\cap b_{j}\). We have the following result.
**Theorem III.1**.: \[\sum_{i=1}^{k}\sum_{j=1}^{l}p(a_{i}\cap b_{j})\leq\frac{m}{n},\]
_where \(n\) is the cardinality of the universe \(X\) and \(m\) is the smaller one of the cardinalities of the sets \(A\) and \(B\)._
Proof.: Assume that the cardinality of \(A\) is the smaller one, i.e., \(|A|=m\); then we have
\[\sum_{i=1}^{k}\sum_{j=1}^{l}p(a_{i}\cap b_{j}) =\sum_{i=1}^{k}\frac{1}{|X|}(|a_{i}\cap b_{1}|+\cdots+|a_{i}\cap b_{ l}|)\] \[=\frac{1}{n}\sum_{i=1}^{k}|a_{i}\cap(b_{1}\cup\cdots\cup b_{l})|\] \[=\frac{1}{n}\sum_{i=1}^{k}|a_{i}\cap B|\] \[\leq\frac{1}{n}\sum_{i=1}^{k}|a_{i}|=\frac{m}{n}.\]
Given two equivalence granules \(A=\{a_{1},\cdots,a_{k}\}\) and \(B=\{b_{1},\cdots,b_{l}\}\) on \(X\), for each \(sh_{m}(m=1,\cdots,5)\) the conditional granularity of \(B\) with respect to \(A\) is defined as the expectation of \(sh_{m}\) with respect to the probability distribution of \(A\wedge B\).
**Definition III.6**.: \[G_{m}(B|A) =sh_{m}(B,A)=E_{P_{A\wedge B}}(sh_{m}(\cdot,\cdot))\] \[=\sum_{i=1}^{k}\sum_{j=1}^{l}p(a_{i}\cap b_{j})sh_{m}(b_{j},a_{i}).\] (4)
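A direct transcription of Definition III.6 (the helper name is ours; the \(sh\) functions are those sketched in Section III-A): the sum runs over the nonempty blocks of \(A\wedge B\), whose normalised cardinalities form the distribution \(P_{A\wedge B}\) of eq. (3).

```python
from itertools import product

def conditional_granularity(B, A, X, sh):
    """G_m(B|A): expectation of sh over the distribution of A meet B."""
    n = len(X)
    return sum((len(a & b) / n) * sh(b, a, X)
               for a, b in product(A, B) if a & b)
```

For example, with \(sh_{2}\) this reduces to \(\sum_{i,j}|a_{i}\cap b_{j}|^{2}/(n\,|a_{i}|)\) over the overlapping blocks.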
In general, we can take \(sh_{m}^{\prime}(\cdot,\cdot)=1-sh_{m}(\cdot,\cdot)(m=1,\cdots,5)\). Then, the expectations of \(sh_{m}^{\prime}(\cdot,\cdot)=1-sh_{m}(\cdot,\cdot)(m=1,\cdots,5)\) with respect to the probability distribution of \(A\wedge B\) is \(E_{P_{A\wedge B}}(sh_{m}^{\prime}(\cdot,\cdot))\)
\[=\sum_{i=1}^{k}\sum_{j=1}^{l}p(a_{i}\cap b_{j})sh_{m}^{\prime}(b_ {j},a_{i})\] \[=\sum_{i=1}^{k}\sum_{j=1}^{l}p(a_{i}\cap b_{j})(1-sh_{m}(b_{j},a_{ i}))\] \[=\sum_{i=1}^{k}\sum_{j=1}^{l}p(a_{i}\cap b_{j})-\sum_{i=1}^{k}\sum _{j=1}^{l}p(a_{i}\cap b_{j})sh_{m}(b_{j},a_{i})\] \[\leq\frac{m}{n}-G_{m}(B|A). \tag{5}\]
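Continuing the same toy example, the conditional granularity of Definition III.6 is a direct weighted sum. Since the concrete measures \(sh_{1},\cdots,sh_{5}\) are defined earlier in the paper and are not restated here, the sketch below assumes the simple inclusion degree \(sh(b,a)=|a\cap b|/|a|\) as a stand-in:

```python
def sh_inclusion(b, a):
    # Assumed stand-in subsethood measure: the degree to which a is included in b.
    return len(a & b) / len(a) if a else 0.0

def conditional_granularity(A, B, n, sh):
    """G(B|A) = sum_{i,j} p(a_i & b_j) * sh(b_j, a_i)  (Definition III.6)."""
    return sum((len(a & b) / n) * sh(b, a) for a in A for b in B)

# The same toy granules as in the previous sketch:
A = [{0, 1, 2}, {3, 4}]
B = [{0, 3}, {1, 2, 4}, {5, 6}]
n, m = 10, 5
G = conditional_granularity(A, B, n, sh_inclusion)
print(G, m / n - G)              # the conditional fineness of Definition III.7
```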
Given two equivalence granules \(A=\{a_{1},\cdots,a_{k}\}\) and \(B=\{b_{1},\cdots,b_{l}\}\) on \(X\), the conditional fineness of \(B\) with respect to \(A\) can be defined by
**Definition III.7**.: \[F_{i}(B|A)=\frac{m}{n}-G_{i}(B|A)\quad(i=1,\cdots,5).\]
**Theorem III.2**.: _For any equivalence granules \(A\) and \(B\) on \(X\), we have_
1. \(0\leq G_{i}(B|A)\leq 1(i=1,\cdots,5)\)_;_
2. \(0\leq F_{i}(B|A)\leq 1(i=1,\cdots,5)\)_._
**Theorem III.3**.: _For any equivalence granules \(A\) and \(B\) on \(X\), we have_
1. \(G_{i}(B|A)(i=1,\cdots,5)\) _satisfies the axiom (A2), namely,_ \(G_{i}(B|A)(i=1,\cdots,5)=0\Longleftrightarrow A\wedge B=\emptyset\)_;_
2. \(F_{i}(B|A)(i=1,\cdots,5)\) _satisfies the axiom (A2'), namely,_ \(F_{i}(B|A)(i=1,\cdots,5)=\frac{m}{n}\Longleftrightarrow A\wedge B=\emptyset\)_._
**Theorem III.4**.: _For any equivalence granule \(A\) on \(X\), we have_
1. \(G_{i}(A|\{X\})=G_{i}(A)(i=1,\cdots,5)\)_;_
2. \(F_{i}(A|\{X\})=F_{i}(A)(i=1,\cdots,5)\)_._
**Definition III.8**.: _Given two equivalence granules \(A=\{a_{1},\cdots,a_{k}\}\) and \(B=\{b_{1},\cdots,b_{l}\}\) on \(X\). If \(p(a_{i}\cap b_{j})=0\) for all \(i,j(i=1,\cdots,k,j=1,\cdots,l)\), then \(A\) and \(B\) are said to be independent; in particular, \(B\) is called the quotient complement of \(A\) if \(B\) has only one atomic granule._
**Theorem III.5**.: _For any two equivalence granules \(A\) and \(B\) on \(X\), we have_
1. \(A\) _and_ \(B\) _are independent if and only if_ \(G_{i}(B|A)=G_{i}(A|B)=0(i=1,\cdots,5)\)_;_
2. \(A\) _and_ \(B\) _are independent if and only if_ \(F_{i}(B|A)=F_{i}(A|B)=\frac{m}{n}(i=1,\cdots,5),\)__
_where \(n\) is the cardinality of the universe \(X\) and \(m\) is the smaller one of the cardinalities of the sets \(A\) and \(B\)._
We now prove that \(G_{i}(B|A)(i=1,\cdots,5)\) satisfies the axioms (A1), (A3) and (A4), and that \(F_{i}(B|A)(i=1,\cdots,5)\) satisfies the axioms (A1'), (A3') and (A4').
**Theorem III.6**.: _Assume that \(A=\{a_{1},\cdots,a_{k}\}\) and \(B=\{b_{1},\cdots,b_{l}\}\) are two equivalence granules on \(X\). Then_
1. \(A\) _is finer than_ \(B\) _if and only if_ \(G_{i}(B|A)=\frac{m}{n}(i=1,\cdots,5)\)_;_
2. \(A\) _is finer than_ \(B\) _if and only if_ \(F_{i}(B|A)=0(i=1,\cdots,5)\)_,_
_where \(n\) is the cardinality of the universe \(X\) and \(m\) is the smaller one of the cardinalities of the sets \(A\) and \(B\)._
The proofs are given in the Appendix. By the above theorem, we can get the following corollary.
**Corollary 1**.: _Assume that \(A=\{a_{1},\cdots,a_{k}\}\) and \(B=\{b_{1},\cdots,b_{l}\}\) are two quotient granules on \(X\). Then_
1. \(A\) _is finer than_ \(B\) _if and only if_ \(G_{i}(B|A)=1(i=1,\cdots,5)\)_;_
2. \(A\) _is finer than_ \(B\) _if and only if_ \(F_{i}(B|A)=0(i=1,\cdots,5)\)_._
**Lemma III.7**.: _For any two equivalence granules \(B=\{b_{1},\cdots,b_{l+1}\}\) and \(C=\{c_{1},\cdots,c_{l}\}\) on \(X\). If \(b_{l}\cup b_{l+1}\subseteq c_{l},b_{i}=c_{i}(i=1,\cdots,l-1),\) then for any equivalence granule \(A=\{a_{1},\cdots,a_{k}\}\) on \(X\), we have_
1. \(G_{i}(B|A)\leq G_{i}(C|A);\)__
2. \(G_{i}(A|C)\leq G_{i}(A|B).\)__
The proofs are given in the Appendix. Accordingly, we have the following result.
**Lemma III.8**.: _For any two equivalence granules \(B=\{b_{1},\cdots,b_{l+1}\}\) and \(C=\{c_{1},\cdots,c_{l}\}\) on \(X\). If \(b_{l}\cup b_{l+1}\subseteq c_{l},b_{i}=c_{i}(i=1,\cdots,l-1),\) then, for any equivalence granule \(A=\{a_{1},\cdots,a_{k}\}\) on \(X\), we have_
1. \(F_{i}(C|A)\leq F_{i}(B|A)\);
2. \(F_{i}(A|B)\leq F_{i}(A|C)\).
Let \(B=\{b_{1},\cdots,b_{l}\}\) and \(C=\{c_{1},\cdots,c_{m}\}\) be two equivalence granules on \(X\) such that \(B\) is finer than \(C\). For any \(c_{j}\) in \(C\), there are two cases: either \(c_{j}\) equals some \(b_{i}\), or \(c_{j}\) is the union of several \(b_{i}\)'s. By repeatedly applying the above Lemmas, we can easily get the following two theorems.
**Theorem III.9**.: _For any three equivalence granules \(A=\{a_{1},\cdots,a_{k}\},B=\{b_{1},\cdots,b_{l}\}\) and \(C=\{c_{1},\cdots,c_{m}\}\) on \(X\), we have, for \(i=1,\cdots,5\),_
1. \(C\succeq B\Rightarrow G_{i}(B|A)\leq G_{i}(C|A)\)_;_
2. \(C\succeq B\Rightarrow G_{i}(A|C)\leq G_{i}(A|B)\)_._
**Theorem III.10**.: _For any three equivalence granules \(A=\{a_{1},\cdots,a_{k}\},B=\{b_{1},\cdots,b_{l}\}\) and \(C=\{c_{1},\cdots,c_{m}\}\) on \(X\), we have, for \(i=1,\cdots,5\),_
1. \(C\succeq B\Rightarrow F_{i}(C|A)\leq F_{i}(B|A)\)_;_
2. \(C\succeq B\Rightarrow F_{i}(A|B)\leq F_{i}(A|C)\)_._
All \(sh_{i}(i=1,\cdots,5)\) satisfy the axioms (A1), (A2), (A3) and (A4) or the axioms (A1\({}^{\prime}\)), (A2\({}^{\prime}\)), (A3\({}^{\prime}\)) and (A4\({}^{\prime}\)). The axioms (A1) and (A2) (or (A1\({}^{\prime}\)) and (A2\({}^{\prime}\))) are the two normalized boundary conditions. However, the special case \(A\wedge B=\emptyset\) does not occur when the granules come from a complete information system or subsystem. Therefore, it is reasonable to regard (A1) (or (A1\({}^{\prime}\))) as the normalized boundary condition. The axioms (A3) and (A4) (or (A3\({}^{\prime}\)) and (A4\({}^{\prime}\))) are monotone conditions, which can be replaced by their weaker axioms (A5) and (A6) (or (A7) and (A8), or (A9) and (A10), or (A11) and (A12)) or by the axioms (A5\({}^{\prime}\)) and (A6\({}^{\prime}\)) (or (A7\({}^{\prime}\)) and (A8\({}^{\prime}\)), or (A9\({}^{\prime}\)) and (A10\({}^{\prime}\)), or (A11\({}^{\prime}\)) and (A12\({}^{\prime}\))); any one of the monotone conditions alone can also be regarded as the monotone condition, because they imply each other. Thus, the boundary condition (A1) and any one of the monotone conditions constitute the basic axioms.
### _Subsethood Entropy_
Entropy, an important concept of thermodynamics, was introduced by the German physicist Rudolf Clausius in 1865 [59]. The term entropy has since been used in various areas such as chemistry, physics, biology, cosmology, economics, statistics, sociology, weather science, and information science. The concept of information entropy was introduced by C. E. Shannon, the founder of information theory, in 1948 [60]. Information entropy was introduced to measure the granularity of each partition [25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42]. After that, many other entropies, such as Hartley entropy, collision entropy, Rényi entropy, and min-entropy, have been introduced to measure the granularity or fineness of equivalence granules. Accordingly, the subsethood measures \(sh_{i}(i=1,\cdots,5)\) can also be generalized to their corresponding subsethood entropies via the probability distribution of the meet of two granules in Equation (3).
Assume \(A=\{a_{1},\cdots,a_{k}\}\) and \(B=\{b_{1},\cdots,b_{l}\}\) are two equivalence granules on \(X\). For each \(sh_{i}(i=1,\cdots,5)\), its corresponding subsethood entropy can be defined as follows.
**Definition III.9**.: \[H_{i}^{\prime}(B|A) =H_{sh_{i}}^{\prime}(B|A)=E_{P_{A\wedge B}}(-\log sh_{i}(\cdot,\cdot))\] \[=-\sum_{s=1}^{k}\sum_{j=1}^{l}p(a_{s}\cap b_{j})\log sh_{i}(b_{j},a_{s}).\] (6)
\(H_{i}^{\prime}(B|A)\) is a monotonically decreasing function, and it is also called the conditional fineness entropy of \(B\) with respect to \(A\). Now take \(sh_{i}^{\prime}(\cdot,\cdot)=n\,sh_{i}(\cdot,\cdot)(i=1,\cdots,5)\). Then the expectation of \(\log sh_{i}^{\prime}(\cdot,\cdot)\) with respect to the probability distribution of \(A\wedge B\) is \(E_{P_{A\wedge B}}(\log sh_{i}^{\prime}(\cdot,\cdot))\)
\[=\sum_{s=1}^{k}\sum_{j=1}^{l}p(a_{s}\cap b_{j})\log sh_{i}^{\prime}(b_{j},a_{s})\] \[=\sum_{s=1}^{k}\sum_{j=1}^{l}p(a_{s}\cap b_{j})(\log n+\log sh_{i}(b_{j},a_{s}))\] \[=\log n\sum_{s=1}^{k}\sum_{j=1}^{l}p(a_{s}\cap b_{j})+\sum_{s=1}^{k}\sum_{j=1}^{l}p(a_{s}\cap b_{j})\log sh_{i}(b_{j},a_{s})\] \[\leq\frac{m}{n}\log n-H_{i}^{\prime}(B|A). \tag{7}\]
Therefore, for any two equivalence granules \(A\) and \(B\) on \(X\), the conditional granularity entropy of \(B\) with respect to \(A\) can also be defined by
**Definition III.10**.: \[H_{i}(B|A) =H_{sh_{i}}(B|A)\] \[=\frac{m}{n}\log n-H_{i}^{\prime}(B|A)(i=1,\cdots,5).\]
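For completeness, a sketch of the entropies of Definitions III.9 and III.10, under the same assumed stand-in measure as above; terms with \(p=0\) are skipped, since they contribute nothing:

```python
import math

def conditional_fineness_entropy(A, B, n, sh):
    """H'(B|A) = -sum_{i,j} p(a_i & b_j) * log sh(b_j, a_i)  (Definition III.9)."""
    h = 0.0
    for a in A:
        for b in B:
            p = len(a & b) / n
            if p > 0:            # sh(b, a) > 0 whenever a & b is nonempty
                h -= p * math.log(sh(b, a))
    return h

def conditional_granularity_entropy(A, B, n, m, sh):
    """H(B|A) = (m/n) * log n - H'(B|A)  (Definition III.10)."""
    return (m / n) * math.log(n) - conditional_fineness_entropy(A, B, n, sh)
```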
Among these conditional granularities and conditional finenesses, by Theorem III.4 we have \(G(A|\{X\})=G(A)\) and \(F(A|\{X\})=F(A)\) for any equivalence granule \(A\) on \(X\), and thus we can analogously define
**Definition III.11**.:
1. \(H_{i}(A)=H_{i}(A|\{X\})(i=1,\cdots,5)\)_;_
2. \(H_{i}^{\prime}(A)=H_{i}^{\prime}(A|\{X\})(i=1,\cdots,5)\)_._
By the above definitions, we can easily get the following theorems.
**Theorem III.11**.: _For any two equivalence granules \(A\) and \(B\) on \(X\), we have_
1. \(0\leq H_{i}(B|A)\leq\log n(i=1,\cdots,5)\)_;_
2. \(0\leq H_{i}^{\prime}(B|A)\leq\log n(i=1,\cdots,5)\)_._
**Theorem III.12**.: _Given a universe \(X\). For any two granules \(A=\{a_{1},\cdots,a_{k}\}\) and \(B=\{b_{1},\cdots,b_{l}\}\) on \(X\), we have_
1. \(B\succeq A\Rightarrow H_{i}^{\prime}(B|A)=0\)_;_
2. _if_ \(H_{i}^{\prime}(B|A)=0\)_, then_ \(A\) _is finer than_ \(B\) _or_ \(A\) _and_ \(B\) _are independent._
Proof.:
1. If \(A\) is finer than \(B\), then for any \(a_{i}(i=1,\cdots,k)\) there exists exactly one \(b_{j}(j\in\{1,\cdots,l\})\) such that \(a_{i}\subseteq b_{j}\). Then \(\log sh(b_{j},a_{i})=0\), because all \(sh_{i}(b,a)(i=1,\cdots,5)\) attain the maximum \(1\) when \(a\subseteq b\), i.e., \(a\cap b=a\). For every other \(h\neq j\in\{1,\cdots,l\}\) we have \(p(a_{i}\cap b_{h})=0\). Therefore \(p(a_{i}\cap b_{j})\log sh(b_{j},a_{i})=0(i=1,\cdots,k,j=1,\cdots,l)\), and thus \(H_{i}^{\prime}(B|A)=0\).
2. Every term of \(-\sum\limits_{i=1}^{k}\sum\limits_{j=1}^{l}p(a_{i}\cap b_{j})\log sh(b_{j},a_{i})\) is nonnegative, so if \(H_{i}^{\prime}(B|A)=0\) then \(p(a_{i}\cap b_{j})\log sh(b_{j},a_{i})=0(i=1,\cdots,k,j=1,\cdots,l)\). There are two cases. If \(p(a_{i}\cap b_{j})=0\) for all \(i,j(i=1,\cdots,k,j=1,\cdots,l)\), then \(A\) and \(B\) are independent. Otherwise, for each \(a_{i}(i\in\{1,\cdots,k\})\) and each \(j=1,\cdots,l\), either \(p(a_{i}\cap b_{j})=0\) or \(sh(b_{j},a_{i})=1\), that is, \(|a_{i}\cap b_{j}|=0\) or \(a_{i}\subseteq b_{j}\). By the definition of equivalence granules, for each \(a_{i}(i\in\{1,\cdots,k\})\) there exists exactly one \(j\in\{1,\cdots,l\}\) with \(a_{i}\subseteq b_{j}\), and so \(A\) is finer than \(B\).
**Theorem III.13**.: _Given a universe \(X\). For any two equivalence granules \(A=\{a_{1},\cdots,a_{k}\}\) and \(B=\{b_{1},\cdots,b_{l}\}\) on \(X\), we have_
1. \(B\succeq A\Rightarrow H_{i}(B|A)=\frac{m}{n}\log n\)_;_
2. _if_ \(H_{i}(B|A)=\frac{m}{n}\log n\)_, then_ \(A\) _is finer than_ \(B\) _or_ \(A\) _and_ \(B\) _are independent,_
_where \(n\) is the cardinality of \(X\) and \(m\) is the smaller one of the cardinalities of \(A\) and \(B\)._
It can be seen that \(H_{i}(B|A)\) does not satisfy axiom (A1) and \(H_{i}^{\prime}(B|A)\) does not satisfy axiom (A1\({}^{\prime}\)) even if they are normalized. However, for any two equivalence granules \(A=\{a_{1},\cdots,a_{k}\}\) and \(B=\{b_{1},\cdots,b_{l}\}\) in a complete information system on \(X\), we have the following result.
**Corollary 2**.:
1. \(B\succeq A\Longleftrightarrow H_{i}(B|A)=\frac{m}{n}\log n\)_;_
2. \(B\succeq A\Longleftrightarrow H_{i}^{\prime}(B|A)=0\)_,_
_where \(n\) is the cardinality of \(X\) and \(m\) is the smaller one of the cardinalities of \(A\) and \(B\)._
That means \(H_{i}(B|A)\) satisfies axiom (A1) and \(H_{i}^{\prime}(B|A)\) satisfies axiom (A1\({}^{\prime}\)) if they are normalized. However, \(H_{i}(B|A)\) does not satisfy axiom (A2) and \(H_{i}^{\prime}(B|A)\) does not satisfy axiom (A2\({}^{\prime}\)).
**Corollary 3**.: _Assume that \(A=\{a_{1},\cdots,a_{k}\}\) and \(B=\{b_{1},\cdots,b_{l}\}\) are two quotient granules on \(X\). Then_
1. \(A\) _is finer than_ \(B\) _if and only if_ \(H_{i}(B|A)=\log n(i=1,\cdots,5)\)_;_
2. \(A\) _is finer than_ \(B\) _if and only if_ \(H_{i}^{\prime}(B|A)=0(i=1,\cdots,5)\)_._
Because \(H_{i}\) and \(H_{i}^{\prime}\) have the same monotonicity as \(G_{i}\) and \(F_{i}\), respectively, we have the following result.
**Lemma III.14**.: _For any two equivalence granules \(B=\{b_{1},\cdots,b_{l+1}\}\) and \(C=\{c_{1},\cdots,c_{l}\}\) on \(X\). If \(b_{l}\cup b_{l+1}\subseteq c_{l},b_{i}=c_{i}(i=1,\cdots,l-1),\) then for any equivalence granule \(A=\{a_{1},\cdots,a_{k}\}\) on \(X\), we have_
1. \(H_{i}(B|A)\leq H_{i}(C|A)\) _and_ \(H_{i}(A|C)\leq H_{i}(A|B);\)__
2. \(H_{i}^{\prime}(C|A)\leq H_{i}^{\prime}(B|A)\) _and_ \(H_{i}^{\prime}(A|B)\leq H_{i}^{\prime}(A|C).\)__
Let \(B=\{b_{1},\cdots,b_{l}\}\) and \(C=\{c_{1},\cdots,c_{m}\}\) be two equivalence granules on \(X\) such that \(B\) is finer than \(C\). For any \(c_{j}\) in \(C\), either \(c_{j}\) equals some \(b_{i}\), or \(c_{j}\) is the union of several \(b_{i}\)'s. By repeated use of the above Lemma, we can easily get the following two theorems.
**Theorem III.15**.: _For any three equivalence granules \(A=\{a_{1},\cdots,a_{k}\},B=\{b_{1},\cdots,b_{l}\}\) and \(C=\{c_{1},\cdots,c_{m}\}\) on \(X\), we have, for \(i=1,\cdots,5,\)_
1. \(C\succeq B\Rightarrow H_{i}(B|A)\leq H_{i}(C|A)\)_;_
2. \(C\succeq B\Rightarrow H_{i}(A|C)\leq H_{i}(A|B)\)_._
**Theorem III.16**.: _For any three equivalence granules \(A=\{a_{1},\cdots,a_{k}\},B=\{b_{1},\cdots,b_{l}\}\) and \(C=\{c_{1},\cdots,c_{m}\}\) on \(X\), we have, for \(i=1,\cdots,5,\)_
1. \(C\succeq B\Rightarrow H_{i}^{\prime}(C|A)\leq H_{i}^{\prime}(B|A)\)_;_
2. \(C\succeq B\Rightarrow H_{i}^{\prime}(A|B)\leq H_{i}^{\prime}(A|C)\)_._
## IV Conclusion
GrC imitates two types of granulation processes in human cognition: the micro granular analysis process and the macro granular analysis process. Micro granular analysis focuses on the parts while macro granular analysis focuses on the whole. All the knowledge generated in the process of micro granular analysis constitutes a micro knowledge space, and all the knowledge generated in the process of macro granular analysis constitutes a macro knowledge space. Viewing an information system from the micro perspective, we get a micro knowledge space, and viewing it from the macro perspective, we get a macro knowledge space, from which we obtain the rough set model and the spatial rough granule model, respectively. The classical rough set model can only be used for complete information systems, while the rough set model obtained from the micro knowledge space can also be used for incomplete information systems; moreover, the universe of discourse can be any domain. The spatial rough granule model will play a pivotal role in the problem solving of structures such as graph partition, image processing, face recognition, 3D technologies, etc.
Subsethood measures have been well studied and generally accepted in many fields other than fuzzy sets and rough sets. Subsethood measures, which are used to measure the set-inclusion relation between two sets, are here generalized to measure the coarse-fine relation between two granules. This paper defines conditional granularity, conditional fineness, conditional granularity entropy and conditional fineness entropy and discusses their properties, including the coarse-fine relation determination theorems; all of these are important foundations for learning and reasoning about structural problems. These measures can also be used for fuzzy granules, and they have a close relation with similarity and difference, which will be studied in the future.
## Proof of Theorem III.6
We only prove the case \(G_{1}(B|A)=\frac{m}{n}\); the others are similar.
Proof.: The sufficiency is obvious. Now we prove its necessity. We may assume that \(|A|=m\leq|B|\). By Definition III.6, we have \(G_{1}(B|A)=sh_{1}(B,A)\)
\[=\sum_{i=1}^{k}\sum_{j=1}^{l}\frac{|a_{i}\cap b_{j}|}{|X|}\times \frac{|a_{i}^{c}\cup b_{j}|}{|X|}\] \[=\sum_{i=1}^{k}\frac{1}{|X|^{2}}\sum_{j=1}^{l}|a_{i}\cap b_{j}|(|X- a_{i}|+|a_{i}\cap b_{j}|)\] \[=\sum_{i=1}^{k}\frac{1}{|X|^{2}}\left(\sum_{j=1}^{l}|a_{i}\cap b_{ j}||X-a_{i}|+\sum_{j=1}^{l}|a_{i}\cap b_{j}|^{2}\right)\]
Assume the union of all \(b_{j}\) is the set \(B\). Then \(|a_{i}\cap b_{1}|+\cdots+|a_{i}\cap b_{l}|=|a_{i}\cap(b_{1}\cup\cdots\cup b_{l} )|=|a_{i}\cap B|=|a_{i}|\). Thus
\[\sum_{i=1}^{k}\frac{1}{|X|^{2}}\left(\sum_{j=1}^{l}|a_{i}\cap b_{ j}||X-a_{i}|+\sum_{j=1}^{l}|a_{i}\cap b_{j}|^{2}\right)\] \[\leq\sum_{i=1}^{k}\frac{(|a_{i}||X-a_{i}|+|a_{i}|^{2})}{|X|^{2}}\] \[=\sum_{i=1}^{k}\frac{|a_{i}|}{|X|}\frac{|X-a_{i}|+|a_{i}|}{|X|}\] \[=\sum_{i=1}^{k}\frac{|a_{i}|}{|X|}=\frac{m}{n}\]
The term \(\sum_{j}|a_{i}\cap b_{j}|^{2}\) reaches its maximum \(|a_{i}|^{2}\) exactly when there exists some \(b_{h}\) such that \(|a_{i}\cap b_{h}|=|a_{i}|\) and \(|a_{i}\cap b_{j}|=0(j\neq h,j\in I)\); that is, for any \(a_{i}\) there must exist some \(b_{h}\) which satisfies \(a_{i}\cap b_{h}=a_{i}\) and \(a_{i}\cap b_{j}=\emptyset(j\neq h,j\in I)\). Therefore \(A\) is finer than \(B\).
## Proof of Lemma III.14
We only prove the case of \(sh_{1}\); the others are similar.
Proof.: We only prove (2) here.
Suppose there are \(h(0\leq h\leq|c_{l}|)\) equivalence classes in \(A\) intersecting \(c_{l}\). When \(h=0\) we have
\[sh_{1}(A,B) =\sum_{i=1}^{l+1}\frac{1}{|X|^{2}}\sum_{j=1}^{k}|b_{i}\cap a_{j}||b_{i}^{c}\cup a_{j}|\] \[=\sum_{i=1}^{l-1}\frac{1}{|X|^{2}}\sum_{j=1}^{k}|b_{i}\cap a_{j}||b_{i}^{c}\cup a_{j}|\] \[=\sum_{i=1}^{l-1}\frac{1}{|X|^{2}}\sum_{j=1}^{k}|c_{i}\cap a_{j}||c_{i}^{c}\cup a_{j}|=sh_{1}(A,C)\]
When \(1\leq h\leq|c_{l}|\), let these classes be \(a_{1},\cdots,a_{h}\); then we have
\[sh_{1}(A,B) =\sum_{i=1}^{l+1}\frac{1}{|X|^{2}}\sum_{j=1}^{k}|b_{i}\cap a_{j}||b_{i}^{c}\cup a_{j}|\] \[=\frac{\sum_{j=1}^{h}\left(|b_{l}\cap a_{j}||b_{l}^{c}\cup a_{j}|+|b_{l+1}\cap a_{j}||b_{l+1}^{c}\cup a_{j}|\right)}{|X|^{2}}\] \[\quad+\frac{\sum_{i=1}^{l-1}\sum\limits_{j=1}^{k}|b_{i}\cap a_{j}||b_{i}^{c}\cup a_{j}|}{|X|^{2}}\] \[sh_{1}(A,C) =\sum_{i=1}^{l}\frac{1}{|X|^{2}}\sum_{j=1}^{k}|c_{i}\cap a_{j}||c_{i}^{c}\cup a_{j}|\] \[=\frac{\sum\limits_{j=1}^{h}|c_{l}\cap a_{j}||c_{l}^{c}\cup a_{j}|}{|X|^{2}}+\frac{\sum\limits_{i=1}^{l-1}\sum\limits_{j=1}^{k}|c_{i}\cap a_{j}||c_{i}^{c}\cup a_{j}|}{|X|^{2}}\]
Since \(b_{i}=c_{i}(i=1,\cdots,l-1)\), and \(|c_{l}\cap a_{j}||c_{l}^{c}\cup a_{j}|\)
\[=|(b_{l}\cup b_{l+1})\cap a_{j}||(b_{l}\cup b_{l+1})^{c}\cup a_{j}|\] \[=(|b_{l}\cap a_{j}|+|b_{l+1}\cap a_{j}|)\,|(b_{l}^{c}\cup a_{j})\cap(b_{l+1}^{c}\cup a_{j})|\] \[\leq|b_{l}\cap a_{j}||b_{l}^{c}\cup a_{j}|+|b_{l+1}\cap a_{j}||b_{l+1}^{c}\cup a_{j}|,\]
we thus have \(sh_{1}(A,C)\leq sh_{1}(A,B)\).
## Acknowledgments
This work is partially supported by a Discovery Grant from NSERC Canada.
|
2304.02846 | Synthetic Sample Selection for Generalized Zero-Shot Learning | Generalized Zero-Shot Learning (GZSL) has emerged as a pivotal research
domain in computer vision, owing to its capability to recognize objects that
have not been seen during training. Despite the significant progress achieved
by generative techniques in converting traditional GZSL to fully supervised
learning, they tend to generate a large number of synthetic features that are
often redundant, thereby increasing training time and decreasing accuracy. To
address this issue, this paper proposes a novel approach for synthetic feature
selection using reinforcement learning. In particular, we propose a
transformer-based selector that is trained through proximal policy optimization
(PPO) to select synthetic features based on the validation classification
accuracy of the seen classes, which serves as a reward. The proposed method is
model-agnostic and data-agnostic, making it applicable to both images and
videos and versatile for diverse applications. Our experimental results
demonstrate the superiority of our approach over existing feature-generating
methods, yielding improved overall performance on multiple benchmarks. | Shreyank N Gowda | 2023-04-06T03:22:43Z | http://arxiv.org/abs/2304.02846v1 | # Synthetic Sample Selection for Generalized Zero-Shot Learning
###### Abstract
Generalized Zero-Shot Learning (GZSL) has emerged as a pivotal research domain in computer vision, owing to its capability to recognize objects that have not been seen during training. Despite the significant progress achieved by generative techniques in converting traditional GZSL to fully supervised learning, they tend to generate a large number of synthetic features that are often redundant, thereby increasing training time and decreasing accuracy. To address this issue, this paper proposes a novel approach for synthetic feature selection using reinforcement learning. In particular, we propose a transformer-based selector that is trained through proximal policy optimization (PPO) to select synthetic features based on the validation classification accuracy of the seen classes, which serves as a reward. The proposed method is model-agnostic and data-agnostic, making it applicable to both images and videos and versatile for diverse applications. Our experimental results demonstrate the superiority of our approach over existing feature-generating methods, yielding improved overall performance on multiple benchmarks.
## 1 Introduction
In recent years, deep learning [17, 19, 21, 43] has made remarkable strides in recognition accuracy, approaching human levels. However, the practical implementation of deep learning models is limited by the need for a significant number of labeled samples and the high cost of large-scale datasets [4]. It is often infeasible to collect sufficient labeled data for all classes, presenting a significant challenge for traditional supervised learning methods. To address this challenge, several approaches have been proposed, including semi-supervised learning, transfer learning, and few-shot learning. Zero-shot learning (ZSL) [36] is a subset of these methods, which refers to tasks where there is no training data available for unseen classes and disjoint label sets between training and test data.
ZSL is a unique form of cross-modal retrieval learning that relies on knowledge transfer from known classes to unknown classes through attribute sharing. The most common ZSL techniques use an intermediate semantic representation, such as visual features or semantic word vectors [7, 24, 44, 61], shared between the labeled auxiliary dataset and the unlabeled target dataset. The projection from the low-level feature space to the semantic space is learned from the auxiliary dataset, without adapting to the target dataset. The Generalized ZSL scenario, where both seen and unseen class samples are available at test time, is considered to be more realistic than the standard ZSL setup [51].
In Generalized ZSL, classifiers tend to be biased towards seen classes, leading to the misclassification of test samples from unseen classes as seen classes. To address the problem of the lack of visual data for unseen classes, researchers have proposed the use of Generative Adversarial Networks (GAN) [12] to generate synthetic visual features by leveraging attribute databases.
Figure 1: Comparison of feature-generating frameworks pipelines (a) standard pipeline where features that look “real” are used to train the generator (b) proposed pipeline where the generator is trained based on the performance of the seen class classifier.
However, while GANs have helped in zero-shot learning [6, 48, 52, 63], they do not explicitly learn to represent data in a structured way that is easily interpretable by humans or other models. They also suffer from the problems of mode collapse [22], class imbalance [1], and computational expense [18] when generating high-dimensional data. Most importantly, many of the generated synthetic samples are used directly for training a classifier without studying whether these samples actually help the classifier learn; instead, these samples are chosen based on "realness". Figure 1 shows a comparison of the standard pipeline and our proposed pipeline for feature-generating approaches.
To address the limitations of GANs in synthetic feature selection, we propose a novel reinforcement learning-based approach that automatically selects generated features that improve model performance. Specifically, we use a transformer model [47] for synthetic sample selection and use validation classification accuracy as the reward for RL training. We employ the proximal policy optimization (PPO) [42] algorithm to update the model weights during training. Our proposed approach aims to pick samples that help classification and not just generate real-looking samples. We dub our synthetic sample selection method as "**SPOT**" for **S**election using **P**roximal policy **O**p**T**imization.
Furthermore, our proposed approach is model-agnostic and data-agnostic, as we evaluate our method on multiple benchmark datasets in images and videos and various feature-generating models. Our comprehensive experiments demonstrate that our approach consistently improves model performance across different datasets and models, highlighting the effectiveness and versatility of our proposed method. By leveraging RL-based synthetic feature selection, we can more effectively generate synthetic data that captures the underlying structure of the data, improving the generalization performance of downstream models.
## 2 Related Work
Zero-Shot Learning in ImagesZero-shot learning (ZSL) is a challenging problem in computer vision, where the task is to recognize object categories without any training examples for them. Various approaches have been proposed to solve this problem in images. One of the early works [26] in this field used attributes, such as color and shape, to describe the object categories and mapped them to a visual space. They then used a nearest-neighbor classifier to recognize unseen object categories. However, this approach suffers from the semantic gap problem, where the attributes do not always correlate well with the visual features.
To address this problem, more recent works have explored the use of deep learning techniques to learn a joint embedding space for the visual and semantic features. One such approach is proposed by Frome et al. (2013) [7], where they used a deep neural network to learn a joint embedding space for the visual and textual features of the objects. They then used a nearest-neighbor classifier to recognize unseen object categories. Another approach is proposed by Socher et al. (2013) [44], where they used a recursive neural network to learn a compositional representation of the textual descriptions of the object categories.
More recently, there has been a trend towards using generative models to solve the ZSL problem. One such approach is proposed by Xian et al. [52], where they used a generative adversarial network (GAN) to generate visual features for unseen object categories. They then used a joint embedding space to match the generated features with the semantic features and recognize unseen object categories. Another approach is proposed by Schonfeld et al. (2019) [40], where they used a GAN to generate visual features conditioned on the textual descriptions of the object categories. ZeroGen [59] uses pre-trained language models to synthesize a dataset for a zero-shot task and then trains a small task model on it. NereNet [29] generates unseen samples by combining noise from similar seen classes with unseen class attributes using a GAN. CMC-GAN [58] performs data hallucination of unseen classes by performing semantics-guided intra-category knowledge transfer across image categories.
We add our proposed module of synthetic sample selection to multiple feature-generating frameworks and show that this leads to improved performance for all these models across all datasets.
Zero-Shot Learning in VideosThe initial study by Rohrbach et al. [39] utilized script data from cooking activities to facilitate the transfer of knowledge to unseen categories. Gan et al. [10] considered each action class as a domain and tackled the problem of identifying semantic representations as a multisource domain generalization task. To extract semantic embeddings of class labels, popular approaches employ label embeddings such as word2vec [31], which solely requires class names. Several methods have used a shared embedding space between video features and class labels [56, 57], error-correcting codes [38], pairwise relationships between classes [8], interclass relationships [9], out-of-distribution detectors [30], synthetic features [32, 33] and graph neural networks [11].
Recently, it has been observed that clustering joint visual-semantic features results in better representations for zero-shot action recognition [15]. Similar to CLASTER, ReST [28] jointly encodes video data and textual labels for zero-shot action recognition. In ReST, transformers are utilized to conduct modality-specific attention. On the other hand, JigSawNet [37] models visual and textual features jointly but disassembles videos into atomic actions in an unsupervised manner and establishes group-to-group relationships between visual and semantic representations instead of the one-to-one relationships that CLASTER and ReST establish.
However, since we only evaluate on feature generating approaches, we compare directly to OD [30], CLSWGAN [52], GGM [33] and Bi-dir GAN [32] and show that using our synthetic feature selection approach all methods can be improved significantly.
Reinforcement Learning for Data ValuationThe quantification of data value for a specific machine learning task, known as data valuation [14, 23, 50], has numerous applications such as domain adaptation, corrupted sample detection, and robust learning. Various techniques have been proposed to estimate data values based on different criteria, including influence functions, Shapley values, leave-one-out errors, and data deletion. However, these methods are computationally expensive, necessitate model perturbations or retraining, and do not consider the interactions among data points. Recently, an adaptive approach to data valuation using reinforcement learning [62], in which data values are jointly learned with the predictor model using a data value estimator that is trained using a reinforcement signal reflecting task performance. Similarly, Learn2Augment [13] performs data valuation of augmented samples created by combining foreground and background videos using reinforcement learning to quantify the value of an augmented sample.
Synthetic sample selection [60] for medical image segmentation is an under-investigated research area that focuses on the quality control of synthetic images for data augmentation purposes. Synthetic images are not always realistic and may contain misleading features that distort data distribution when mixed with real images. As a result, the effectiveness of synthetic images in medical image recognition systems cannot be ensured when they are randomly added without quality assurance. A reinforcement learning-based synthetic sample selection approach is proposed in which synthetic images containing reliable and informative features are chosen.
However, none of the above approaches consider the extreme case of zero-shot learning. In the case of zero-shot learning, synthetic features are often biased towards seen classes, and the generated synthetic features do not represent the true distribution. Training a model to generate realistic-looking features will produce features similar to the training distribution without any guarantee on the effect on zero-shot evaluation. Therefore, we propose a data valuation method for synthetic features based on classification performance rather than their realism.
## 3 Methodology
The overall framework of the proposed method is visually depicted in Figure 2, which provides an illustrative overview of the various components employed. In this section, we delve deeper into the individual constituents of the model and explore in detail the novel SPOT selector that has been put forth. It is imperative to note that the proposed pipeline is model and data-agnostic, which implies that the choice of classifier model and network backbone is dependent solely on the feature-generating framework itself.
Figure 2: Overall pipeline of our proposed SPOT. The feature generator generates features that the selector module ranks based on the seen class classifier’s performance. The selector is updated based on the performance of the classifier on the selected features. The proposed pipeline is model and data-agnostic.
The development of the SPOT selector draws significant inspiration from the synthetic sample selector introduced in [60]. However, our approach seeks to tackle the more challenging task of zero-shot learning, where data from unseen classes is limited. Furthermore, we demonstrate that training the selector using data from seen classes can enhance the selection of better features for data from unseen classes. This approach represents a significant contribution and serves to bridge the gap between the seen and unseen class data.
### Feature Generating Network (FGN)
As previously highlighted, it is worth noting that the proposed pipeline is entirely independent of the feature-generating approach itself. This aspect renders the framework highly versatile and adaptable to a diverse range of feature-generating models, which may be employed in place of the FGN utilized in this study.
Examples of alternate feature-generating models that could be employed include the WGAN [52] and Cycle-WGAN [6]. The utilization of such models would permit the proposed pipeline to be seamlessly integrated into a broader range of applications and extend the reach and scope of the framework. The flexibility afforded by this design decision represents a critical feature of the proposed pipeline and enables the framework to be readily adapted to a range of diverse use cases as shown with consistent improvements in multiple image and video datasets.
### Selector
The reasoning behind this choice of selector is rooted in the interdependence among the candidate features. We hypothesise that the sequence in which the features are generated is not completely autonomous, as the later additions must differentiate themselves from the earlier ones in order to ensure diversity across the entire set of augmented training data. We use a transformer-based architecture [5] as our selector. The input is a feature vector of a dimension dependent on the FGN used. The goal of the selector is to decide whether a generated feature vector is useful for classification performance; to do this, the selector outputs a binary action: select or not select. However, we do not have ground truth telling us how good a generated feature is, and hence optimizing the selector is not trivial.
To address the possibility of a relationship between generated features without relying heavily on sequential assumptions, we utilize the self-attention mechanism through the implementation of the transformer [5] model as our selector. The transformer architecture eliminates all recurrent structures, requiring feature vectors to be combined with their positional embeddings using sinusoidal functions prior to being input into the encoder layer of the transformer. The primary component of the transformer encoder is the multi-head attention block, composed of \(n\) self-attention layers, where \(n\) denotes the number of heads. In each self-attention layer, input features are projected to three separate feature spaces (query \(Q\), key \(K\), and value \(V\)) by multiplication with learnable weight matrices. The resulting attention map is obtained as follows:
\[Attention(Q,K,V)=softmax(\frac{QK^{T}}{\sqrt{d_{k}}})V \tag{1}\]
Each head of the multi-head attention block represents a distinct projected feature space for the input, achieved by multiplying the same input embedding with different weight matrices. These separate outputs are then concatenated to form the final attention map, which is expressed as:
\[MultiHead(Q,K,V)=Concat(head_{1},...,head_{h})W^{O} \tag{2}\]
\[head_{i}=Attention(QW_{i}^{Q},KW_{i}^{K},VW_{i}^{V}) \tag{3}\]
Once we obtain the attention map, the context vector is then fed to the feed-forward layer as follows:
\[F(x)=max(((0,xW_{1}+b_{1})W_{2}+b_{2})W_{3}+b_{3}...)W_{n}+b_{n} \tag{4}\]
Given that the objective of the selector is to produce a binary action for every input feature vector, the decoder of the transformer model is a linear layer that functions as the policy network. Overall, the use of the transformer as the selector within our reinforcement learning-based selection framework is beneficial due to its self-attention mechanism, which effectively captures the interdependencies among the input feature vectors. We conducted a thorough ablation study on the choice of selector; this can be seen in Section 4.2.
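A minimal PyTorch sketch of such a selector follows (our reading of the architecture, not the authors' code: the hidden size of 512 is an assumption, the 8 layers and 8 heads match Section 4.1, and the sinusoidal positional embeddings described above are omitted for brevity):

```python
import torch
import torch.nn as nn

class SPOTSelector(nn.Module):
    def __init__(self, feat_dim, d_model=512, n_heads=8, n_layers=8):
        super().__init__()
        self.embed = nn.Linear(feat_dim, d_model)        # project FGN features
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=n_heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.policy_head = nn.Linear(d_model, 2)         # select / not select
        self.value_head = nn.Linear(d_model, 1)          # state-value baseline

    def forward(self, feats):
        # feats: (batch, N candidates, feat_dim) synthetic features
        h = self.encoder(self.embed(feats))
        pi = self.policy_head(h).softmax(dim=-1)         # action probabilities
        v = self.value_head(h).squeeze(-1)               # V_theta per candidate
        return pi, v
```

The policy head plays the role of the linear decoder described above, while the value head produces the baseline \(V_{\theta}\) used in the next subsection.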
To optimize the selector, we turn to reinforcement learning as this is a common solution [13, 62]. In particular, we use proximal policy optimization [42]. Details are explained in the next section.
### Proximal Policy Optimization
As previously stated, we turn to a reinforcement learning approach to update the selector model. A proficient policy gradient method is fundamental to effectively utilize reward feedback as input to the selector in the reinforcement learning process. Among various policy gradient algorithms, Proximal Policy Optimization (PPO) [42] has
gained popularity due to its computational efficiency and satisfactory performance, surpassing previous approaches like TRPO [41]. Additionally, PPO alleviates the instability encountered during RL training. PPO achieves comparable performance with reduced complexity by replacing the KL divergence constraint enforced in TRPO with a clipped probability ratio between the current and previous policy within a small interval around 1. At each time step \(t\), with \(A_{\theta}\) representing the advantage function, the objective function is defined as follows:
\[L(\theta)=E[min(\gamma_{\theta}(t)A_{\theta}(s_{t},a_{t}),\\ clip(\gamma_{\theta}(t),1-\epsilon,1+\epsilon)A_{\theta}(s_{t},a_{t}))] \tag{5}\]
Here, \(A_{\theta}(s_{t},a_{t})=Q_{\theta}(s_{t},a_{t})-V_{\theta}(s_{t},a_{t})\). As a component of the transformer output, the learned state-value \(V_{\theta}(s_{t},a_{t})\) serves as a baseline for the q-value to mitigate the variance of rewards during the training process. The probability of actions is denoted by \(\pi\). The q-value at time \(t\), \(Q_{\theta}(s_{t},a_{t})\), is defined as a smoothed version of the maximum validation accuracy observed among the last five epochs in the classification task. As our target tasks are trained on the seen class data and need to generalize to the unseen class data, it is crucial to obtain a robust estimation of the reward's changing pattern. To achieve this, we employ the Exponential Moving Average (EMA) algorithm to smooth the original reward curve. Thus, the final reward at time \(t\) is obtained as follows:
\[\hat{Q}_{\theta}\left(s_{t},a_{t}\right)=\begin{cases}Q_{\theta}\left(s_{t},a_{t}\right),&t=1\\ \alpha\hat{Q}_{\theta}\left(s_{t-1},a_{t-1}\right)+(1-\alpha)Q_{\theta}\left(s_{t},a_{t}\right),&t>1.\end{cases} \tag{6}\]
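Concretely, Eq. (6) is a standard exponential moving average over the raw reward sequence; a minimal sketch (the function name is ours; \(\alpha=0.5\) is the value later chosen in Section 4.1):

```python
def ema_rewards(q_values, alpha=0.5):
    """Smooth the raw rewards Q(s_t, a_t) into Q_hat(s_t, a_t) as in Eq. (6)."""
    smoothed = []
    for t, q in enumerate(q_values):
        if t == 0:
            smoothed.append(q)                        # Q_hat = Q at the first step
        else:
            smoothed.append(alpha * smoothed[-1] + (1 - alpha) * q)
    return smoothed
```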
Drawing inspiration from the concept of importance sampling, the weight assigned to the current policy is influenced by earlier policies. The probability ratio between the previous and current policies, denoted by \(\gamma_{\theta}(t)\), is mathematically defined as:
\[\gamma_{\theta}(t)=\frac{\pi_{\theta}\left(a_{t}\mid s_{t}\right)}{\pi_{\theta_{\mathrm{old}}}\left(a_{t}\mid s_{t}\right)} \tag{7}\]
Here, \(a_{t}\in\mathbb{R}^{N\times 2}\), where \(N\) is the number of synthetic samples in the candidate pool. If, at any given timestep \(t\), \(a_{i}(t)=0\), then sample \(i\) is discarded; otherwise, it is added to the original training set.
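Putting Eqs. (5)-(7) together, one selector update might look like the following sketch (our illustrative code, not the authors' implementation: tensor shapes, variable names and the omission of a value-function loss are assumptions; \(\epsilon=0.15\) follows Section 4.1):

```python
import torch

def ppo_step(new_logp, old_logp, q_hat, value, optimizer, eps=0.15):
    """One clipped-PPO update (Eq. (5)) for the selector's binary actions.
    new_logp / old_logp: log pi(a_t | s_t) under the current / previous policy;
    q_hat: EMA-smoothed reward from Eq. (6); value: V_theta from the selector."""
    advantage = (q_hat - value).detach()              # A = Q - V
    ratio = torch.exp(new_logp - old_logp)            # gamma_theta(t), Eq. (7)
    unclipped = ratio * advantage
    clipped = torch.clamp(ratio, 1 - eps, 1 + eps) * advantage
    loss = -torch.min(unclipped, clipped).mean()      # negate to maximize Eq. (5)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

A full PPO implementation would typically also regress the value head toward \(\hat{Q}\) and add an entropy bonus; both are omitted here for brevity.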
## 4 Experimental Analysis
### 4.1 Implementation Details
Since we propose a plug-and-play component to feature-generating networks, the backbone and technical details follow the exact same implementation that the feature-generating model uses. Here, we talk about the technical details of running SPOT.
The candidate features generated by the feature-generating framework are passed into the selector network, which is an 8-layer encoder with an 8-head multi-head attention block. The outputs are an action vector \(A_{\theta}\) and a value vector \(V_{\theta}\), which are used together to calculate the reward for the policy gradient algorithm.
The classifier network depends on the feature-generating network being used since our proposed method is model agnostic. Similar to [13, 60] we use the EMA-smoothed validation accuracy obtained from the last 5 epochs as the reward with \(\alpha\)=0.5 when using the classifier on the validation set. As long as this average is increasing, we continue updating our policy. The policy function \(\pi\) is obtained from the softmax layer of the seen-class classifier model.
\(\epsilon\) in Eq. 5 is set to 0.15 (see Ablations for an empirical comparison); this helps to set upper and lower bounds on the ratio of the policy function between the current time step \(t\) and the previous one \(t-1\). The number of selected synthetic features is dependent on the selector and varies according to model and dataset. However, we set the learning rate for the PPO to a fixed 2e-04.
### 4.2 Ablation Study
We have made a few choices with regards to the hyperparameter selection and choice of RL optimization algorithms and in this section, we show empirical reasons why the choices were made. Figure 3 shows the performance differences when using different RL optimization algorithms to modify the selector. Similar to [60], we also consider alternative choices such as GRU and LSTMs for the selector. In terms of RL algorithms, we compare to REINFORCE [46] and TRPO [41].
Figure 3: Ablation comparison of different combinations of RL algorithms with selector choices (GRU or Transformer). ‘G’ = GRU, ‘Tr’ = Transformer, ‘R’ = REINFORCE, ‘T’ = TRPO and ‘P’ = PPO.
### 4.3 Images
#### 4.3.1 Datasets and Evaluation Protocol
Our method is evaluated on four challenging benchmark datasets, namely AWA [27], CUB (Caltech UCSD Birds 200) [49], SUN (SUN Attribute) [55], and FLO [35]. CUB and SUN are fine-grained datasets, while AWA and FLO are coarse-grained datasets. We adopt the same seen/unseen splits and class embeddings to ensure consistency with previous work as in [53]. AWA1 contains 30,475 instances across 50 categories. CUB comprises 11,788 images of 200 bird classes (150/50 for seen/unseen classes) with 312 attributes. SUN contains 14,340 images from 717 scene classes (645/72 for seen/unseen classes) with 102 attributes. FLO consists of 8,189 images from 102 flower classes with an 82/20 class split for seen and unseen classes, respectively. These datasets are widely used in the literature, enabling a direct comparison of our results with those of previous studies.
#### 4.3.2 Zero-Shot Learning
We compare strictly with recent state-of-the-art feature-generating approaches and as such compare to WGAN [52], Cycle-WGAN [6], f-VAEGAN [54], NereNet [29] and CMC-GAN [58]. Table 1 shows the results. We see consistent gains of up to 4.1% when we add the proposed SPOT to any of the models.
#### 4.3.3 Generalized Zero-Shot Learning
We perform a much more extensive comparison in the generalized setting as this is where most feature generating frameworks perform experiments. We compare against WGAN [52], Cycle-WGAN [6], f-VAEGAN [54], CMC-GAN [58], NereNet [29], FREE [3] and DAA [64]. We see consistent improvements on the unseen class accuracies as this is where selected features make a difference. We see gains of up to 3.3% on the unseen class accuracies. As a result, there is consistent improvement on the harmonic mean of the seen and unseen class accuracies as well.
### 4.4 Videos
#### 4.4.1 Datasets and Evaluation Protocol
For videos, we use the widely adopted Olympic Sports [34], HMDB-51 [25], and UCF-101 [45] datasets to evaluate our method for zero-shot action recognition and compare it against recent state-of-the-art feature generating models [20, 30, 52]. The aforementioned datasets comprise 783, 6766, and 13320 videos and are associated with 16, 51, and 101 classes, respectively. To enable comparison with existing works [20, 30, 32, 33, 52], we adopt the widely used 50/50 splits proposed by Xu et al. [56], where half of the classes are considered as seen and the other half as unseen. We report the average accuracy and standard deviation over 10 independent runs, following previous approaches.
Moreover, we extend our evaluation to include TruZe [16], which was recently introduced to address the issue of overlapping classes between the pre-training dataset (Kinetics [2]) and the unseen classes in zero-shot settings. The TruZe split acknowledges the presence of such overlapping classes, which contradicts the fundamental assumption that the unseen classes have not been previously seen.
#### 4.4.2 Zero-Shot Learning
Table 3 shows the effect of using our proposed SPOT selector to enhance the performance of state-of-the-art feature-generating frameworks on the zero-shot setting. We compare with the most recent best-performing methods, which include the Bi-Dir GAN [32], GGM [33], OD [30], WGAN [52] and FFG [20] (fine-grained feature generation framework).
We observe that the proposed method consistently improves all approaches across all datasets, with gains of up to 4.5%.
#### 4.4.3 Generalized Zero-Shot Learning
We evaluate SPOT on the generalized setting where at test time both seen and unseen class samples are used. Table 4 shows the results, with the harmonic mean of the seen and unseen class accuracies. The proposed SPOT selector consistently improves all approaches across all datasets by gains of up to 4.2%.
### 4.5 Results on TruZe
We also evaluate on the stricter TruZe [16] split that ensures no overlap between the pre-trained model and test classes. Results are shown in Table 5. We only evaluate on OD and WGAN as these are the two feature-generating approaches that have results reported on the TruZe split. Again, we see that using SPOT for selection consistently improves the performance of the feature-generating framework.
\begin{table}
\begin{tabular}{|l|c|c|c|c|} \hline Model & CUB & AWA & SUN & FLO \\ \hline \hline WGAN & 57.3 & 68.2 & 60.8 & 67.2 \\ WGAN + **SPOT** & **60.7** & **71.1** & **63.3** & **69.9** \\ \hline Cycle-WGAN & 57.8 & 65.6 & 59.7 & 68.6 \\ Cycle-WGAN + **SPOT** & **61.1** & **69.7** & **62.5** & **70.9** \\ \hline f-VAEGAN & 61.0 & 71.1 & 64.7 & 67.7 \\ f-VAEGAN + **SPOT** & **62.8** & **72.7** & **66.0** & **69.2** \\ \hline CMC-GAN & 61.4 & 71.4 & 63.7 & 69.8 \\ CMC-GAN + **SPOT** & **62.9** & **73.1** & **65.1** & **71.9** \\ \hline \end{tabular}
\end{table}
Table 1: Results on zero-shot image classification using recent feature-generating frameworks.
## 5 Conclusion
In conclusion, although generative techniques have made significant progress in transforming traditional GZSL to fully supervised learning, they often generate redundant synthetic features, which can lead to reduced accuracy. To overcome this limitation, we have proposed an approach for synthetic feature selection using reinforcement learning, which involves training a transformer-based selector using proximal policy optimization (PPO) to select synthetic features based on the validation classification accuracy of seen classes as the reward. Our proposed method is model-agnostic and data-agnostic and hence is suitable for images and videos. The experimental results of our approach demonstrate its superiority over existing feature-generating methods, with improved overall performance observed across multiple benchmarks. Overall, our approach represents a significant contribution towards addressing the issue of synthetic feature redundancy in GZSL, and we believe that it has the potential to be widely applied in real-world scenarios.
\begin{table}
\begin{tabular}{|l|c|c|c|} \hline Method & Olympics & HMDB51 & UCF101 \\ \hline \hline Bi-Dir GAN [32] & 53.2 \(\pm\) 10.5 & 21.3 \(\pm\) 3.2 & 24.7 \(\pm\) 3.7 \\ Bi-Dir GAN [32] + **SPOT** & **56.6 \(\pm\) 10.1** & **25.1 \(\pm\) 3.4** & **27.7 \(\pm\) 3.5** \\ \hline GGM [33] & 57.9 \(\pm\) 14.1 & 20.7 \(\pm\) 3.1 & 24.5 \(\pm\) 2.9 \\ GGM [33] + **SPOT** & **62.4 \(\pm\) 12.4** & **25.1 \(\pm\) 2.8** & **27.4 \(\pm\) 2.5** \\ \hline OD [30] & 65.9 \(\pm\) 8.1 & 30.2 \(\pm\) 2.7 & 38.3 \(\pm\) 3.0 \\ OD [30] + **SPOT** & **68.7 \(\pm\) 7.5** & **34.4 \(\pm\) 2.2** & **40.9 \(\pm\) 2.6** \\ \hline WGAN [52] & 64.7 \(\pm\) 7.5 & 29.1 \(\pm\) 3.8 & 37.5 \(\pm\) 3.1 \\ WGAN [52] + **SPOT** & **68.1 \(\pm\) 7.1** & **33.8 \(\pm\) 2.4** & **40.6 \(\pm\) 2.4** \\ \hline FFG [20] & - & 32.4 \(\pm\) 2.3 & 27.6 \(\pm\) 2.4 \\ FFG [20] + **SPOT** & - & **35.9 \(\pm\) 2.5** & **30.9 \(\pm\) 2.2** \\ \hline \end{tabular}
\end{table}
Table 3: Results on zero-shot action recognition on the Olympics, HMDB51 and UCF101 datasets.
\begin{table}
\begin{tabular}{|l|c|c|c|c|c|c|c|c|c|c|c|} \hline Model & \multicolumn{4}{c|}{CUB} & \multicolumn{4}{c|}{AWA1} & \multicolumn{4}{c|}{SUN} & \multicolumn{4}{c|}{FLOO} \\ \cline{2-13} & S & U & H & S & U & H & S & U & H & S & U & H \\ \hline WGAN & 43.7 & 57.7 & 49.7 & 57.9 & 61.4 & 59.6 & 42.6 & 36.6 & 39.4 & 59.0 & 73.8 & 65.6 \\ WGAN+**SPOT** & **44.1** & **60.9** & **51.1** & **58.6** & **64.9** & **61.6** & **42.8** & **39.1** & **40.9** & **59.3** & **75.9** & **66.6** \\ \hline Cycle-WGAN & 46.0 & 60.3 & 52.2 & 56.4 & 63.5 & 59.7 & **48.3** & 33.1 & 39.2 & 59.1 & 71.1 & 64.5 \\ Cycle-WGAN+**SPOT** & **46.5** & **62.9** & **53.5** & **56.9** & **66.1** & **61.1** & 48.1 & **36.2** & **41.3** & **59.4** & **74.4** & **66.1** \\ \hline f-VAEGAN & 48.4 & 60.1 & 53.6 & 57.6 & 70.6 & 63.5 & 45.1 & 38.0 & 41.3 & 56.8 & 74.9 & 64.6 \\ f-VAEGAN+**SPOT** & **48.8** & **62.8** & **54.9** & **57.9** & **73.3** & **64.7** & **45.5** & **41.1** & **43.2** & **57.0** & **77.2** & **65.6** \\ \hline CMC-GAN & 52.6 & 65.1 & 58.2 & 63.2 & 70.6 & 66.7 & 48.2 & 40.8 & 44.2 & 64.5 & 80.2 & 71.5 \\ CMC-GAN+**SPOT** & **53.1** & **66.7** & **59.1** & **63.3** & **73.8** & **68.1** & **48.9** & **44.1** & **46.4** & **64.6** & **82.8** & **72.6** \\ \hline NereNET & 51.0 & 56.5 & 53.6 & - & - & - & 45.7 & 38.1 & 41.6 & - & - & - \\ NereNET+**SPOT** & **51.3** & **58.4** & **54.6** & - & - & - & **45.9** & **40.4** & **43.0** & - & - & - \\ \hline FREE & **55.7** & 59.9 & 57.7 & 62.9 & 69.4 & 66.0 & 47.4 & 37.2 & 41.7 & 67.4 & 84.5 & 75.0 \\ FREE+**SPOT** & 55.5 & **62.2** & **58.6** & **63.1** & **72.1** & **67.3** & **47.8** & **39.9** & **43.5** & **67.8** & **86.3** & **75.9** \\ \hline DAA & 66.1 & 65.5 & 65.8 & 64.3 & 76.6 & 69.9 & 47.8 & 38.7 & 42.8 & - & - & - \\ DAA+**SPOT** & **66.3** & **67.7** & **67.0** & **64.6** & **77.9** & **70.6** & **48.1** & **40.3** & **43.8** & - & - & - \\ \hline \end{tabular}
\end{table}
Table 2: Results on generalized zero-shot image classification on 4 challenging benchmarks.
Table 4: Results on generalized zero-shot setting. Reported results are the harmonic mean of the seen and unseen class accuracies.
\begin{table}
\begin{tabular}{|l|c|c|c|c|} \hline Method & \multicolumn{2}{c|}{UCF101} & \multicolumn{2}{c|}{HMDB51} \\ & ZSL & GZSL & ZSL & GZSL \\ \hline \hline WGAN & 22.5 & 36.3 & 21.1 & 31.8 \\ WGAN + **SPOT** & **25.3** & **39.1** & **23.8** & **33.3** \\ \hline OD & 22.9 & 42.4 & 21.7 & 35.5 \\ OD + **SPOT** & **25.5** & **44.1** & **24.0** & **37.1** \\ \hline \end{tabular}
\end{table}
Table |
2310.18759 | Compatible Poisson Brackets Associated with Elliptic Curves in $G(2,5)$ | We prove that a pair of Feigin-Odesskii Poisson brackets on ${\mathbb P}^4$
associated with elliptic curves given as linear sections of the Grassmannian
$G(2,5)$ are compatible if and only if this pair of elliptic curves is
contained in a del Pezzo surface obtained as a linear section of $G(2,5)$. | Nikita Markarian, Alexander Polishchuk | 2023-10-28T17:02:18Z | http://arxiv.org/abs/2310.18759v3 | # Compatible Poisson brackets associated with elliptic curves in \(G(2,5)\)
###### Abstract.
We prove that a pair of Feigin-Odesskii Poisson brackets on \(\mathbb{P}^{4}\) associated with elliptic curves given as linear sections of the Grassmannian \(G(2,5)\) are compatible if and only if this pair of elliptic curves is contained in a del Pezzo surface obtained as a linear section of \(G(2,5)\).
A.P. is supported in part by the NSF grant DMS-2001224, and within the framework of the HSE University Basic Research Program.
## 1. Introduction
**Corollary C**. _The maximal dimension of a linear subspace of Poisson brackets on \(\mathbb{P}(V)\), where \(\dim V=5\), spanned by some FO brackets \(\Pi_{W}\) of type \(q_{5,2}\), is \(6\)._
Theorems A and B suggest the following
**Conjecture D**. _Let \(W\subset\bigwedge^{2}V\) be a \(5\)-dimensional subspace such that \(E_{W}\) is an elliptic curve. Consider the subspace_
\[T_{W}:=(\bigwedge^{4}\!W)\wedge(\bigwedge^{2}\!V)\subset\bigwedge^{5}(\bigwedge^ {2}\!V)\]
_(the quotient of the latter subspace by \(\bigwedge^{5}\!W\) is exactly the image of the tangent space to the Grassmannian \(G(5,\bigwedge^{2}\!V)\) under Plucker embedding). Then the subspace of \(\xi\in\bigwedge^{5}(\bigwedge^{2}\!V)\) satisfying \([\pi_{5,2}(\xi),\Pi_{W}]=0\) coincides with \(T_{W}+\ker(\pi_{5,2})\)._
Note that we know the inclusion one way: the subspace \(T_{W}\) is spanned by the lines \(\bigwedge^{5}(W^{\prime})\) for subspaces \(W^{\prime}\) such that \(\dim(W^{\prime}\cap W)\geq 4\) and \(E_{W^{\prime}}\) is an elliptic curve, and by Theorems A and B, \([\pi_{5,2}(\bigwedge^{5}(W^{\prime})),\Pi_{W}]=0\).
_Acknowledgments._ We are grateful to Volodya Rubtsov for useful discussions. N.M. would like to thank the Max Planck Institute for Mathematics for hospitality and perfect work conditions.
## 2. Generalities
### Feigin-Odesskii Poisson brackets of type \(q_{n,k}\)
Let \(E\) be an elliptic curve, with a fixed trivialization \(\eta:\mathcal{O}_{E}\to\omega_{E}\), \(\mathcal{V}\) a stable bundle on \(E\) of rank \(k\) and degree \(n>0\). We consider the corresponding Feigin-Odesskii Poisson bracket \(\Pi=\Pi_{E,\mathcal{V}}\) of type \(q_{n,k}\) on the projective space \(\mathbb{P}H^{1}(E,\mathcal{V}^{\vee})\) defined as in [10].
We will need the following definition of \(\Pi\) in terms of triple Massey products. For nonzero \(\phi\in H^{1}(E,\mathcal{V}^{\vee})\), we denote by \(\langle\phi\rangle\) the corresponding line, and we use the identification of the cotangent space to \(\langle\phi\rangle\) with \(\langle\phi\rangle^{\perp}\subset H^{0}(E,\mathcal{V})\) (where we use the Serre duality \(H^{0}(E,\mathcal{V})\simeq H^{1}(E,\mathcal{V}^{\vee})^{*}\)).
**Lemma 2.1.1**.: _([3, Lem. 2.1]) For \(s_{1},s_{2}\in\langle\phi\rangle^{\perp}\) one has_
\[\Pi_{\phi}(s_{1}\wedge s_{2})=\langle\phi,MP(s_{1},\phi,s_{2})\rangle,\]
_where \(MP\) denotes the triple Massey product for the arrows_
### Formula for a family of complete intersections
Let \(X\) be a smooth projective variety of dimension \(n\), \(C\subset X\) a connected curve given as the zero locus of a regular section \(F\) of a vector bundle \(N\) of rank \(n-1\), such that \(\det(N)^{-1}\simeq\omega_{X}\). Then the normal bundle to \(C\) is isomorphic to \(N|_{C}\), so by the adjunction formula, \(\omega_{C}\) is trivial, so \(C\) is an elliptic curve. Assume that \(P\) is a vector bundle on \(X\), such that the following cohomology vanishing holds:
\[H^{i}(X,\bigwedge^{i}\!N^{\vee}\otimes P)=H^{i-1}(X,\bigwedge^{i}\!N^{\vee}\otimes P)=0\ \ \text{for}\ 1\leq i\leq n-1. \tag{2.1}\]
We have the following Koszul resolution for \(\mathcal{O}_{C}\):
\[0\to\bigwedge^{n-1}N^{\vee}\to\ldots\to\bigwedge^{2}N^{\vee}\xrightarrow{\delta_{2} (F)}N^{\vee}\xrightarrow{\delta_{1}(F)}\mathcal{O}_{X}\to\mathcal{O}_{C}\to 0,\]
which induces a map \(e_{C}:\mathcal{O}_{C}\to\bigwedge^{n-1}N^{\vee}[n-1]\) in the derived category of \(X\). Here the differential \(\delta_{i}(F)\) is given by the contraction with \(F\in H^{0}(X,N)\), so it depends linearly on \(F\).
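Explicitly, \(\delta_{i}(F)\) is the contraction operator
\[\delta_{i}(F)(\xi)=\iota_{F}\xi,\qquad(\iota_{F}\xi)(v_{1},\ldots,v_{i-1})=\xi(F,v_{1},\ldots,v_{i-1})\quad\text{for }\xi\in\bigwedge^{i}\!N^{\vee},\]
so that \(\delta_{i-1}(F)\circ\delta_{i}(F)=0\), and the complex is exact away from \(C\) precisely because the section \(F\) is regular.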
**Lemma 2.2.1**.: _(i) The natural restriction map \(H^{0}(X,P)\to H^{0}(C,P|_{C})\) and the map_
\[\operatorname{Ext}^{1}(P,\mathcal{O}_{C})\xrightarrow{e_{C}}\ \operatorname{Ext}^{n}(P,\bigwedge^{n-1}N^{\vee})\simeq \operatorname{Ext}^{n}(P,\omega_{X})\]
_are isomorphisms. These maps are dual via the Serre duality isomorphisms_
\[\operatorname{Ext}^{1}(P|_{C},\mathcal{O}_{C})\simeq H^{0}(C,P|_{C})^{*},\ \ \operatorname{Ext}^{n}(P,\omega_{X})\simeq H^{0}(X,P)^{*}.\]
_(ii) Assume in addition that \(\operatorname{End}(P)=\mathbf{k}\) and we have the following vanishing:_
\[\operatorname{Ext}^{i}(P,\bigwedge^{i}N^{\vee}\otimes P)= \operatorname{Ext}^{i-1}(P,\bigwedge^{i}N^{\vee}\otimes P)=0\ \text{ for }1\leq i\leq n-1. \tag{2.2}\]
_Then the bundle \(P|_{C}\) is stable._
Proof.: (i) This is obtained from the Koszul resolution of \(\mathcal{O}_{C}\).
(ii) Computing \(\operatorname{Hom}(P|_{C},P|_{C})=\operatorname{Hom}(P,P|_{C})\) using the Koszul resolution of \(P|_{C}=P\otimes\mathcal{O}_{C}\), we get that it is \(1\)-dimensional. Hence, \(P|_{C}\) is stable.
Now we can rewrite the formula of Lemma 2.1.1 for the FO-bracket \(\Pi_{C,P|_{C}}\) on \(\mathbb{P}H^{1}(C,P^{\vee}|_{C})\simeq\mathbb{P}\operatorname{Ext}^{n}(P, \omega_{X})\) in terms of higher products on \(X\) (obtained by the homological perturbation from a dg-enhancement of \(D^{b}(\operatorname{Coh}(X))\)).
**Proposition 2.2.2**.: _For nonzero \(\phi\in\operatorname{Ext}^{n}(P,\omega_{X})\simeq\operatorname{Ext}^{1}_{C}(P| _{C},\mathcal{O}_{C})\), and \(s_{1},s_{2}\in\langle\phi\rangle^{\perp}\subset H^{0}(X,P)\), one has_
\[\Pi_{C,P|_{C},\phi}(s_{1}\wedge s_{2})=\pm\langle\phi,\sum_{i=1}^ {n}(-1)^{i}m_{n+2}(\delta_{1}(F),\ldots,\delta_{i-1}(F),s_{1},\delta_{i}(F), \ldots,\delta_{n-1}(F),\phi,s_{2})\rangle.\]
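As a consistency check, for \(n=1\) (so that \(C=X\) and there are no differentials \(\delta_{i}(F)\)) the formula reduces to
\[\Pi_{C,P|_{C},\phi}(s_{1}\wedge s_{2})=\pm\langle\phi,m_{3}(s_{1},\phi,s_{2})\rangle,\]
recovering the Massey product formula of Lemma 2.1.1.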
Proof.: The computation is completely analogous to that of [8, Prop. 3.1], so we will only sketch it. First, one shows that our Massey product can be computed as the triple product \(m_{3}\) for the arrows
\[\mathcal{O}_{X}\to P\xrightarrow{[1]}\mathcal{O}_{C}\to P|_{C}\]
given by \(s_{2}\), \(\phi\) and \(s_{1}\). Then we use the resolutions \(\bigwedge^{\bullet}N^{\vee}\to\mathcal{O}_{C}\) and \(\bigwedge^{\bullet}N^{\vee}\otimes P\to P|_{C}\). Thus, we have to calculate the corresponding triple product in the category of twisted complexes, where we view \(\phi\) as a morphism of degree \(1\) from \(P\) to the twisted complex \(\bigoplus\bigwedge^{i}N^{\vee}[i]\). Now, the result follows from the formula for \(m_{3}\) on twisted complexes (see [5, Sec. 7.6]).
### 2.3. Conormal Lie algebra
Let \(\mathcal{V}\) be a stable bundle of positive degree on an elliptic curve \(E\), with a fixed trivialization of \(\omega_{E}\), and consider the corresponding FO bracket \(\Pi\) on the projective space \(X=\mathbb{P}H^{0}(\mathcal{V})^{*}=\mathbb{P}\operatorname{Ext}^{1}(\mathcal{ V},\mathcal{O})\). Recall that for every point \(x\) of a smooth Poisson variety \((X,\Pi)\) there is a natural Lie algebra structure on
\[\mathfrak{g}_{x}:=(\operatorname{im}\Pi_{x})^{\perp}\subset T_{x}^{*}X,\]
where we consider \(\Pi_{x}\) as a map \(T_{x}^{*}X\to T_{x}X\). We call \(\mathfrak{g}_{x}\) the _conormal Lie algebra_. In the case when \(\Pi\) vanishes at \(x\), we have \(\mathfrak{g}_{x}=T_{x}^{*}X\).
Let us consider a nontrivial extension
\[0\to\mathcal{O}\xrightarrow{i}\widetilde{\mathcal{V}}\xrightarrow{p}\mathcal{V}\to 0\]
with the class \(\phi\in\operatorname{Ext}^{1}(\mathcal{V},\mathcal{O})\). By Serre duality, we have the corresponding hyperplane \(\langle\phi\rangle^{\perp}\subset H^{0}(\mathcal{V})\), and we have an identification \(\langle\phi\rangle^{\perp}\simeq T_{\phi}^{*}\mathbb{P}H^{0}(\mathcal{V})^{*}\).
Consider a natural map
\[\operatorname{End}(\widetilde{\mathcal{V}})/\langle\operatorname{id}\rangle \to\langle\phi\rangle^{\perp}\simeq T_{\phi}^{*}\mathbb{P}H^{0}(\mathcal{V})^ {*}:A\mapsto p\circ A\circ i. \tag{2.3}\]
The following result was proved in [2].
**Theorem 2.3.1**.: _The above map induces an isomorphism of Lie algebras from \(\operatorname{End}(\widetilde{\mathcal{V}})/\langle\operatorname{id}\rangle\) to the conormal Lie algebra of \(\Pi\) at the point \(\phi\)._
Note that in particular, the subspace \((\operatorname{im}\Pi_{\phi})^{\perp}\subset\langle\phi\rangle^{\perp}\) is equal to the image of the map (2.3).
## 3. FO brackets associated with elliptic curves in \(G(2,5)\)
### 3.1. Proof of Theorem A
**Lemma 3.1.1**.: _The subset \(Z\subset\operatorname{Gr}(5,\bigwedge^{2}V)\) of \(5\)-dimensional subspaces \(W\subset\bigwedge^{2}V\) such that \(\dim(\mathbb{P}W\cap G(2,V))\geq 2\) has codimension \(>1\)._
Proof.: Let us denote by \(F\) the variety of flags \(L\subset W\subset\bigwedge^{2}V\), where \(\dim(L)=3\), \(\dim(W)=5\), such that \(\mathbb{P}L\cap G(2,V)\neq\emptyset\). We claim that \(F\) is irreducible of dimension \(\leq 30\). Note that we have a proper closed subset \(\widetilde{Z}\subset F\) consisting of \((L,W)\) such that \(\dim(\mathbb{P}W\cap G(2,V))\geq 2\) (as an example of a point in \(F\setminus\widetilde{Z}\), we can take \(W\) such that \(E_{W}=\mathbb{P}W\cap G(2,V)\) is an elliptic curve and pick \(\mathbb{P}L\subset\mathbb{P}W\) intersecting \(E_{W}\)). Since \(\widetilde{Z}\) fibers over \(Z\) with fibers \(\operatorname{Gr}(3,5)\), our claim would imply that \(\dim(\widetilde{Z})=\dim Z+6<30\), i.e., \(\dim Z<24\), as required.
To estimate the dimension of \(F\), we observe that we have a fibration \(F\to Y\) with fibers \(G(2,7)\), where \(Y\subset\operatorname{Gr}(3,\bigwedge^{2}V)\) is the subvariety of \(3\)-dimensional subspaces \(L\) such that \(\mathbb{P}L\cap G(2,V)\neq\emptyset\). Thus, it is enough to prove that \(Y\) is irreducible of dimension \(\leq 20\). Now we use a surjective map \(\widetilde{Y}\to Y\), where \(\widetilde{Y}\) is the variety of flags \(\ell\subset L\subset\bigwedge^{2}V\), where \(\dim(\ell)=1\), \(\dim(L)=3\), such that \(\ell\in G(2,V)\). We have a fibration \(\widetilde{Y}\to G(2,V)\) with fibers \(G(2,9)\), hence \(\widetilde{Y}\) is irreducible of dimension \(6+14=20\). Hence, \(Y\) is irreducible of dimension \(\leq 20\).
Proof of Theorem A.: First, we can apply Proposition 2.2.2 to an elliptic curve \(E_{W}\subset X=G(2,V)\). Namely, as a bundle \(P\) on \(X\) we take \(\mathcal{U}^{\vee}\), the dual of the universal subbundle. We can view the embedding
\[R:=W^{\perp}\to\bigwedge^{2}V^{*}=H^{0}(X,\mathcal{O}(1)),\]
where \(\mathcal{O}(1)=\det(\mathcal{U}^{\vee})\), as a regular section \(F\in H^{0}(X,N)\), where \(N=R^{*}\otimes\mathcal{O}(1)\). It is easy to see that we have a \(\operatorname{GL}(V)\)-invariant identification
\[\omega_{X}\simeq\det(V)^{-2}\otimes\mathcal{O}(-5).\]
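Indeed, writing \(\mathcal{Q}\) for the universal quotient bundle, so that \(0\to\mathcal{U}\to V\otimes\mathcal{O}\to\mathcal{Q}\to 0\) and \(T_{X}\simeq\mathcal{U}^{\vee}\otimes\mathcal{Q}\), one computes
\[\det T_{X}\simeq\det(\mathcal{U}^{\vee})^{3}\otimes\det(\mathcal{Q})^{2}\simeq\det(\mathcal{U}^{\vee})^{5}\otimes\det(V)^{2}=\mathcal{O}(5)\otimes\det(V)^{2},\]
using \(\det(\mathcal{Q})\simeq\det(V)\otimes\det(\mathcal{U}^{\vee})\).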
Thus, by adjunction we get an isomorphism
\[\omega_{E_{W}}\simeq\det(N)\otimes\omega_{X}|_{E_{W}}\simeq\det(R^{*})\otimes \det(V)^{-2}\otimes\mathcal{O}_{E_{W}}.\]
Since \(\det(R^{*})\simeq\det(\bigwedge^{2}V)\otimes\det(W^{*})\simeq\det(V)^{4} \otimes\det(W^{*})\), we can rewrite this as
\[\omega_{E_{W}}\simeq\det(W^{*})\otimes\det(V)^{2}\otimes\mathcal{O}_{E_{W}}. \tag{3.1}\]
The vanishings (2.1) and (2.2) in this case follow from the well known vanishings
\[H^{*}(X,\mathcal{U}^{\vee}(-i))=0,\ \ \text{for}\ 1\leq i\leq 5,\]
\[\operatorname{Ext}^{*}(\mathcal{U}^{\vee},\mathcal{U}^{\vee}(-i))=0,\ \text{for}\ 1\leq i\leq 3,\ \operatorname{Ext}^{<6}(\mathcal{U}^{\vee},\mathcal{U}^{\vee}(-4))= \operatorname{Ext}^{<6}(\mathcal{U}^{\vee},\mathcal{U}^{\vee}(-5))=0\]
(see [4]). Thus, Proposition 2.2.2 gives a formula for \(\Pi_{W}\).
This shows that the association \(W\mapsto\Pi_{W}\) gives a regular morphism
\[f:\operatorname{Gr}(5,\bigwedge^{2}V)\to\mathbb{P}H^{0}(\mathbb{P}V,\bigwedge^{2} T).\]
Furthermore, we claim that
\[f^{*}\mathcal{O}(1)\simeq\mathcal{O}_{\operatorname{Gr}(5,\bigwedge^{2}V)}(1) \otimes\det(V)^{-2}.\]
Indeed, we have a family of Gorenstein curves \(\pi:\mathcal{C}\to B=\operatorname{Gr}(5,\bigwedge^{2}V)\setminus Z\), where \(Z\) was defined in Lemma 3.1.1, such that
\[\omega_{\mathcal{C}/B}\simeq\pi^{*}(\mathcal{O}(1)\otimes\det(V)^{2}).\]
Indeed, this is implied by the argument leading to (3.1), which works for any curve (not necessarily smooth) cut out by \(\mathbb{P}W\) in \(G(2,V)\). Now [3, Prop. 4.1] implies that the relation \(f^{*}\mathcal{O}(1)=\mathcal{O}(1)\otimes\det(V)^{-2}\) holds over \(\operatorname{Gr}(5,\bigwedge^{2}V)\setminus Z\). Since \(Z\) has codimension \(\geq 2\) by Lemma 3.1.1, it holds over the entire \(\operatorname{Gr}(5,\bigwedge^{2}V)\).
Next, since \(H^{0}(\operatorname{Gr}(5,\bigwedge^{2}V),\mathcal{O}(1))\simeq\bigwedge^{5} (\bigwedge^{2}V)^{*}\), the map \(f\) is given by a \(\operatorname{GL}(V)\)-invariant linear map
\[\bigwedge^{5}(\bigwedge^{2}V)\to H^{0}(\mathbb{P}V,\bigwedge^{2}T)\otimes\det( V)^{2}.\]
To show that this map coincides with \(\pi_{5,2}\), up to a constant factor, it remains to show that the space \(\operatorname{Hom}_{\operatorname{GL}(V)}(\bigwedge^{5}(\bigwedge^{2}V),H^{0}(\mathbb{P}V,\bigwedge^{2}T)\otimes\det(V)^{2})\) is \(1\)-dimensional.
The representation of \(\operatorname{GL}(V)\) on \(H^{0}(\mathbb{P}V,\bigwedge^{2}T)\) is easy to identify due to the exact sequence
\[0\to\mathbf{k}\to V\otimes V^{*}\to\bigwedge^{2}V\otimes S^{2}V^{*}\to H^{0}(\mathbb{P}V,\bigwedge^{2}T)\to 0.\]
Using the Littlewood-Richardson rule, we deduce
\[H^{0}(\mathbb{P}V,\bigwedge^{2}T)\otimes\det(V^{*})\simeq\Sigma^{3,1,1}(V^{*}),\]
where \(\Sigma^{\lambda}\) denotes the Schur functor associated with a partition \(\lambda\). It follows that
\[H^{0}(\mathbb{P}V,\bigwedge^{2}T)\otimes\det(V)^{2}\simeq\Sigma^{3,3,2,2}(V).\]
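Here we use the standard duality for Schur functors of a \(5\)-dimensional space, \(\Sigma^{\lambda}(V^{*})\simeq\Sigma^{\mu}(V)\otimes\det(V)^{-\lambda_{1}}\) with \(\mu_{i}=\lambda_{1}-\lambda_{6-i}\): for \(\lambda=(3,1,1,0,0)\) this gives
\[\Sigma^{3,1,1}(V^{*})\otimes\det(V)^{3}\simeq\Sigma^{(3,3,2,2,0)}(V)=\Sigma^{3,3,2,2}(V).\]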
On the other hand, the decomposition of the plethysm \(e_{5}\circ e_{2}\) (see [6, Ex. I.8.6]) shows that \(\Sigma^{3,3,2,2}(V)\) appears with multiplicity \(1\) in the \(\operatorname{GL}(V)\)-representation \(\bigwedge^{5}(\bigwedge^{2}V)\). This implies the claimed assertion about \(\operatorname{GL}(V)\)-maps.
### 3.2. Rank stratification for a bracket of type \(q_{5,2}\)
Let \(E\) be an elliptic curve, \(\mathcal{V}\) be a stable vector bundle of rank \(2\) and degree \(5\). We consider the FO bracket \(\Pi\) on the projective space \(\mathbb{P}\operatorname{Ext}^{1}(\mathcal{V},\mathcal{O})\simeq\mathbb{P}H^ {0}(\mathcal{V})^{*}\). We want to describe the corresponding rank stratification of \(\mathbb{P}H^{0}(\mathcal{V})^{*}=\mathbb{P}^{4}\). For every point \(p\in E\), we consider the subspace \(L_{p}:=\mathcal{V}|_{p}^{*}\subset H^{0}(\mathcal{V})^{*}\) and the corresponding projective line \(\mathbb{P}L_{p}\subset\mathbb{P}H^{0}(\mathcal{V})^{*}\).
Recall that the rank of \(\Pi\) at a point corresponding to an extension \(\widetilde{\mathcal{V}}\) is equal to \(5-\dim\operatorname{End}(\widetilde{\mathcal{V}})\) (see [3, Prop. 2.3]).
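Since the rank of a bivector is even, \(\dim\operatorname{End}(\widetilde{\mathcal{V}})\in\{1,3,5\}\), and the stratification reads
\[\operatorname{rk}\Pi=5-\dim\operatorname{End}(\widetilde{\mathcal{V}})=\begin{cases}4,&\dim\operatorname{End}(\widetilde{\mathcal{V}})=1\text{ (i.e., }\widetilde{\mathcal{V}}\text{ stable)},\\ 2,&\dim\operatorname{End}(\widetilde{\mathcal{V}})=3,\\ 0,&\dim\operatorname{End}(\widetilde{\mathcal{V}})=5,\end{cases}\]
where the last two strata are described in Lemma 3.2.1 below.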
**Lemma 3.2.1**.: _(i) The bracket \(\Pi\) vanishes at the point of \(\mathbb{P}\operatorname{Ext}^{1}(\mathcal{V},\mathcal{O})\) corresponding to an extension_
\[0\to\mathcal{O}\to\widetilde{\mathcal{V}}\to\mathcal{V}\to 0\]
_if and only if this extension splits under \(\mathcal{O}\to\mathcal{O}(p)\) for some point \(p\in E\), which happens if and only if \(\widetilde{\mathcal{V}}\simeq\mathcal{O}(p)\oplus\mathcal{V}^{\prime}\), where \(\mathcal{V}^{\prime}\) is semistable of rank \(2\) and degree \(4\). Furthermore, in this case \(\dim\operatorname{End}(\mathcal{V}^{\prime})=2\), so \(\mathcal{V}^{\prime}\) is either indecomposable, or \(\mathcal{V}^{\prime}\simeq L_{1}\oplus L_{2}\), where \(L_{1}\) and \(L_{2}\) are nonisomorphic line bundles of degree \(2\)._
_(ii) The bracket_ \(\Pi\) _has rank_ \(\leq 2\) _if and only if the corresponding extension_ \(\widetilde{\mathcal{V}}\) _is unstable, or equivalently, there exists a line bundle_ \(L_{2}\) _of degree_ \(2\) _such that the extension splits over the unique embedding_ \(L_{2}\hookrightarrow\mathcal{V}\)_. In other words, the extension class comes from a subspace of the form_
\[W_{L_{2}}:=H^{0}(L_{2})^{\perp}\subset H^{0}(\mathcal{V})^{*}=V, \tag{3.2}\]
_where we use the unique embedding \(L_{2}\to\mathcal{V}\) and consider the induced embedding \(H^{0}(L_{2})\hookrightarrow H^{0}(\mathcal{V})\)._
_(iii) Each plane_ \(\mathbb{P}W_{L_{2}}\subset\mathbb{P}V\) _is a Poisson subvariety, and there is an embedding of the curve_ \(E\) _into_ \(\mathbb{P}W_{L_{2}}\) _by a degree_ \(3\) _linear system, so that_ \(\mathbb{P}W_{L_{2}}\setminus E\) _is a symplectic leaf._
Proof.: (i) Suppose a nontrivial extension
\[0\to\mathcal{O}\to\widetilde{\mathcal{V}}\to\mathcal{V}\to 0\]
splits under \(\mathcal{O}\to\mathcal{O}(p)\). Then \(\widetilde{\mathcal{V}}\) is an extension of \(\mathcal{O}(p)\) by \(\mathcal{V}^{\prime}\) where \(\mathcal{V}^{\prime}\subset\mathcal{V}\) is the kernel of the corresponding surjective map \(\mathcal{V}\to\mathcal{O}_{p}\). Hence, \(\mathcal{V}^{\prime}\) is semistable of slope \(2\), which implies that
\[\widetilde{\mathcal{V}}\simeq\mathcal{O}(p)\oplus\mathcal{V}^{\prime}.\]
It follows that \(\dim\operatorname{End}(\mathcal{V}^{\prime})\geq 2\), and so
\[\dim\operatorname{End}(\widetilde{\mathcal{V}})=3+\dim\operatorname{End}( \mathcal{V}^{\prime})\geq 5.\]
Hence, \(\Pi_{E}\) vanishes on the points of the line \(\mathbb{P}L_{p}\subset\mathbb{P}V\), and we have \(\dim\operatorname{End}(\mathcal{V}^{\prime})=2\), which means that either \(\mathcal{V}^{\prime}\) is indecomposable or \(\mathcal{V}^{\prime}\simeq L_{1}\oplus L_{2}\), for two nonisomorphic line bundles \(L_{1}\), \(L_{2}\) of degree \(2\).
Conversely, assume \(\Pi\) vanishes at the point corresponding to \(\widetilde{\mathcal{V}}\), so \(\dim\operatorname{End}(\widetilde{\mathcal{V}})=5\). Then HN-components of \(\widetilde{\mathcal{V}}\) cannot be three line bundles (since they would have to have different positive degrees that add up to \(5\)), so \(\widetilde{\mathcal{V}}=L\oplus\mathcal{V}^{\prime}\) where \(L\) is a line bundle and \(\mathcal{V}^{\prime}\) is semistable of rank \(2\), \(\deg(L)>0\), \(0<\deg(\mathcal{V}^{\prime})\), \(\deg(L)+\deg(\mathcal{V}^{\prime})=5\).
The case \(\deg(L)=1\) leads to the locus discussed above. If \(\deg(L)=2\) and \(\deg(\mathcal{V}^{\prime})=3\) then \(\dim\operatorname{Hom}(\mathcal{V}^{\prime},L)=1\), so we get \(\dim\operatorname{End}(\widetilde{\mathcal{V}})=3\), which is impossible. If \(\deg(L)\geq 3\), then \(\deg(\mathcal{V}^{\prime})\leq 2\) and \(\dim\operatorname{Hom}(\mathcal{V}^{\prime},L)\geq 4\), so \(\dim\operatorname{End}(\widetilde{\mathcal{V}})>5\), a contradiction.
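The count \(\dim\operatorname{Hom}(\mathcal{V}^{\prime},L)=1\) in the case \(\deg(L)=2\) is an instance of Riemann-Roch on the elliptic curve (recall that \(\omega_{E}\simeq\mathcal{O}_{E}\)):
\[\chi(\underline{\operatorname{Hom}}(\mathcal{V}^{\prime},L))=\operatorname{rk}(\mathcal{V}^{\prime})\deg(L)-\operatorname{rk}(L)\deg(\mathcal{V}^{\prime})=2\cdot 2-1\cdot 3=1,\]
while \(\operatorname{Ext}^{1}(\mathcal{V}^{\prime},L)\simeq\operatorname{Hom}(L,\mathcal{V}^{\prime})^{*}=0\) by Serre duality, since \(\mu(L)=2>3/2=\mu(\mathcal{V}^{\prime})\) and both bundles are semistable.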
(ii) The rank of \(\Pi\) is \(\leq 2\) at \(\widetilde{\mathcal{V}}\) if and only if \(\dim\operatorname{End}(\widetilde{\mathcal{V}})\geq 3\). Clearly, such \(\widetilde{\mathcal{V}}\) has to be unstable. Conversely, any unstable \(\widetilde{\mathcal{V}}\) would have form \(L\oplus\mathcal{V}^{\prime}\) with either \(\operatorname{Hom}(L,\mathcal{V}^{\prime})\neq 0\) or \(\operatorname{Hom}(\mathcal{V}^{\prime},L)\neq 0\), hence \(\dim\operatorname{End}(\widetilde{\mathcal{V}})\geq 3\).
Note that \(\mu(\widetilde{\mathcal{V}})=5/3\). Hence, if the extension splits over some \(L_{2}\subset\mathcal{V}\), then \(\widetilde{\mathcal{V}}\) is unstable. Conversely, if \(\widetilde{\mathcal{V}}\) is unstable then either it has a line subbundle of degree \(2\), or a
semistable subbundle \(\mathcal{V}^{\prime}\) of rank \(2\) and degree \(\geq 4\). But any such \(\mathcal{V}^{\prime}\) has a line subbundle of degree \(\geq 2\).
(iii) We can identify \(H^{0}(L_{2})^{\perp}\) with \(H^{0}(L_{3})^{*}\subset H^{0}(\mathcal{V})^{*}\), where \(L_{3}:=\mathcal{V}/L_{2}\). It is easy to see that the intersection of \(\mathbb{P}W_{L_{2}}\) with the zero locus of \(\Pi\) is exactly the image of \(E\) under the map given by \(|L_{3}|\).
Given an extension \(\widetilde{\mathcal{V}}\to\mathcal{V}\), split over \(L_{2}\subset\mathcal{V}\), the splitting \(L_{2}\to\widetilde{\mathcal{V}}\) is unique, and the quotient \(\widetilde{\mathcal{V}}/L_{2}\) is an extension of \(L_{3}=\mathcal{V}/L_{2}\) by \(\mathcal{O}\). It is well known that for points of \(\mathbb{P}W_{L_{2}}\setminus E\) the latter extension is stable, so \(\mathcal{V}_{L_{3}}=\widetilde{\mathcal{V}}/L_{2}\) is a stable bundle of rank \(2\) with determinant \(L_{3}\). Since \(\operatorname{Ext}^{1}(\mathcal{V}_{L_{3}},L_{2})=0\), we deduce that \(\widetilde{\mathcal{V}}=\mathcal{V}_{L_{3}}\oplus L_{2}\). Now we can calculate the image of the map (2.3). The space \(\operatorname{End}(\widetilde{\mathcal{V}})/\langle\operatorname{id}\rangle\) has a basis \(\langle\operatorname{id}_{L_{2}},e\rangle\), where \(e\) is a generator of \(\operatorname{Hom}(\mathcal{V}_{L_{3}},L_{2})\). Their images under (2.3) both factor through \(L_{2}\subset\mathcal{V}\), hence the image of (2.3) (which is \(2\)-dimensional) is \(H^{0}(L_{2})\subset H^{0}(\mathcal{V})\). But this is exactly the conormal subspace to the projective plane \(\mathbb{P}W_{L_{2}}\). This shows that \(\mathbb{P}W_{L_{2}}\setminus E\) (and hence \(\mathbb{P}W_{L_{2}}\)) is a Poisson subvariety. Since the rank of \(\Pi\) on \(\mathbb{P}W_{L_{2}}\setminus E\) is equal to \(2\) and \(\Pi|_{E}=0\), we deduce that \(\mathbb{P}W_{L_{2}}\setminus E\) is a symplectic leaf.
By Lemma 3.2.1(i) the vanishing locus of \(\Pi\) corresponds to extensions of \(\mathcal{V}\) by \(\mathcal{O}\) which split over \(\mathcal{O}(p)\). This is the union \(S_{E}\) of the lines \(\mathbb{P}L_{p}\subset\mathbb{P}H^{0}(\mathcal{V})^{*}\), where \(L_{p}=\mathcal{V}|_{p}^{*}\subset H^{0}(\mathcal{V})^{*}\), over \(p\in E\). The surface \(S_{E}\) is the image of the natural map \(\mathbb{P}(\mathcal{V}^{\vee})\to\mathbb{P}(V)\), associated with the embedding of bundles \(\mathcal{V}^{\vee}\to V\otimes\mathcal{O}_{E}\). We will prove that in fact this map induces an isomorphism of the projective bundle \(\mathbb{P}(\mathcal{V}^{\vee})\) with \(S_{E}\).
**Lemma 3.2.2**.: _Let \(\mathcal{E}\) be a vector bundle over a smooth curve \(C\) and let \(W\to H^{0}(C,\mathcal{E})\) be a linear map from a vector space \(W\), such that for any \(x\in C\) the composition \(p_{x}:W\to H^{0}(C,\mathcal{E})\to\mathcal{E}|_{x}\) is surjective, so that we have a morphism_
\[f:\mathbb{P}(\mathcal{E}^{\vee})\to\mathbb{P}(W^{*}).\]
_Assume that we have a closed subset \(Z\subset\mathbb{P}(\mathcal{E}^{\vee})\) with the following properties._
* _For every_ \(x,y\in C\)_,_ \(x\neq y\)_, consider_ \(p_{x}(\ker(p_{y}))\subset\mathcal{E}|_{x}\)_. Then any_ \(\ell\in\mathbb{P}(\mathcal{E}^{\vee}|_{x})\)_, which is orthogonal to_ \(p_{x}(\ker(p_{y}))\)_, is contained in_ \(Z\)_._
* _For every_ \(x\in C\)_, consider the map_ \(W\to H^{0}(\mathcal{E}|_{2x})\) _and the induced map_ \[K_{x}:=\ker(W\to\mathcal{E}|_{x})\to T_{x}^{*}C\otimes\mathcal{E}|_{x}\] _(where we use the identification_ \(T_{x}^{*}C\otimes\mathcal{E}|_{x}=\ker(H^{0}(\mathcal{E}|_{2x})\to\mathcal{E}| _{x})\)_). Then any_ \(\ell\in\mathbb{P}(\mathcal{E}^{\vee}|_{x})\)_, which is orthogonal to the image of_ \(K_{x}\otimes T_{x}C\)_, is contained in_ \(Z\)_._
_Then the map \(\mathbb{P}(\mathcal{E}^{\vee})\setminus Z\to\mathbb{P}(W^{*})\) is a locally closed embedding._
Proof.: Assume that for \(x\neq y\), we have two nonzero functionals \(\phi_{x}:\mathcal{E}|_{x}\to k\), \(\phi_{y}:\mathcal{E}|_{y}\to k\) such that \(\phi_{x}\circ p_{x}=\phi_{y}\circ p_{y}\). Then \((\phi_{x}\circ p_{x})|_{\ker(p_{y})}=0\). Hence, \(\phi_{x}\) vanishes on \(p_{x}(\ker(p_{y}))\). By assumption, this can happen only when \(\langle\phi_{x}\rangle\) is in \(Z\). Thus, the map from \(\mathbb{P}(\mathcal{E}^{\vee})\setminus Z\) is set-theoretically one-to-one.
Next, we need to check that our map is injective on tangent spaces. The tangent space to \(\mathbb{P}(\mathcal{E}^{\vee})\) at a point corresponding to \(\ell\subset\mathcal{E}^{\vee}|_{x}\) can be described as follows. Consider the
canonical extension
\[0\to T_{x}^{*}C\otimes\mathcal{E}|_{x}\to H^{0}(\mathcal{E}|_{2x})\to\mathcal{E}|_{ x}\to 0.\]
Passing to the dual extension of \(T_{x}C\otimes\mathcal{E}^{\vee}|_{x}\) by \(\mathcal{E}^{\vee}|_{x}\), and restricting it to \(T_{x}C\otimes\ell\subset T_{x}C\otimes\mathcal{E}^{\vee}|_{x}\), we get an extension
\[0\to\mathcal{E}^{\vee}|_{x}\to H_{\ell}\to T_{x}C\otimes\ell\to 0\]
Now the quotient \((\ell^{-1}\otimes H_{\ell})/\mathbf{k}\), where we use the natural embedding
\[\mathbf{k}=\ell^{-1}\otimes\ell\to\ell^{-1}\otimes\mathcal{E}^{\vee}|_{x}\to\ell^{-1}\otimes H_{\ell},\]
is identified with the tangent space \(T_{\ell}\mathbb{P}(\mathcal{E}^{\vee})\).
The restriction of the map \(H^{0}(\mathcal{E}|_{2x})^{\vee}\to W^{*}\), dual to the natural map \(W\to H^{0}(\mathcal{E}|_{2x})\), to \(H_{\ell}\), induces a map
\[(\ell^{-1}\otimes H_{\ell})/\mathbf{k}\to W^{*}/\ell,\]
which is exactly the tangent map to \(f\). It is injective if and only if the map \(H_{\ell}\to W^{*}\) is injective. Equivalently, the dual map \(W\to H^{*}_{\ell}\) should be surjective. The latter map is compatible with (surjective) projections to \(\mathcal{E}|_{x}\), so this is equivalent to surjectivity of the map
\[K_{x}=\ker(W\to\mathcal{E}|_{x})\to\ker(H^{*}_{\ell}\to\mathcal{E}|_{x})=T_{x} ^{*}C\otimes\ell^{-1}.\]
The latter map factors as a composition
\[K_{x}\to T_{x}^{*}C\otimes\mathcal{E}|_{x}\to T_{x}^{*}C\otimes\ell^{-1},\]
so it is surjective (equivalently, nonzero) if and only if \(\ell\) is not orthogonal to the image of \(K_{x}\to T_{x}^{*}C\otimes\mathcal{E}|_{x}\). By assumption, this never happens for points of \(\mathbb{P}(\mathcal{E}^{\vee})\setminus Z\).
**Lemma 3.2.3**.: _The map \(\mathbb{P}(\mathcal{V}^{\vee})\to S_{E}\) is an isomorphism._
Proof.: We will check the conditions of Lemma 3.2.2. It suffices to check surjectivity of the maps \(H^{0}(\mathcal{V})\to\mathcal{V}|_{x}\oplus\mathcal{V}|_{y}\) for \(x\neq y\) and of \(H^{0}(\mathcal{V})\to H^{0}(\mathcal{V}|_{2x})\). But this follows from the exact sequence
\[0\to\mathcal{V}(-D)\to\mathcal{V}\to\mathcal{V}|_{D}\to 0\]
for any effective divisor \(D\) of degree \(2\) and from the vanishing of \(H^{1}(\mathcal{V}(-D))\) by stability of \(\mathcal{V}\): indeed, \(H^{1}(\mathcal{V}(-D))\simeq\operatorname{Hom}(\mathcal{V}(-D),\mathcal{O})^{*}\) by Serre duality, and the latter space vanishes since \(\mathcal{V}(-D)\) is stable of positive slope \(1/2\).
By Lemma 3.2.1(ii) the degeneracy locus \(\mathcal{D}_{E}\) of our Poisson bracket (which is a quintic hypersurface) is the union of planes \(\mathbb{P}W_{L_{2}}\subset\mathbb{P}V\) over \(L_{2}\in\operatorname{Pic}^{2}(E)\) (see (3.2)). Let us consider the vector bundle \(\mathcal{W}\) over \(\widetilde{E}:=\operatorname{Pic}^{2}(E)\), such that the fiber of \(\mathcal{W}\) over \(L_{2}\) is \(W_{L_{2}}\). Note that we have a natural identification \(\widetilde{E}\simeq\operatorname{Pic}^{3}(E):L_{2}\mapsto L_{3}:=\det(\mathcal{V})\otimes L_{2}^{-1}\). In terms of \(L_{3}\) we have \(W_{L_{2}}=H^{0}(L_{3})^{*}\subset H^{0}(\mathcal{V})^{*}\), where we use a surjection \(\mathcal{V}\to L_{3}\). To define the vector bundle \(\mathcal{W}\) precisely, we consider the universal line bundle \(\mathcal{L}_{3}\) of degree \(3\) over \(E\times\widetilde{E}\simeq E\times\operatorname{Pic}^{3}(E)\), normalized so that the line bundle \(p_{2*}\underline{\operatorname{Hom}}(p_{1}^{*}\mathcal{V},\mathcal{L}_{3})\) is trivial. We set
\[\mathcal{W}:=p_{2*}(\mathcal{L}_{3})^{\vee}.\]
Note that applying \(p_{2*}\) to the natural surjection \(p_{1}^{*}\mathcal{V}\to\mathcal{L}_{3}\) we get a surjection \(H^{0}(\mathcal{V})\otimes\mathcal{O}\to p_{2*}(\mathcal{L}_{3})\). Passing to the dual, we get a morphism \(\mathbb{P}(\mathcal{W})\to\mathbb{P}V\), whose image is \(\mathcal{D}_{E}\).
**Lemma 3.2.4**.: _The morphism \(\mathbb{P}(\mathcal{W})\to\mathcal{D}_{E}\) is an isomorphism over \(\mathcal{D}_{E}\setminus S_{E}\)._
Proof.: We need to check two conditions of Lemma 3.2.2 for the morphism \(H^{0}(\mathcal{V})\otimes\mathcal{O}\to\mathcal{W}^{\vee}\) over \(\widetilde{E}\), with \(Z\subset\mathbb{P}(\mathcal{W})\) being the preimage of \(S_{E}\). Note that the intersection of \(Z\) with each plane \(\mathbb{P}H^{0}(L_{3})^{*}\subset\mathbb{P}H^{0}(\mathcal{V})^{*}\) is the elliptic curve \(E\) embedded by the linear system \(|L_{3}|\).
To check the first condition, we use the exact sequence
\[0\to H^{0}(L_{2})\to H^{0}(\mathcal{V})\to H^{0}(L_{3})\to 0\]
where \(L_{2}=\ker(\mathcal{V}\to L_{3})\), so that \(L_{2}\otimes L_{3}\simeq\det\mathcal{V}\). If \(L_{3}^{\prime}\) is different from \(L_{3}\) then the composed map \(L_{2}\to\mathcal{V}\to L_{3}^{\prime}\) is nonzero, hence, it identifies \(L_{2}\) with the subsheaf \(L_{3}^{\prime}(-p)\) for some point \(p\in E\). Hence, the image of \(H^{0}(L_{2})\) is precisely the plane \(H^{0}(L_{3}^{\prime}(-p))\subset H^{0}(L_{3}^{\prime})\). Hence, the only point of \(\mathbb{P}H^{0}(L_{3}^{\prime})^{*}\) orthogonal to this plane is the point \(p\in E\subset\mathbb{P}H^{0}(L_{3}^{\prime})^{*}\), which lies in \(Z\).
To check the second condition, we need to understand the map \(H^{0}(\mathcal{V})\to H^{0}(\mathcal{W}^{\vee}|_{2x})\) for \(x\in\widetilde{E}\simeq\operatorname{Pic}^{3}(E)\). For this we observe that this map is equal to the composition
\[H^{0}(\mathcal{V})\to H^{0}(E\times\{2x\},p_{1}^{*}\mathcal{V}|_{E\times\{2x \}})\to H^{0}(E\times\{2x\},\mathcal{L}_{3}|_{E\times\{2x\}}),\]
which is the map induced on \(H^{0}\) by the morphism of sheaves on \(E\),
\[\alpha:\mathcal{V}\to\mathcal{V}\otimes H^{0}(\mathcal{O}_{2x})=p_{1*}(p_{1}^ {*}\mathcal{V}|_{E\times\{2x\}})\to p_{1*}(\mathcal{L}_{3}|_{E\times\{2x\}}).\]
Note that for \(x=L_{3}\), the bundle \(F_{x}:=p_{1*}(\mathcal{L}_{3}|_{E\times\{2x\}})\) on \(E\) is an extension of \(L_{3}\) by \(T_{x}^{*}\widetilde{E}\otimes L_{3}\), which gives the Kodaira-Spencer map for the family \(\mathcal{L}_{3}\), so this extension is nontrivial. The composition
\[\mathcal{V}\xrightarrow{\alpha}F_{x}\to L_{3}\]
is the canonical surjection with the kernel \(L_{2}\subset\mathcal{V}\). Hence, \(\alpha\) fits into a morphism of exact sequences
\[\begin{array}{ccccccccc}0&\to&L_{2}&\to&\mathcal{V}&\to&L_{3}&\to&0\\ &&\downarrow{\scriptstyle\alpha|_{L_{2}}}&&\downarrow{\scriptstyle\alpha}&&\|&&\\ 0&\to&T_{x}^{*}\widetilde{E}\otimes L_{3}&\to&F_{x}&\to&L_{3}&\to&0\end{array}\]
Note that the map \(\alpha|_{L_{2}}\) is nonzero, since otherwise we would get a splitting of the extension \(F_{x}\to L_{3}\).
Now the kernel of the map \(H^{0}(\mathcal{V})\to\mathcal{W}^{\vee}|_{x}=H^{0}(L_{3})\) is identified with \(H^{0}(L_{2})\), and the induced map \(H^{0}(L_{2})\to T_{x}^{*}\widetilde{E}\otimes H^{0}(L_{3})\) is given by a nonzero map
\[\alpha|_{L_{2}}:L_{2}\to T_{x}^{*}\widetilde{E}\otimes L_{3}\simeq L_{3}.\]
Hence, its image is the subspace of the form \(H^{0}(L_{3}(-p))\), and we again deduce that any point of \(\mathbb{P}H^{0}(L_{3})^{*}\) orthogonal to it lies in \(Z\).
**Corollary 3.2.5**.: _(i) There is a regular map \(\mathcal{D}_{E}\setminus S_{E}\to\widetilde{E}\) such that the fiber over \(L_{2}\) is the symplectic leaf \(\mathbb{P}W_{L_{2}}\setminus E\)._
_(ii) Any line contained in \(\mathcal{D}_{E}\) is either contained in \(S_{E}\) or in some plane \(\mathbb{P}W_{L_{2}}\), where \(L_{2}\in\operatorname{Pic}^{2}(E)\)._
Proof.: For (ii) we observe that given a line \(L\subset\mathcal{D}_{E}\) not contained in \(S_{E}\), the restriction of the map \(\mathcal{D}_{E}\setminus S_{E}\to\widetilde{E}\) to \(L\setminus S_{E}\) is necessarily constant: a nonconstant restriction would extend to a nonconstant morphism \(\mathbb{P}^{1}\to\widetilde{E}\), which is impossible since \(\widetilde{E}\) is an elliptic curve. Hence, \(L\) is contained in some plane \(\mathbb{P}W_{L_{2}}\).
### 3.3. Two-dimensional distribution on \(G(2,5)\) associated with the elliptic curve
Let \(E\subset G(2,V)\) be the elliptic curve obtained as the intersection with the linear subspace \(\mathbb{P}W\subset\mathbb{P}(\bigwedge^{2}V)\) in the Plücker embedding, where \(\dim W=5\). Equivalently, \(E\) is cut out by the linear subspace of sections \(W^{\perp}\subset\bigwedge^{2}V^{*}\simeq H^{0}(G(2,V),\mathcal{O}(1))\). As before, we denote by \(\mathcal{V}\) the restriction of \(\mathcal{U}^{\vee}\), the dual of the universal bundle. Then \(\bigwedge^{2}(\mathcal{V})\) is the restriction of \(\mathcal{O}(1)\), and we have an exact sequence
\[0\to W^{\perp}\to\bigwedge^{2}V^{*}\to H^{0}(E,\bigwedge^{2}(\mathcal{V}))\to 0.\]
In other words, we can identify the dual map to the embedding \(W\hookrightarrow\bigwedge^{2}V\) with the natural map
\[\bigwedge^{2}H^{0}(\mathcal{V})\to H^{0}(\bigwedge^{2}\mathcal{V}).\]
We have a regular map
\[f:G(2,V)\setminus E\to\mathbb{P}^{4}\]
given by the linear system \(|W^{\perp}|\subset|\mathcal{O}(1)|\).
Then for every point \(p\in G(2,V)\setminus E\), we define the subspace \(D_{p}\subset T_{p}G(2,V)\) as the kernel of the tangent map to \(f\) at \(p\). Note that for generic \(p\), one has \(\dim D_{p}=2\).
We have the following characterization of \(D_{p}\).
**Lemma 3.3.1**.: _Let \(L_{p}\subset V\) denote the \(2\)-dimensional subspace corresponding to \(p\in G(2,V)\setminus E\)._
_(i) Under the identification_ \(T_{p}G(2,V)\otimes\det(L_{p})\simeq L_{p}\otimes V/L_{p}\)_, we have_
\[D_{p}\otimes\det(L_{p})=W\cap(L_{p}\wedge V)=W\cap(L_{p}\otimes V/L_{p}),\]
_where the second intersection is taken in_ \(\bigwedge^{2}V/\bigwedge^{2}L_{p}\)_._
_(ii) For each_ \(v\in L_{p}\)_, let us denote by_ \(\pi_{v}:T_{p}G(2,V)\to V/L_{p}\) _the natural projection. Assume that_ \(\Pi_{E,v}\) _has rank_ \(4\)_, for some nonzero_ \(v\in L_{p}\)_. Then_ \(D_{p}\) _is_ \(2\)_-dimensional, and_ \(\pi_{v}(D_{p})\) _is the_ \(2\)_-dimensional subspace of_ \(V/L_{p}\) _given as follows:_
\[\pi_{v}(D_{p})=\{x\in V/L_{p}\ |x\wedge\Pi_{E,v}^{norm}=0\},\]
_where_ \(\Pi_{E,v}^{norm}\in\bigwedge^{2}(V/L_{p})\) _is the image of_ \(\Pi_{E,v}\in\bigwedge^{2}(V/v)\)_._
Proof.: (i) The map \(f\) is the composition of the Plücker embedding \(G(2,V)\to\mathbb{P}(\bigwedge^{2}V)\) with the linear projection
\[\mathbb{P}(\bigwedge^{2}V)\setminus\mathbb{P}(W)\to\mathbb{P}(\bigwedge^{2}V/ W).\]
Thus, the tangent map to \(f\) at \(L\) is the composition
\[\operatorname{Hom}(L,V/L)\xrightarrow{\alpha}\ \operatorname{Hom}(\bigwedge^{2}L, \bigwedge^{2}V/\bigwedge^{2}L)\to\operatorname{Hom}(\bigwedge^{2}L,\bigwedge^{2} V/(\bigwedge^{2}L+W)),\]
where \(\alpha(A)(l_{1}\wedge l_{2})=Al_{1}\wedge l_{2}+l_{1}\wedge Al_{2}\operatorname{ mod}\bigwedge^{2}L\). Equivalently, the map \(\alpha\) is the natural map
\[\operatorname{Hom}(L,V/L)\simeq L^{*}\otimes V/L\simeq\det^{-1}(L)\otimes L \otimes V/L\to\det^{-1}(L)\otimes\bigwedge^{2}V/\bigwedge^{2}L,\]
given by \(l\otimes(v\operatorname{mod}L)\mapsto l\wedge v\operatorname{mod}\bigwedge^{2}L\).
Now the assertion follows from the identification
\[W=\ker\bigl{(}\bigwedge^{2}V/\bigwedge^{2}L\to\bigwedge^{2}V/(\bigwedge^{2}L+W) \bigr{)}.\]
(ii) Our identification of \(\Pi_{W}\) from Theorem A implies the following property of the bivector \(\Pi_{W,v}\in\bigwedge^{2}(V/v)\). Consider the natural map \(\phi_{v}:W\to\bigwedge^{2}(V/v)\). Let \(S=S_{E}\subset\mathbb{P}V\) denote the surface, obtained as the union of lines corresponding to \(E\subset G(2,V)\). We claim that the map \(\phi_{v}\) is injective if and only if \(\langle v\rangle\) is not in \(S\). Indeed, an element in the kernel of \(\phi_{v}\) is an element \(v\wedge v^{\prime}\) contained in \(W\), so the plane \(\langle v,v^{\prime}\rangle\) corresponds to a point of \(E\), i.e., \(\langle v\rangle\in S\). In particular, \(\phi_{v}\) is injective whenever \(\Pi_{W,v}\) is nonzero.
Now assume the rank of \(\Pi_{W,v}\) is \(4\). We have a nondegenerate symmetric pairing on \(\bigwedge^{2}(V/v)\) with values in \(\det(V/v)\), given by the exterior product. Now our description of \(\Pi_{W}\) implies that for \(\langle v\rangle\not\in S\), \(\Pi_{W,v}\) is nonzero and
\[\phi_{v}(W)=\langle\Pi_{W,v}\rangle^{\perp}.\]
Since \(\Pi_{W,v}\) has maximal rank, the skew-symmetric form \((x_{1},x_{2})=x_{1}\wedge x_{2}\wedge\Pi_{W,v}\) on \(V/v\) is nondegenerate. Hence, the subspace \((L_{p}/\langle v\rangle)\otimes(V/L_{p})\) cannot be contained in \(\langle\Pi_{W,v}\rangle^{\perp}\) (this would mean that \(L_{p}/\langle v\rangle\) lies in the kernel of \((\cdot,\cdot)\)). Hence, the intersection
\[I:=(L_{p}/\langle v\rangle)\otimes(V/L_{p})\cap\langle\Pi_{W,v}\rangle^{\perp}\]
is \(2\)-dimensional. Since the subspace \(\phi_{v}(W\cap(L_{p}\wedge V))\) is contained in \(I\), we deduce that its dimension is \(\leq 2\), and so \(\dim D_{p}\leq 2\). But we also know that \(\dim D_{p}\geq 2\), hence in fact, we have \(\dim D_{p}=2\) and \(\phi_{v}(W\cap(L_{p}\wedge V))=I\).
The last assertion follows from the fact that under trivialization of \(L_{p}/\langle v\rangle\), the subspace \(I\subset V/L_{p}\) coincides with \(\pi_{v}(D_{p})\).
**Definition 3.3.2**.: We define \(\Sigma_{E}\subset G(2,V)\) as the closed locus of points \(p\in G(2,V)\) such that \(\dim W\cap(L_{p}\wedge V)\geq 3\).
**Lemma 3.3.3**.: _One has \(\Sigma_{E}\subset G(2,V)\setminus E\)._
Proof.: Let \(L=H^{0}(\mathcal{V}|_{p})^{*}\subset H^{0}(\mathcal{V})^{*}=V\) for some \(p\in E\). We have to prove that \(\dim W\cap(L\wedge V)\leq 2\). We have, \(L^{\perp}=H^{0}(\mathcal{V}(-p))\subset H^{0}(\mathcal{V})\) and so,
\[V/L\simeq H^{0}(\mathcal{V}(-p))^{*}.\]
The intersection \(W\cap(L\wedge V)\) is the kernel of the composed map
\[W\hookrightarrow\bigwedge^{2}\!V\to\bigwedge^{2}(V/L).\]
The dual map can be identified with the composition
\[\bigwedge^{2}\!H^{0}(\mathcal{V}(-p))\to\bigwedge^{2}\!H^{0}(\mathcal{V})\to H^{ 0}(\det\mathcal{V})\]
which also factors as the composition
\[\bigwedge^{2}\!H^{0}(\mathcal{V}(-p))\to H^{0}(\bigwedge^{2}(\mathcal{V}(-p)))= H^{0}((\det\mathcal{V})(-2p))\subset H^{0}(\det\mathcal{V}).\]
We need to check that this map has corank 2, or equivalently the first arrow is an isomorphism.
Set \(\mathcal{V}^{\prime}=\mathcal{V}(-p)\). This is a stable bundle of rank 2 and degree 3. We need to check that the map
\[\bigwedge^{2}\!H^{0}(\mathcal{V}^{\prime})\to H^{0}(\det\mathcal{V}^{\prime})\]
is surjective. For any point \(q\in E\), we have an exact sequence
\[0\to H^{0}(\mathcal{O}(q))\to H^{0}(\mathcal{V}^{\prime})\to H^{0}((\det\mathcal{V}^{\prime})(-q))\to 0\]
and it is easy to see that the restriction of the above map to \(H^{0}(\mathcal{O}(q))\wedge H^{0}(\mathcal{V}^{\prime})\) surjects onto the subspace \(H^{0}((\det\mathcal{V}^{\prime})(-q))\subset H^{0}(\det\mathcal{V}^{\prime})\). Varying the point \(q\), we get the needed surjectivity.
Thus, by Lemma 3.3.1(i), \(\Sigma_{E}\) is exactly the set of points \(p\in G(2,V)\setminus E\) where \(\dim D_{p}\geq 3\). We have the following geometric description of \(\Sigma_{E}\). Recall that we have a collection of 3-dimensional subspaces \(W_{q}\subset V\), associated with points of \(\widetilde{E}=\operatorname{Pic}^{2}(E)\) (see (3.2)).
**Proposition 3.3.4**.: _For \(p\in G(2,V)\), we have \(p\in\Sigma_{E}\) if and only if the corresponding line \(L_{p}\) is contained in some plane \(\mathbb{P}W_{q}\), where \(q\in\widetilde{E}\). In other words, \(\Sigma_{E}=\cup_{q\in\widetilde{E}}G(2,W_{q})\)._
Proof.: Assume first that \(p\in\Sigma_{E}\). As we have seen above, this means that \(p\in G(2,V)\setminus E\) and \(\dim D_{p}\geq 3\). By Lemma 3.3.1(ii), this implies that the rank of the Poisson bracket \(\Pi_{W}\) at the points of \(\mathbb{P}L_{p}\) is \(\leq 2\). Hence, by Lemma 3.2.1(ii), the line \(\mathbb{P}L_{p}\) is contained in the quintic \(\mathcal{D}_{E}\). By Corollary 3.2.5, this implies that \(\mathbb{P}L_{p}\) is contained in some plane \(\mathbb{P}W_{q}\).
Conversely, assume that we have a 2-dimensional subspace \(L\subset H^{0}(M)^{*}\subset H^{0}(\mathcal{V})^{*}=V\), where \(\mathcal{V}\to M\) is a surjection to a degree 3 line bundle \(M\). Then \(L=\langle s\rangle^{\perp}\subset H^{0}(M)^{*}\) for some 1-dimensional subspace \(\langle s\rangle\subset H^{0}(M)\). Set \(P=L^{\perp}\subset H^{0}(\mathcal{V})\). Then \(P\) is the preimage of \(\langle s\rangle\subset H^{0}(M)\) under the projection \(H^{0}(\mathcal{V})\to H^{0}(M)\).
By Lemma 3.3.1, the space \(D_{p}\) (where \(L=L_{p}\) for \(p\in G(2,V)\)) is isomorphic to the kernel of the composed map
\[W\to\bigwedge^{2}\!V\to\bigwedge^{2}(V/L).\]
Hence, \(\dim(D_{p})\) is equal to the corank of the dual map
\[\bigwedge^{2}\!(P)\to\bigwedge^{2}\!H^{0}(\mathcal{V})\to H^{0}(\bigwedge^{2} \!\mathcal{V}). \tag{3.3}\]
Let \(B\) denote the divisor of zeroes of \(s\). We claim that the image of (3.3) is contained in the subspace \(H^{0}(\bigwedge^{2}\mathcal{V}(-B))\subset H^{0}(\bigwedge^{2}\mathcal{V})\). Indeed, we have an exact sequence
\[0\to N\to\mathcal{V}\to M\to 0\]
where \(N\) is a line bundle of degree \(2\). It is easy to see that the composed map
\[H^{0}(N)\wedge H^{0}(\mathcal{V})\hookrightarrow\bigwedge^{2}H^{0}(\mathcal{V} )\to H^{0}(\bigwedge^{2}\mathcal{V})\]
coincides with the natural multiplication map
\[H^{0}(N)\wedge H^{0}(\mathcal{V})/\bigwedge^{2}H^{0}(N)\simeq H^{0}(N)\otimes H ^{0}(M)\to H^{0}(N\otimes M)\simeq H^{0}(\bigwedge^{2}\mathcal{V}).\]
The exact sequence
\[0\to H^{0}(N)\to P\to\langle s\rangle\to 0\]
shows that \(\bigwedge^{2}P\subset H^{0}(N)\wedge H^{0}(\mathcal{V})\) and its image in \(H^{0}(N)\otimes H^{0}(M)\) is contained in \(H^{0}(N)\otimes\langle s\rangle\). This proves our claim about the image of the map (3.3). It follows that the corank of this map is \(\geq 3\), so \(p\in\Sigma_{E}\).
**Lemma 3.3.5**.: _Let \(L_{p}\subset V\) denote the \(2\)-dimensional subspace corresponding to \(p\in G(2,V)\setminus E\)._
_(i) For any_ \(3\)_-dimensional subspace_ \(M\subset V\) _containing_ \(L_{p}\)_, one has_ \(W\cap\bigwedge^{2}M=\bigwedge^{2}L_{p}\)_._
_(ii) Assume that for generic_ \(v\in L_{p}\)_, the rank of_ \(\Pi_{E,v}\) _is_ \(4\)_. Then the map_ \(D_{p}\otimes\mathcal{O}\to V/L_{p}\otimes\mathcal{O}(1)\) _over the projective line_ \(\mathbb{P}L_{p}\) _is an embedding of a rank_ \(2\) _subbundle._
Proof.: (i) Since all elements of \(\bigwedge^{2}M\) are decomposable, the intersection \(Q:=W\cap\bigwedge^{2}M\) is a linear subspace consisting of decomposable elements. But all decomposable elements of \(W\) are of the form \(\bigwedge^{2}L_{q}\) for some point \(q\in E\). Hence, we would get an embedding \(\mathbb{P}(Q)\to E\), which implies that \(Q\) is \(1\)-dimensional, so \(Q=\bigwedge^{2}L_{p}\).
(ii) From part (i) and from Lemma 3.3.1 we get that for any \(3\)-dimensional subspace \(M\subset V\) containing \(L_{p}\), one has \(D_{p}\cap L_{p}\otimes M/L_{p}=0\). Let us set \(P=V/L_{p}\), and let us consider the exact sequence
\[0\to D_{p}\otimes\mathcal{O}(-1)\to P\otimes\mathcal{O}\to Q\to 0.\]
We want to prove that the rank \(1\) sheaf \(Q\) on \(\mathbb{P}^{1}\) has no torsion. Since \(\deg(Q)=2\) and \(Q\) is generated by global sections, we only have to exclude the possibilities \(Q\simeq\mathcal{O}_{p}\oplus\mathcal{O}(1)\) and \(Q\simeq T\oplus\mathcal{O}\), where \(T\) is a torsion sheaf of length \(2\).
Assume first that \(Q\simeq\mathcal{O}_{p}\oplus\mathcal{O}(1)\). Consider the composed surjection \(f:P\otimes\mathcal{O}\to Q\to\mathcal{O}(1)\). It is induced by a surjection \(P\to H^{0}(\mathcal{O}(1))\), which has \(1\)-dimensional kernel \(\langle v\rangle\). It follows that the inclusion of \(D_{p}\otimes\mathcal{O}(-1)\) into \(P\otimes\mathcal{O}\) factors as
\[D_{p}\otimes\mathcal{O}(-1)\to\langle v\rangle\otimes\mathcal{O}\oplus \mathcal{O}(-1)\to P\otimes\mathcal{O}.\]
It follows that \(D_{p}\) has a nontrivial intersection with \(H^{0}(\mathcal{O}(1))\otimes\langle v\rangle=L_{p}\otimes M/L_{p}\subset L _{p}\otimes V/L_{p}\), for some \(3\)-dimensional \(M\subset V\), containing \(L_{p}\). This is a contradiction, as we proved that there could be no such \(M\).
In the case \(Q\simeq T\oplus\mathcal{O}\), we get that \(D_{p}\otimes\mathcal{O}(-1)\) is contained in the kernel of a surjection \(P\otimes\mathcal{O}\to\mathcal{O}\), i.e., \(D_{p}\otimes\mathcal{O}(-1)\) is contained in \(\mathcal{O}^{2}\subset P\otimes\mathcal{O}\). But any embedding \(\mathcal{O}(-1)^{2}\to\mathcal{O}^{2}\)
factors through some \(\mathcal{O}(-1)\oplus\mathcal{O}\to\mathcal{O}^{2}\) (occurring as kernel of the surjection \(\mathcal{O}^{2}\to\mathcal{O}_{p}\), for some point \(p\) in the support of the quotient). Hence, we can finish again as in the previous case.
**Remark 3.3.6**.: The rational map \(f\) from \(G(2,V)\) to \(\mathbb{P}^{4}\) has the following interpretation, which can be proved using projective duality. Start with a generic line \(L\subset\mathbb{P}(V)\). Then the intersection \(L\cap\mathcal{D}_{E}\) with the degeneracy quintic of \(\Pi_{E}\) consists of \(5\) points. Taking the images of these points under the projection \(\mathcal{D}_{E}\setminus S_{E}\to\widetilde{E}\) (see Cor. 3.2.5) we get a divisor \(D_{L}\) of degree \(5\) on \(\widetilde{E}\). All these divisors belong to a certain \(4\)-dimensional linear system of degree \(5\) divisors, and the map \(L\mapsto D_{L}\) is exactly our map \(f\).
### 3.4. Calculation of the Schouten bracket and proof of Theorem B
**Lemma 3.4.1**.: _(i) Let \(E\subset G(2,V)\) be the elliptic curve defined by \(W\subset\bigwedge^{2}V\). Then for each point \(p\in E\), the bivector \(\Pi_{E}\) vanishes on the projective line \(\mathbb{P}L_{p}\subset\mathbb{P}V\), where \(L_{p}\subset V\) is the \(2\)-dimensional subspace corresponding to \(p\). For a generic point \(v\) of \(L_{p}\) the Lie algebra \(\mathfrak{g}=T_{v}^{*}\mathbb{P}V\) has a basis \((h_{1},h_{2},e_{1},e_{2})\) such that_
\[[h_{1},h_{2}]=[e_{1},e_{2}]=0,\]
\[[h_{i},e_{i}]=2e_{i},\ \ [h_{j},e_{i}]=-e_{i}\ \ \text{for $i\neq j$}.\]
_Equivalently, the linearization of \(\Pi_{E}\) takes form_
\[\Pi_{E}^{lin}=2e_{1}\partial_{h_{1}}\wedge\partial_{e_{1}}-e_{1}\partial_{h_{2 }}\wedge\partial_{e_{1}}+2e_{2}\partial_{h_{2}}\wedge\partial_{e_{2}}-e_{2} \partial_{h_{1}}\wedge\partial_{e_{2}}.\]
_Furthermore, the conormal subspace \(N_{\mathbb{P}L_{p},v}^{\vee}\subset\mathfrak{g}\) is spanned by \(e_{1},e_{2},h_{1}+h_{2}\) (dually, the tangent space \(T_{v}\mathbb{P}L_{p}\) is spanned by \(\partial_{h_{1}}-\partial_{h_{2}}\))._
_(ii) We have an identification_
\[H^{0}(\mathbb{P}L_{p},N_{\mathbb{P}L_{p}})\simeq H^{0}(\mathbb{P}L_{p},V/L_{p }\otimes\mathcal{O}(1))\simeq L_{p}^{*}\otimes V/L_{p}\simeq T_{p}G(2,V).\]
_Under this identification, the line \(T_{p}E\subset T_{p}G(2,V)\) has the property that the corresponding global section of \(N_{\mathbb{P}L_{p}}\) evaluated at generic \(v\in\mathbb{P}L_{p}\) spans the line_
\[\langle\partial_{h_{1}},\partial_{h_{2}}\rangle/\langle\partial_{h_{1}}- \partial_{h_{2}}\rangle\subset N_{\mathbb{P}L_{p},v}\simeq V/L_{p}.\]
_Equivalently, the tangent space at \(v\) to the surface \(S_{E}\subset\mathbb{P}V\) is \(\langle\partial_{h_{1}},\partial_{h_{2}}\rangle\subset T_{v}\mathbb{P}V\)._
_(iii) Let \(\Pi^{\prime}\) be a Poisson bracket compatible with \(\Pi_{E}\). Then for \(p\in E\) and a generic \(v\in L_{p}\), one has_
\[\Pi^{\prime}_{v}\in\langle(2\partial_{h_{1}}-\partial_{h_{2}})\wedge\partial _{e_{1}},(2\partial_{h_{2}}-\partial_{h_{1}})\wedge\partial_{e_{2}},\partial _{h_{1}}\wedge\partial_{h_{2}}\rangle. \tag{3.4}\]
Proof.: (i) Extensions \(\widetilde{\mathcal{V}}\) of \(\mathcal{V}\) by \(\mathcal{O}\), corresponding to the line \(\mathbb{P}L_{p}\), are exactly the extensions that split under \(\mathcal{O}\to\mathcal{O}(p)\). We claim that for a generic point of \(\mathbb{P}L_{p}\) we have \(\widetilde{\mathcal{V}}\simeq\mathcal{O}(p)\oplus L_{1}\oplus L_{2}\), where \(L_{1}\) and \(L_{2}\) are nonisomorphic line bundles of degree \(2\). Indeed, by Lemma 3.2.1(i), the only other possibility is \(\widetilde{\mathcal{V}}\simeq\mathcal{O}(p)\oplus\mathcal{V}^{\prime}\), where \(\mathcal{V}^{\prime}\) is a nontrivial extension of \(M\) by \(M\), where \(M^{2}\simeq\det(\mathcal{V})(-p)\). Since the corresponding extension splits over the unique embedding \(M\to\mathcal{V}\), this gives one point on the line \(\mathbb{P}L_{p}\) for each of the four possible line bundles \(M\).
We can compute the Lie algebra \(\mathfrak{g}\) for the point corresponding to \(\widetilde{\mathcal{V}}\simeq\mathcal{O}(p)\oplus L_{1}\oplus L_{2}\) using the isomorphism of Theorem 2.3.1,
\[\operatorname{End}(\widetilde{\mathcal{V}})/\langle\operatorname{id}\rangle\xrightarrow{\sim}\langle\phi\rangle^{\perp}\subset H^{0}(\mathcal{V}),\quad A\mapsto p\circ A\circ i. \tag{3.5}\]
We consider the following basis in \(\operatorname{End}(\widetilde{\mathcal{V}})/\langle\operatorname{id}\rangle\):
\[h_{i}=\operatorname{id}_{L_{i}}-\operatorname{id}_{\mathcal{O}(p)},\ e_{i}\in \operatorname{Hom}(\mathcal{O}(p),L_{i}),\ i=1,2.\]
Then it is easy to check the claimed commutator relations between these elements.
The conormal subspace to \(\mathbb{P}L_{p}\) is identified with \(L_{p}^{\perp}=H^{0}(\mathcal{V}(-p))\). The image of the subspace \(\operatorname{Hom}(\mathcal{O}(p),L_{1}\oplus L_{2})\) under the map (3.5) will consist of compositions
\[\mathcal{O}\to\mathcal{O}(p)\to L_{1}\oplus L_{2}\to\mathcal{V},\]
which vanish at \(p\), so they are contained in \(H^{0}(\mathcal{V}(-p))\). We have
\[h_{1}+h_{2}=\operatorname{id}_{L_{1}}\oplus\operatorname{id}_{L_{2}}-2 \operatorname{id}_{\mathcal{O}(p)}\equiv-3\operatorname{id}_{\mathcal{O}(p)} \operatorname{mod}\langle\operatorname{id}_{\widetilde{\mathcal{V}}}\rangle,\]
and the element \(\operatorname{id}_{\mathcal{O}(p)}\) is mapped under (3.5) to the composition
\[\mathcal{O}\to\mathcal{O}(p)\to\mathcal{V},\]
which also vanishes at \(p\). This proves our claim about the conormal subspace.
(ii) To identify the direction corresponding to \(T_{p}E\), we first recall that the map \(E\to G(2,V)\) is associated with the subbundle \(\mathcal{V}^{\vee}\hookrightarrow V\otimes\mathcal{O}\) over \(E\). We have an exact sequence
\[0\to T_{p}^{*}E\otimes\mathcal{V}|_{p}\to H^{0}(\mathcal{V}|_{2p})\to\mathcal{ V}|_{p}\to 0.\]
The dual of the natural map \(V^{*}\to H^{0}(\mathcal{V}|_{2p})\) fits into a morphism of exact sequences
\[\begin{array}{ccccccccc}0&\to&\mathcal{V}|_{p}^{*}&\to&H^{0}(\mathcal{V}|_{2p})^{*}&\to&T_{p}E\otimes\mathcal{V}|_{p}^{*}&\to&0\\ &&\|&&\downarrow&&\downarrow{\scriptstyle\beta}&&\\ 0&\to&L_{p}&\to&V&\to&V/L_{p}&\to&0\end{array}\]
and the map \(\beta\) corresponds to a map \(T_{p}E\to\operatorname{Hom}(\mathcal{V}^{\vee}|_{p},V/L_{p})=\operatorname{Hom}(L_{p},V/L_{p})\) which is the tangent map to \(E\to G(2,V)\). Note that the dual to \(\beta\) is the natural linear map
\[(V/L_{p})^{*}=\ker(H^{0}(\mathcal{V})\to\mathcal{V}|_{p})\to\ker(H^{0}( \mathcal{V}|_{2p})\to\mathcal{V}|_{p})\simeq T_{p}^{*}E\otimes\mathcal{V}|_{p}. \tag{3.6}\]
Now, given a functional \(v:\mathcal{V}|_{p}\to k\), the image of \(T_{p}E\) under \(\pi_{v}:L_{p}^{*}\otimes V/L_{p}\to V/L_{p}\) corresponds to the composition of (3.6) with \(v\). In other words, it is given by the composition
\[L_{p}^{\perp}=H^{0}(\mathcal{V}(-p))\to\mathcal{V}(-p)|_{p}\simeq\mathcal{V}|_{p}\xrightarrow{v}k\]
(here we use a trivialization of \(T_{p}E\)).
Let \(\widetilde{\mathcal{V}}\to\mathcal{V}\) be the extension corresponding to \(v\). As we have seen in (i), for a generic \(v\) we have \(\widetilde{\mathcal{V}}\simeq\mathcal{O}(p)\oplus L_{1}\oplus L_{2}\), where \(L_{i}\) are as above, and under the isomorphism (3.5), \(L_{p}^{\perp}=H^{0}(\mathcal{V}(-p))\) is the image of the subspace \(\langle h_{1}+h_{2},e_{1},e_{2}\rangle\).
Hence, it remains to check that the composition
\[\langle e_{1},e_{2}\rangle\to H^{0}(\mathcal{V}(-p))\to\mathcal{V}(-p)|_{p}\simeq\mathcal{V}|_{p}\xrightarrow{v}k\]
is zero (where the first arrow is induced by (3.5)). Let us consider the element \(e_{1}\) (the case of \(e_{2}\) is similar). It maps to the element of \(H^{0}(\mathcal{V}(-p))\) given by the embedding
\[\mathcal{O}\to L_{1}(-p)\to\mathcal{V}(-p),\]
where we use the composed map \(L_{1}\to\widetilde{\mathcal{V}}\to\mathcal{V}\). Thus, we need to check that the composition \(L_{1}\to\mathcal{V}\xrightarrow{v}k\) is zero. But this follows from the fact that the extension \(\widetilde{\mathcal{V}}\) is the pull-back of the standard extension \(0\to\mathcal{O}\to\mathcal{O}(p)\to\mathcal{O}_{p}\to 0\) via \(v\), so that we have a commutative diagram
\[\begin{array}{ccccccccc}0&\to&\mathcal{O}&\to&\widetilde{\mathcal{V}}&\to&\mathcal{V}&\to&0\\ &&\|&&\downarrow&&\downarrow{\scriptstyle v}&&\\ 0&\to&\mathcal{O}&\to&\mathcal{O}(p)&\to&\mathcal{O}_{p}&\to&0\end{array}\]
(iii) This is obtained by a straightforward computation using the vanishing of \([\Pi_{E},\Pi^{\prime}]\) and the formula for \(\Pi_{E}^{lin}\) from part (i).
**Lemma 3.4.2**.: _Let \(E,E^{\prime}\subset G(2,V)\) be a pair of elliptic curves obtained as linear sections, such that \([\Pi_{E},\Pi_{E^{\prime}}]=0\). Then \(E\) is not contained in \(\Sigma_{E^{\prime}}\subset G(2,V)\)._
Proof.: Assume \(E\subset\Sigma_{E^{\prime}}\). Then, by the description of \(\Sigma_{E^{\prime}}\) in Proposition 3.3.4, for every \(p\in E\) there exists a line bundle \(L_{2}\) of degree \(2\) on \(E^{\prime}\) such that the image of \(H^{0}(\mathcal{V}|_{p})^{*}\to H^{0}(E,\mathcal{V})^{*}=V\) is contained in \(H^{0}(E^{\prime},L_{2})^{\perp}\subset H^{0}(E^{\prime},\mathcal{V}^{\prime}) ^{*}=V\). In other words, each line \(\mathbb{P}L_{p}\subset\mathbb{P}V\), for \(p\in E\), is contained in the projective plane \(\mathbb{P}H^{0}(E^{\prime},L_{2})^{\perp}\subset\mathbb{P}V\). This plane intersects the zero locus of \(\Pi_{E^{\prime}}\) in a smooth cubic (see Lemma 3.2.1(iii)), hence, for a generic point \(v\in L_{p}\) the rank of \(\Pi_{E^{\prime}}|_{v}\) is \(2\).
Hence, \(\Pi_{E^{\prime}}|_{v}=w_{1}\wedge w_{2}\), where \(\langle w_{1},w_{2}\rangle\) is the tangent plane to the leaf of \(\Pi_{E^{\prime}}\) (i.e., to the projective plane \(\mathbb{P}H^{0}(E^{\prime},L_{2})^{\perp}\)). Furthermore, the plane \(\langle w_{1},w_{2}\rangle\) contains the tangent line to \(\mathbb{P}L_{p}\) at \(v\). In the notation of Lemma 3.4.1(i), the latter tangent line is spanned by \(\partial_{h_{1}}-\partial_{h_{2}}\). So, \(\Pi_{E^{\prime}}|_{v}=(\partial_{h_{1}}-\partial_{h_{2}})\wedge w\) for some tangent vector \(w\). But we also know by Lemma 3.4.1(iii) that \(\Pi_{E^{\prime}}|_{v}\) is a linear combination of \((2\partial_{h_{1}}-\partial_{h_{2}})\wedge\partial_{e_{1}}\), \((2\partial_{h_{2}}-\partial_{h_{1}})\wedge\partial_{e_{2}}\) and \(\partial_{h_{1}}\wedge\partial_{h_{2}}\). This is possible only when \(w\in\langle\partial_{h_{1}},\partial_{h_{2}}\rangle\), which is the tangent plane to the surface \(S_{E}\) (see Lemma 3.4.1(ii)).
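The last implication is elementary linear algebra: writing \(w=a\,\partial_{h_{1}}+b\,\partial_{h_{2}}+c\,\partial_{e_{1}}+d\,\partial_{e_{2}}\), one gets
\[(\partial_{h_{1}}-\partial_{h_{2}})\wedge w=(a+b)\,\partial_{h_{1}}\wedge\partial_{h_{2}}+c\,(\partial_{h_{1}}-\partial_{h_{2}})\wedge\partial_{e_{1}}+d\,(\partial_{h_{1}}-\partial_{h_{2}})\wedge\partial_{e_{2}},\]
and membership in the span (3.4) forces \(c=d=0\), i.e., \(w\in\langle\partial_{h_{1}},\partial_{h_{2}}\rangle\).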
This implies that \(S_{E}\) is tangent to the corresponding projective plane \(\mathbb{P}H^{0}(E^{\prime},L_{2})^{\perp}\subset\mathcal{D}_{E^{\prime}}\). Assume first that \(S_{E}\not\subset S_{E^{\prime}}\). Then we get that the regular morphism
\[S_{E}\setminus S_{E^{\prime}}\rightarrow\mathcal{D}_{E^{\prime}}\setminus S_{ E^{\prime}}\rightarrow\operatorname{Pic}^{2}(E^{\prime})\]
(see Corollary 3.2.5) has zero tangent map at every point. Hence, \(S_{E}\) is contained in a projective plane, which is a contradiction (since the map \(\mathbb{P}(\mathcal{V}^{\vee})\rightarrow\mathbb{P}H^{0}(\mathcal{V})^{*}= \mathbb{P}V\) induces an isomorphism on sections of \(\mathcal{O}(1)\)).
Finally, if \(S_{E}\subset S_{E^{\prime}}\) then \(E=E^{\prime}\subset G(2,V)\), and we get a contradiction with Lemma 3.3.3.
Proof of Theorem B.: (i) We can assume that \(E\neq E^{\prime}\). We will check that for a generic point \(p\in E\), one has
\[T_{p}E\subset D_{E^{\prime},p}\subset T_{p}G(2,V). \tag{3.7}\]
By Lemma 3.4.2, for a generic \(p\in E\), we have \(p\not\in\Sigma_{E^{\prime}}\), hence the line \(\mathbb{P}L_{p}\) is not contained in the degeneracy locus \(\mathcal{D}_{E^{\prime}}\) of \(\Pi_{E^{\prime}}\). Let us pick a generic point \(v\) of \(L_{p}\), so that the rank of \(\Pi_{E^{\prime},v}\) is \(4\). We want to study the normal projection
\[\Pi_{E^{\prime},v}^{norm}\in\wedge^{2}(T_{v}\mathbb{P}V/T_{v}\mathbb{P}L_{p}) \simeq\wedge^{2}(V/L_{p})\]
(see Lemma 3.3.1).
Recall that in the notation of Lemma 3.4.1, the tangent space to \(\mathbb{P}L_{p}\) at \(v\) is spanned by \(\partial_{h_{1}}-\partial_{h_{2}}\). Hence, the inclusion (3.4) implies that \(\Pi_{E^{\prime},v}^{norm}\) is proportional to a bivector of the form \(\partial_{h_{1}}\wedge\xi\). By Lemma 3.4.1(ii), we can reformulate this as
\[\Pi_{E^{\prime},v}^{norm}\in\pi_{v}(T_{p}E)\wedge V/L_{p}\subset\wedge^{2}(V/ L_{p}).\]
By Lemma 3.3.1(ii), the subspace \(\pi_{v}(D_{E^{\prime},p})\subset V/L_{p}\) consists of \(x\) such that \(x\wedge\Pi_{E^{\prime},v}^{norm}=0\). Thus, we deduce the inclusion
\[\pi_{v}(T_{p}E)\subset\pi_{v}(D_{E^{\prime},p})\subset V/L_{p}\]
for generic \(v\in L_{p}\).
In other words, the section \(s\) generating
\[T_{p}E\subset T_{L_{p}}G(2,V)\simeq\operatorname{Hom}(L_{p},V/L_{p})\simeq H^ {0}(\mathbb{P}L_{p},V/L_{p}\otimes\mathcal{O}(1))\]
has the property that for generic point \(v\in\mathbb{P}L_{p}\) the evaluation \(s(v)\) belongs to the image of the evaluation at \(v\) of the embedding \(D_{E^{\prime},p}\otimes\mathcal{O}\to V/L_{p}\otimes\mathcal{O}(1)\). Since by Lemma 3.3.5 the latter is an embedding of a subbundle, this implies that in fact \(s\in D_{E^{\prime},p}\) as claimed.
This proves the inclusion (3.7) for a generic \(p\in E\). But this implies that the composed map
\[E\setminus E^{\prime}\to G(2,V)\setminus E^{\prime}\rightarrow\mathbb{P}^{4}\]
has zero derivative everywhere, so it is constant. Hence, \(E\) is contained in the linear section \(\mathbb{P}U\cap G(2,V)\) for some \(6\)-dimensional subspace \(U\subset\bigwedge^{2}V\) containing \(W^{\prime}\). Hence, \(\dim(W+W^{\prime})\leq 6\).
Conversely, assume \(W\) and \(W^{\prime}\) are such that \(U=W+W^{\prime}\) is \(6\)-dimensional. Then we claim that \([\Pi_{W},\Pi_{W^{\prime}}]=0\). Indeed, since the space of such pairs \((W,W^{\prime})\) is irreducible, it is enough to consider the case when the surface \(S=\mathbb{P}U\cap G(2,V)\) is smooth. Then \(E_{W}\) and \(E_{W^{\prime}}\) are anticanonical divisors on \(S\), and we can apply [3, Thm. 4.4] to the bundle \(\mathcal{V}_{S}:=\mathcal{U}^{\vee}|_{S}\) on \(S\). The fact that \((\mathcal{O}_{S},\mathcal{V}_{S})\) is an exceptional pair is easily checked using Koszul resolutions, as in Sec. 2.2.
(ii) It is well known that if a collection of \(k\)-dimensional subspaces in a vector space has the property that any two subspaces intersect in a \((k-1)\)-dimensional space, then either all of them are contained in a fixed \((k+1)\)-dimensional subspace, or they contain a fixed \((k-1)\)-dimensional subspace. The statement immediately follows from (i) using this fact for \(k=5\) and the collection \((W_{i})\).
Proof of Corollary C.: By Theorem B(ii), the brackets \((\Pi_{W_{i}})\) are pairwise compatible if and only if either there exists a \(6\)-dimensional subspace \(U\subset\bigwedge^{2}\!V\) containing all \(W_{i}\), or there is a \(4\)-dimensional subspace \(K\subset\bigwedge^{2}\!V\) contained in all \(W_{i}\). In the former case the corresponding tensors \(\bigwedge^{5}\!W_{i}\) are all contained in the \(6\)-dimensional subspace
\[\bigwedge^{5}\!U\subset\bigwedge^{5}(\bigwedge^{2}\!V).\]
In the latter case all the tensors \(\bigwedge^{5}\!W_{i}\) are contained in the \(6\)-dimensional subspace
\[\bigwedge^{4}\!K\otimes(\bigwedge^{2}\!V/K)\simeq(\bigwedge^{4}\!K)\wedge( \bigwedge^{2}\!V)\subset\bigwedge^{5}(\bigwedge^{2}\!V).\]
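In both cases the ambient subspace is indeed \(6\)-dimensional:
\[\dim\bigwedge^{5}\!U=\binom{6}{5}=6,\qquad\dim\bigl((\bigwedge^{4}\!K)\wedge(\bigwedge^{2}\!V)\bigr)=\dim(\bigwedge^{4}\!K)\cdot\dim(\bigwedge^{2}\!V/K)=1\cdot 6=6.\]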
Conversely, by [3, Thm. 4.4], a smooth linear section \(S=\mathbb{P}U\cap G(2,V)\), where \(\dim U=6\), gives rise to a \(6\)-dimensional subspace of compatible Poisson brackets coming from anticanonical divisors of \(S\). We just need to show that the corresponding linear map from \(H^{0}(S,\omega_{S}^{-1})\) to the space of Poisson bivectors on \(\mathbb{P}(V)\) is injective. Suppose there exists an anticanonical divisor \(E_{0}\subset S\) such that the corresponding Poisson bivector is zero. Pick a generic anticanonical divisor \(E\). Then all elliptic curves in the pencil \(E+tE_{0}\) map to the same Poisson bivector. But this is impossible since we can recover \(E\subset G(2,V)\) from the corresponding Poisson bracket \(\Pi_{E}\) on \(\mathbb{P}(V)\), as the set of all lines lying in the zero locus \(S_{E}\) (see Sec. 3.2).
|
2304.04589 | Hyperspectral Image Super-Resolution via Dual-domain Network Based on
Hybrid Convolution | Since the amount of incident energy is limited, it is difficult to directly
acquire hyperspectral images (HSI) with high spatial resolution. Considering
the high dimensionality and correlation of HSI, super-resolution (SR) of HSI
remains a challenge in the absence of auxiliary high-resolution images.
Furthermore, it is very important to extract the spatial features effectively
and make full use of the spectral information. This paper proposes a novel HSI
super-resolution algorithm, termed dual-domain network based on hybrid
convolution (SRDNet). Specifically, a dual-domain network is designed to fully
exploit the spatial-spectral and frequency information among the hyper-spectral
data. To capture inter-spectral self-similarity, a self-attention learning
mechanism (HSL) is devised in the spatial domain. Meanwhile, a pyramid
structure is applied to increase the receptive field of attention, which
further reinforces the feature representation ability of the network. Moreover,
to further improve the perceptual quality of HSI, a frequency loss (HFL) is
introduced to optimize the model in the frequency domain. Its dynamic weighting
mechanism drives the network to gradually refine the generated frequencies and
to avoid the excessive smoothing caused by spatial loss. Finally, in order to
fully capture the mapping relationship between high-resolution space and
low-resolution space, a hybrid module of 2D and 3D units with a progressive
upsampling strategy is utilized in our method. Experiments on a widely used
benchmark dataset illustrate that the proposed SRDNet method enhances the
texture information of HSI and is superior to state-of-the-art methods. | Tingting Liu, Yuan Liu, Chuncheng Zhang, Yuan Liyin, Xiubao Sui, Qian Chen | 2023-04-10T13:51:28Z | http://arxiv.org/abs/2304.04589v9 | # Hyperspectral Image Super-Resolution via Dual-domain Network Based on Hybrid Convolution
###### Abstract
Hyperspectral images (HSIs) with high spatial resolution are hard to obtain directly because of sensor limitations. Deep learning can provide an end-to-end reconstruction solution. Nevertheless, existing methods have two main drawbacks. First, networks with self-attention mechanisms often require a trade-off between internal resolution, model performance and complexity, resulting in the loss of fine-grained, high-resolution features. Second, there are visual discrepancies between the reconstructed hyperspectral image (HSI) and the ground truth because they focus on spatial-spectral domain learning. In this paper, a novel super-resolution algorithm for HSIs, called SRDNet, which uses a dual-domain network with hybrid convolution and progressive upsampling to exploit both spatial-spectral and frequency information of the hyperspectral data, is proposed. In this approach, we design a self-attentive pyramid structure (HSL) to capture hyperspectral self-similarity in the spatial domain, thereby increasing the receptive range of attention and improving the feature representation of the network. Additionally, we introduce a frequency loss mechanism (HFL) with dynamic weighting to optimize the model in the frequency domain and improve the perceptual quality of the HSI, avoiding the over-smoothing caused by spatial loss only. Experimental results on three benchmark datasets show that SRDNet can effectively improve the texture information of the HSI and outperform state-of-the-art methods for HSI reconstruction.
hyperspectral image, super-resolution, dual-domain network, self-attention mechanism, frequency loss.
## I Introduction
Hyperspectral imaging technology can effectively distinguish different targets or capture the complex spectral details of a scene by analyzing and comparing the spectral features of the images. As a result, hyperspectral imaging (HSI) has many applications in various fields [1-3]. However, a common challenge is the trade-off between spatial and spectral resolution in hyperspectral imagers. To achieve higher spectral resolution, the spatial resolution is often compromised, resulting in low-quality HSI [4, 5]. This low spatial resolution leads to the mixing of end member spectra, which affects the detection performance of the HSI.
Image super-resolution (SR) is the process of recovering high-resolution (HR) images from low-resolution (LR) images, overcoming the resolution constraints of imaging systems [6-8]. Many novel CNN architectures have been developed to improve SR performance, using the powerful representation capabilities of CNNs. Unlike RGB images, HSI SR needs to take the spectral information into account during reconstruction (the correlation is illustrated in Fig. 1). Although several CNN-based single image super-resolution (SISR) methods have been developed for RGB images, these algorithms do not effectively utilize the spectral information inherent in HSIs, leading to poor performance [9-11]. HSIs possess a spectral dimension, which can be effectively handled by SISR methods employing 3D convolutions [12-14]. Unfortunately, the use of standard 3D convolution kernels in these methods results in a large number of network parameters.
Fig. 1: On the left is a comparison of the effects of several reconstruction algorithms, and on the far right is the spectral correlation (Harvard dataset \(\times\) 4). Correlation values closer to 0 indicate that the two bands are more independent of each other, while correlation values closer to 1 indicate that the two bands are more linearly correlated.
To address this limitation, subsequent studies have utilized separable convolution kernels, such as \(1\times k\times k\) or \(k\times 1\times 1\), instead of \(k\times k\times k\) [13, 15-18]. This approach significantly reduces the number of model parameters and enables the design of deeper networks. In addition, some researchers have employed 2D or 3D separable convolutional kernels to extract spatial-spectral information [9, 19, 12, 20]. Nonetheless, these methods tend to overly emphasize low-frequency pixels in the spatial domain, leading to the blurring of synthesized images.
To address the aforementioned challenges, it is essential to develop a model that considers both spatial-spectral characteristics and frequency characteristics. In light of this, we propose a novel approach called SRDNet for HSI super-resolution. This manuscript makes the following key contributions:
* Considering the spectral dimensionality of the image, it is challenging to capture the complete spatial mapping relationship between low and high resolutions through simple upsampling. To address this, a hybrid convolution of 2D and 3D units with progressive upsampling is designed.
* In the 2D unit, the IGM module is utilized to attain refined and diversified feature expressions, addressing the challenge of capturing fine details. The IGM module comprises a symmetric group convolution block and a complementary convolution block, which enhance the internal and external relationships among individual channels in a parallel manner, facilitating the extraction of various types of low-frequency structural information.
* To better use the spectral prior to enhance the learning of spatially global information and spectral coherence, a dual-domain network for SR is designed. Specifically, in the spatial domain, the self-attention module (HSL) with a pyramidal structure not only increases the attention field of view, but also models the spectral features with the global spatial-spectral context, allowing feature interactions to contribute differently to the reconstruction. In the frequency domain, the hyperspectral frequency loss (HFL) is applied to optimize the model after the Fourier transform and to improve image quality through a dynamic weighting mechanism.
The article is organized as follows: Section 2 provides an overview of related work on HSI SR. In Section 3, we describe the proposed SRDNet method, including the network structure and the dual-domain mechanism. Experimental parameter settings and analysis are presented in Section 4. Lastly, Section 5 concludes the paper.
## II Related Work
Some of the works most relevant to our approach are briefly reviewed, including CNN-based methods, attentional mechanisms, and the use of frequency domain analysis in SISR.
### _Single HSI Super-resolution_
Previous studies mainly formulated HSI SR as a constrained optimization problem, where priors were used to limit the solution space [21]. For instance, Wang et al. [22] modeled three properties of the HSI: the non-local similarity in the spatial domain, the global correlation in the spectral domain, and the smooth structure of the spatial-spectral domain [23]. Huang et al. [24] used sparse and low-rank properties to reconstruct HSI with spatial super-resolution. Recently, CNNs have become more popular than traditional optimization-based solutions for SR, due to their strong feature extraction and representation abilities. Xie et al. [25] and Yuan et al. [26] employed DCNN networks with non-negative matrix decomposition to preserve the spectral properties of the intermediate results for HSI super-resolution. However, DCNN networks have difficulty in fully exploiting the spatial and spectral properties of HSI, due to the large number of spectral dimensions and the lack of enough training samples. Li et al. [9] proposed a grouping strategy, called the GDRRN method with a recursive module, to better capture the correlation between spectral bands. The method combines a spectral angle mapper (SAM) with mean squared error (MSE). However, its loss function affects the enhancement of spatial resolution. Mei et al. [12] developed a neural network, named 3D-FCNN, which used 3D convolution to jointly explore spatial texture features and spectral correlations. Although it reduces the distortion of the spectrum, it is computationally expensive at a high upsampling factor. Hu et al. [27] integrated a collaborative non-negative matrix factorization (CNMF) strategy with the outputs of a deep feature extraction network for learning. The method works well, but it considers spatial and spectral features separately and relies too much on manual intervention. Zheng et al. [11] designed a separable spectral convolution to obtain information on each spectral band. However, this generates many redundant features with hundreds of spectral bands, making it hard to identify the most representative ones.
EUNet [28], proposed by Liu et al., performs well with a small number of parameters, but it is not the optimal choice. This method fails to fully utilize spectral information and instead focuses excessively on extracting spatial features, resulting in poor SAM scores. Li et al. [6] proposed a network with an attention mechanism, called a 3D generative adversarial network, to mitigate the issue of inter-spectral distortion. However, regular 3D convolution often has high storage complexity and computation cost. To better extract spatial and spectral features, Li et al. [14] designed a hybrid network by combining 2D convolution and 3D separable convolution. But it is computationally expensive, and the network structure causes information redundancy.
These HSI SR methods have achieved remarkable results, but they have some limitations. On one hand, the existing methods focus on learning the spatial-spectral information of HSI from the spatial domain. We address the challenge of synthesizing high frequencies by applying a separate module to the SR results. This module adaptively recovers high and hard frequencies, resulting in a higher resolution of the internal features. On the other hand, some algorithms do not selectively concentrate on important features and lack a global context to model spectral dependencies. Some features in certain locations and channels are more useful for SR reconstruction. To utilize the features more effectively, we devise a novel spatial-spectral self-attention mechanism in our method, which can acquire more detailed information and suppress other irrelevant information.
### _Attention Mechanism_
The attention mechanism is a signal processing mechanism that scans the image and allocates more attention to the focus of attention. It attends to the details of the target and suppresses other irrelevant information [29]. Attention mechanisms can be used in SR tasks to concentrate on prominent information, reduce image noise, and improve the quality of reconstructed images. Some SR methods use attention mechanisms from other vision tasks [30, 31] to focus more on spectral characteristics. Dai et al. introduced second-order channel attention to capture long-range spatial dependencies, addressing the differences between SR and other tasks. Moreover, attention mechanisms have been applied to HSI SR because of their powerful representation abilities [6, 32]. But most existing methods sacrifice the internal resolution of attention to speed up computations, which leads to degraded algorithm performance. Some pixel-level attention mechanisms designed for high-level tasks [33-35] further improve the representation abilities of the model. It is therefore important to explore a pixel-level attention mechanism that can increase the effectiveness of HSI reconstruction.
### _Image Frequency Domain Analysis_
Spectrum analysis can decompose a complex signal into simpler signals. F-Principle [36] showed that deep learning methods tend to focus on low frequency to reconstruct targets, which causes differences in the frequency domain. In recent years, many CNN-based methods have been proposed to analyze the frequency domain. A coordinate-based MLP with Fourier transform [37] was used to recover high frequencies missing in single image reconstruction. Recent works have shown that frequency analysis can be integrated with SR [38, 39]. For example, some works have tried to reconstruct better visual images by minimizing the frequency domain difference between input and output during training [40]. In HSI SR, where the model is more likely to concentrate on low-frequency pixels in the spatial domain, the composite image becomes more blurred. Therefore, exploring an adaptive constraint based on intrinsic frequency is essential for reconstructing fine images.
## III The Proposed Method
### _Overall Architecture_
In this subsection, we present the detailed overall architecture of SRDNet and show its algorithm model diagram in Fig. 2. Our method consists of four main components: the shallow feature extraction, the deep spatial-spectral feature extraction, the upsampling, and the reconstruction part. For HSI SR, we denote the input LR image and the reconstructed SR image as \(I_{LR}\in\mathbb{R}^{C\times H\times W}\) and \(I_{SR}\in\mathbb{R}^{C\times rH\times rW}\), respectively. W and H are the width and height of the HSI, and C is the number of channels in the HSI. \(r\) is the SR scale factor. Our aim is to reconstruct the SR image \(I_{SR}\) end-to-end from the LR image \(I_{LR}\) with the network (SRDNet), as shown in Eq. (1).
\[\begin{array}{c}I_{SR}=H_{Net}\left(I_{LR}\right)\end{array} \tag{1}\]
where \(H_{Net}(.)\) denotes the function of the proposed super-resolution network. The flow of the overall network architecture is as follows.
Firstly, the convolutional layer and the first residual block are defined as \(F_{conv}(.)\) and \(F_{Res}^{(1)}(.)\), respectively, through which the shallow features are extracted. The corresponding feature \(x_{0}\) is defined as:
\[x_{0}=F_{Res}^{(1)}\big(F_{conv}(I_{LR})\big) \tag{2}\]
Secondly, the parallel structure 2D/3D module (PAM) and the second residual block \(F_{Res}^{(2)}(.)\) are adopted to extract the deep features. The corresponding feature \(x_{t}\) is defined as:
\[x_{t}=F_{Res}^{(2)}\big(F_{PAM}(x_{0})+F_{L\_up}(x_{0}\uparrow,r)\big) \tag{3}\]
Finally, there is the upsampling and reconstruction part, which upscales the acquired features to the target size; here the upsampling module is introduced to generate the spatial-spectral feature map of the target. To alleviate the burden of the final SR reconstruction, a progressive upsampling strategy is applied in this paper, which is divided into local upsampling and global upsampling. \(F_{L\_up}(x_{0}\uparrow,r)\) denotes the local bicubic operation. The corresponding feature \(x_{rec}\) is defined as:
\[x_{rec}=F_{conv}\big(F_{G\_up}(x_{t},r)\big) \tag{4}\]
where \(x_{rec}\) is the feature reconstructed by the convolutional layer, and \(F_{G\_up}(\cdot)\) denotes the global upsampling operation, in which a transposed 2D convolutional layer is applied to upsample the feature map by the scale factor \(r\).
\[I_{SR}=x_{rec}+F_{up}\big{(}I_{LR}\uparrow,r\big{)} \tag{5}\]
where \(F_{up}(I_{LR}\uparrow,r)\) represents the upsampling of the input \(I_{LR}\) by bicubic interpolation. Equation (5) is equivalent to Eq. (1).
The process of the overall network architecture has been described above. Next, we will introduce the network sub-modules in both the spatial-spectral domain and the frequency domain, respectively. The parallel architecture 2D/3D module will be elaborated from the spatial domain. In addition, the hyperspectral frequency loss (HFL) will be described in detail from the frequency domain.
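To make the data flow of Eqs. (1)-(5) concrete, the following PyTorch sketch assembles the components described above. It is a minimal illustration rather than the authors' released implementation: the module names, the channel width, and the kernel sizes are our own placeholder assumptions, and the local-upsampling skip inside Eq. (3) is omitted for brevity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def res_block(c):
    # two conv layers; the residual skip is added in forward()
    return nn.Sequential(nn.Conv2d(c, c, 3, padding=1), nn.ReLU(True),
                         nn.Conv2d(c, c, 3, padding=1))

class SRDNetSketch(nn.Module):
    """Minimal sketch of the SRDNet data flow in Eqs. (1)-(5).
    `pam` stands in for the parallel 2D/3D module F_PAM; any module
    with matching feature shapes could be plugged in."""
    def __init__(self, bands=31, feats=64, scale=4, pam=None):
        super().__init__()
        self.scale = scale
        self.head = nn.Conv2d(bands, feats, 3, padding=1)   # F_conv in Eq. (2)
        self.res1 = res_block(feats)                        # F_Res^(1) in Eq. (2)
        self.pam = pam if pam is not None else nn.Identity()
        self.res2 = res_block(feats)                        # F_Res^(2) in Eq. (3)
        self.g_up = nn.ConvTranspose2d(feats, feats, scale, stride=scale)  # F_G_up
        self.tail = nn.Conv2d(feats, bands, 3, padding=1)   # reconstruction conv, Eq. (4)

    def forward(self, lr):                                  # lr: (B, C, H, W)
        shallow = self.head(lr)
        x0 = self.res1(shallow) + shallow                   # Eq. (2), with residual skip
        xt = self.res2(self.pam(x0)) + x0                   # Eq. (3), local skip omitted
        x_rec = self.tail(self.g_up(xt))                    # Eq. (4)
        up = F.interpolate(lr, scale_factor=self.scale,
                           mode='bicubic', align_corners=False)
        return x_rec + up                                   # Eq. (5): add bicubic skip
```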
### _Parallel architecture 2D/3D module (PAM)_
After shallow feature extraction, the information flow is divided into two branches, of which the first is a 3D convolutional branching sub-network and the second is a 2D convolutional branching sub-network.
**1) 3D Unit Branch Network**
As mentioned in the second subsection, HSIs have an extra spectral dimension compared to RGB images, which allows the use of 3D convolution to gain information beyond the spatial dimensions. This paper makes use of separable 3D convolution (the \(k\times k\times k\) filter kernel is replaced with \(k\times 1\times 1\) and \(1\times k\times k\)) instead of conventional 3D convolution. Since the size of the input HSI is C \(\times\) W \(\times\) H, to be able to use 3D convolution the \(I_{LR}\) is reshaped into four dimensions (1\(\times\)C\(\times\)W\(\times\)H).
\[\mathcal{Y}_{0}^{3D}=f_{conv}^{3D}\big(f_{Reshape}\big(x_{0}+\mathcal{Y}_{IGM}^{2D}\big)\big) \tag{6}\]
where \(\mathcal{Y}_{IGM}^{2D}\) represents the output of the IGM unit, and \(f_{Reshape}(\cdot)\) denotes the dimension expansion of the feature map extracted from the first residual group block. The 3D unit is shown in Fig. 3. The kernel is the same for each 3D unit, and in the case of the first 3D unit, the output can be represented as follows.
\[\mathcal{Y}_{1}^{3D}=\sigma\Big(f_{3\times 1\times 1}^{3D}\Big(\sigma\big(f_{1\times 3\times 3}^{3D}\big(\mathcal{Y}_{0}^{3D}\big)\big)\Big)\Big) \tag{7}\]
The output of the previous 3D unit is employed as input to the next 3D unit. Assuming that there are \(N\) 3D units in the model, the output is shown as:
Fig. 2: Overall architecture of the SRDNet
\[\mathcal{Y}_{N}^{3D}=f_{N}^{3D}\big(f_{N-1}^{3D}\big(\cdots f_{2}^{3D}\big(\mathcal{Y}_{1}^{3D}\big)\cdots\big)\big) \tag{8}\]
where \(f_{N}^{3D}(\cdot)\) is the operation of the \(N\)-th 3D unit (to keep the network simple and effective, \(N=3\) here).
Once the hierarchical features have been obtained from \(N\) 3D units, their outputs are combined or cascaded together to enable the network to capture more valuable information. Subsequently, local upsampling is carried out, and the corresponding feature operations are as follows.
\[\mathcal{Y}_{L\_up}^{3D}=f_{L\_up}^{3D}\Big(\sigma\Big(f_{conv}^{3D}\Big(f_{Concat}^{3D}\big[\mathcal{Y}_{1}^{3D},\mathcal{Y}_{2}^{3D},\cdots,\mathcal{Y}_{N}^{3D}\big]\Big)\Big)\Big) \tag{9}\]
where \(f_{L\_up}^{3D}(\cdot)\) denotes the local upsampling operation. Finally, the obtained feature \(\mathcal{Y}_{L\_up}^{3D}\) is reshaped from four dimensions (1\(\times\)C\(\times\)W\(\times\)H) back to the three-dimensional image size (C\(\times\)W\(\times\)H), and the output of the 3D unit branch is obtained.
\[y^{3D}=f_{Reshape}\big(\mathcal{Y}_{L\_up}^{3D}\big) \tag{10}\]
where \(y^{3D}\) is the feature with its extra dimension squeezed.
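The separable 3D unit of Eq. (7) can be sketched in a few lines. This is an illustrative reading of the branch, assuming \(k=3\) and 32 internal channels, both of which are our assumptions rather than values stated in the text.

```python
import torch
import torch.nn as nn

class Unit3D(nn.Module):
    """One separable 3D unit (Eq. (7)): a 1xkxk spatial convolution
    followed by a kx1x1 spectral convolution, each with ReLU."""
    def __init__(self, ch=32, k=3):
        super().__init__()
        p = k // 2
        self.spatial = nn.Conv3d(ch, ch, (1, k, k), padding=(0, p, p))
        self.spectral = nn.Conv3d(ch, ch, (k, 1, 1), padding=(p, 0, 0))
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):                      # x: (B, ch, C_bands, H, W)
        return self.act(self.spectral(self.act(self.spatial(x))))

# The 2D feature map (B, C, H, W) is reshaped to 5D before the 3D branch,
# as in Eq. (6): an extra unit channel dimension is inserted.
x2d = torch.randn(1, 31, 32, 32)               # 31 spectral bands (assumed)
x3d = x2d.unsqueeze(1)                         # (1, 1, 31, 32, 32)
units = nn.Sequential(nn.Conv3d(1, 32, 1), Unit3D(), Unit3D(), Unit3D())
print(units(x3d).shape)                        # torch.Size([1, 32, 31, 32, 32])
```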
**2) 2D Unit Branch Network**
In contrast to the 3D unit operation, the 2D unit branching network is used to extract deeper features through the IGM unit. Its structure is shown in Fig. 4.
a) Isomeric Group Module (IGM)
Previous HSI SR work directly merges the layered features of all channels to enhance image resolution performance, which may increase redundant features and the convergence time of the algorithm. To solve this issue, we use the IGM module, which lets various channels interact to obtain deeper low-frequency information, strengthening the relationship between different channels and improving the performance of the SR algorithm. The internal and external relationships of the various channels are improved in a parallel manner, acquiring different types of more representative structural information.
Symmetric Group Convolution Block: This block consists of two subnetworks with three layers each, which extract the main channel information from the image. The features obtained by the subnetworks are combined by a cascading operation to enhance their internal coherence. Each layer of the subnetworks follows the Conv+ReLU structure, with 32 input and output channels and 3\(\times\)3 convolution kernels. The output of the symmetric group block is obtained by cascading the outputs of the two subnetworks, resulting in 64 output channels. The two subnetworks have identical structures. For example, the output of the first subnetwork is shown as follows:
\[x_{s}^{2D}=\sigma\big(f_{conv}^{3\times 3}(x_{s})\big) \tag{11}\]

\[y_{1b}^{2D}=\sigma\big(f_{conv}^{3\times 3}(x_{0})\big) \tag{12}\]

\[y_{SGB\text{-}1}^{2D}=f_{3b}^{2D}\big(f_{2b}^{2D}\big(y_{1b}^{2D}\big)\big) \tag{13}\]

where \(f_{b}^{2D}(\cdot)\) denotes the convolution and ReLU operation of each layer, for a total of 3 layers. The second branch subnetwork performs the same operation as the first, and its output feature is represented as follows:

\[y_{SGB\text{-}2}^{2D}=f_{3b}^{2D}\big(f_{2b}^{2D}\big(y_{1b}^{2D}\big)\big) \tag{14}\]

The output feature of the symmetric group convolutional block is obtained by cascading the output features of the two subnetworks:

\[y_{SGB}^{2D}=f_{Concat}^{2D}\big[y_{SGB\text{-}1}^{2D},\,y_{SGB\text{-}2}^{2D}\big] \tag{15}\]
Complementary Convolution Block: This block consists of a single three-layer subnetwork that captures the total information of all channels in the image. The complementary convolution block improves the external correlation and robustness of the algorithm, and is a valuable complement to the symmetric group convolution block. Each layer of the complementary convolution block has the same structure as in the symmetric group convolution block. The difference is that each convolution layer has 64 input and output channels, which allows for more fine-grained feature extraction. The output of the complementary convolution block is shown as:
\[y_{CB}^{2D}=f_{3b}^{2D}\big(f_{2b}^{2D}\big(y_{1b}^{2D}\big)\big) \tag{16}\]
Fig. 3: Architecture of the 3D-unit

Fig. 4: IGM structure diagram

The outputs of the symmetric group convolution block and the complementary convolution block are spatially superimposed and integrated by a convolution fusion operation, which produces the output of the IGM module. The output characteristic of this module is expressed as:

\[y_{IGM}^{2D}=f_{conv}\big(y_{CB}^{2D}+y_{SGB}^{2D}\big)\in\mathbb{R}^{C\times H\times W} \tag{17}\]
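A compact sketch of the IGM module follows. The text does not fully specify how the 64-channel input is divided between the two 32-channel subnetworks of the symmetric group block, so the channel split below is an assumption; the rest mirrors Eqs. (11)-(17).

```python
import torch
import torch.nn as nn

def conv_relu(cin, cout):
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(True))

class IGM(nn.Module):
    """Sketch of the Isomeric Group Module (Eqs. (11)-(17)).
    Two 32-channel subnetworks form the symmetric group block; a
    64-channel subnetwork forms the complementary block."""
    def __init__(self, feats=64):
        super().__init__()
        half = feats // 2
        self.branch_a = nn.Sequential(*[conv_relu(half, half) for _ in range(3)])
        self.branch_b = nn.Sequential(*[conv_relu(half, half) for _ in range(3)])
        self.comp = nn.Sequential(*[conv_relu(feats, feats) for _ in range(3)])
        self.fuse = nn.Conv2d(feats, feats, 3, padding=1)      # f_conv in Eq. (17)

    def forward(self, x):                                      # x: (B, 64, H, W)
        xa, xb = x.chunk(2, dim=1)                             # assumed channel split
        sgb = torch.cat([self.branch_a(xa), self.branch_b(xb)], dim=1)  # Eq. (15)
        cb = self.comp(x)                                      # Eq. (16)
        return self.fuse(sgb + cb)                             # Eq. (17)
```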
b) HSL module
To begin with, the indiscriminate treatment of all types of features by the convolution kernel limits the reconstruction ability of the network, since it is difficult to distinguish useful high-frequency information from rich low-frequency features [41]. Moreover, the receptive field of each convolutional layer is only local, so the output is related to only part of the input and lacks sufficient contextual information. Furthermore, the spectral features are of great importance for the HSI, so an adequate exploration of spectral correlations cannot be neglected. Based on the above three considerations, we incorporate attention mechanisms along the spatial and spectral dimensions, respectively. This allows the exploration of the long-range dependence of pixel positions and the correlation of spectral bands. The specific structure is illustrated in Fig. 5.
Spectral attention: The convolutional layer is utilized to linearly map the input feature to obtain the _query_ vector and _key_ vector. And the full space _query_ vector and half channel _key_ vector are obtained by reshape operation. Then, the _query_ vector is remapped to the _key_ vector through the matrix multiplication. And the auto-correlation of the spectral dimension is calculated to obtain the attention coefficient, which is recorded as the _value_ vector. Its channel is maintained as C/2, which avoids excessive computational costs.
\[y_{Spectral\text{-}q}^{2D}=f_{Softmax}\big(f_{Reshape}\big(f_{conv}\big(y_{IGM}^{2D}+y^{3D}\big)\big)\big)\in\mathbb{R}^{1\times HW} \tag{18}\]

\[y_{Spectral\text{-}k}^{2D}=f_{Reshape}\big(f_{conv}\big(y_{IGM}^{2D}+y^{3D}\big)\big)\in\mathbb{R}^{C/2\times HW} \tag{19}\]

\[y_{Spectral\text{-}v}^{2D}=y_{Spectral\text{-}k}^{2D}\otimes\big(y_{Spectral\text{-}q}^{2D}\big)^{T}\in\mathbb{R}^{C/2\times 1\times 1} \tag{20}\]

where \(\otimes\) denotes the matrix multiplication operation and \(f_{Softmax}(\cdot)\) represents the softmax function. As shown in Fig. 5, after the value vector is activated by a convolution and a sigmoid, the weight coefficient of each channel can be obtained:

\[y_{Spectral\text{-}map}^{2D}=f_{Sigmoid}\big(f_{conv}\big(y_{Spectral\text{-}v}^{2D}\big)\big)\in\mathbb{R}^{C\times 1\times 1} \tag{21}\]

where \(f_{Sigmoid}(\cdot)\) denotes the sigmoid activation function and \(y_{Spectral\text{-}map}^{2D}\) denotes the channel weight coefficients.

Finally, \(y_{IGM}^{2D}\) is multiplied by the weight factor of each channel to obtain the calibrated signal, so that the network focuses on the channels with greater weight:

\[y_{Spectral}^{2D}=y_{IGM}^{2D}\odot y_{Spectral\text{-}map}^{2D}\in\mathbb{R}^{C\times H\times W} \tag{22}\]
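Reading the spectral branch of Eqs. (18)-(22) as a channel-attention variant, a possible PyTorch sketch is given below. Because the printed equations are partly garbled, the exact projections are our interpretation: a softmax-normalised full-space query aggregates a half-channel key into per-channel statistics, which a convolution and sigmoid turn into C channel weights.

```python
import torch
import torch.nn as nn

class SpectralAttention(nn.Module):
    """Sketch of the spectral branch of the HSL module (Eqs. (18)-(22)).
    The 1x1 projections and the query/key dimensions are assumptions."""
    def __init__(self, ch):
        super().__init__()
        self.to_q = nn.Conv2d(ch, 1, 1)            # query: one spatial map
        self.to_k = nn.Conv2d(ch, ch // 2, 1)      # key: half the channels
        self.expand = nn.Conv2d(ch // 2, ch, 1)    # C/2 -> C before the gate
        self.gate = nn.Sigmoid()

    def forward(self, x):                          # x: (B, C, H, W)
        b, c, h, w = x.shape
        q = self.to_q(x).view(b, 1, h * w).softmax(dim=-1)     # Eq. (18)
        k = self.to_k(x).view(b, c // 2, h * w)                # Eq. (19)
        v = torch.einsum('bnl,bcl->bcn', q, k).unsqueeze(-1)   # Eq. (20): (B, C/2, 1, 1)
        w_map = self.gate(self.expand(v))                      # Eq. (21): channel weights
        return x * w_map                                       # Eq. (22): recalibration
```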
Spatial attention: To enhance the fused features with spatial awareness, we compute the spatial attention mask based on spectral attention. A pyramid structure was designed to progressively expand the attentional receptive field to capture the spatial context at different scales. Then, we perform element-wise multiplication and addition operations to combine the features with the mask to fuse the features in a more adaptive and effective way.
\[y_{Fusion}^{2D}=f_{conv}\big(y_{Spectral}^{2D}\big)\in\mathbb{R}^{C\times H\times W} \tag{23}\]

\[y_{Down}^{2D}=f_{conv}\big[y_{max}^{2D},\,y_{avg}^{2D}\big]\in\mathbb{R}^{C\times H/2\times W/2} \tag{24}\]

\[y_{Spatial\text{-}v}^{2D}=f_{Up}\big(f_{Down}\big(y_{Down}^{2D}\big)\big)+y_{Down}^{2D}\in\mathbb{R}^{C\times H/2\times W/2} \tag{25}\]

where \(y_{Fusion}^{2D}\) denotes the fusion of the spectrally corrected features, and \(y_{max}^{2D}\) and \(y_{avg}^{2D}\) denote the max-pooled and average-pooled features obtained by downsampling, respectively. \(f_{Down}(\cdot)\) denotes the downsampling operation and \(f_{Up}(\cdot)\) denotes the upsampling operation.

As shown in Fig. 5, the obtained features undergo two downsampling and two upsampling steps. Then, according to the sigmoid function, the weight coefficient of each spatial pixel can be obtained:

\[y_{Spatial\text{-}map}^{2D}=f_{Sigmoid}\big(f_{Up}\big(y_{Spatial\text{-}v}^{2D}\big)+y_{Fusion}^{2D}\big)\in\mathbb{R}^{C\times H\times W} \tag{26}\]

\[y_{Spatial}^{2D}=y_{Fusion}^{2D}\odot y_{Spatial\text{-}map}^{2D}\in\mathbb{R}^{C\times H\times W} \tag{27}\]

where \(y_{Spatial\text{-}map}^{2D}\) denotes the spatial pixel weighting factor. With the calibrated signal, the network focuses on the pixel regions with greater weight.
The corrected feature signals in the spectral and spatial domains are fused so that the output feature maps are highly correlated in both spatial pixels and spectral dimensions.
Figure 5: Hyperspectral attention mechanism
\[y_{HSL}^{2D}=\big(y_{Spectral}^{2D}+y_{Spatial}^{2D}\big)+y_{IGM}^{2D} \tag{28}\]
where \(y_{HSL}^{2D}\) denotes the output feature of the HSL module. Similar to the 3D branch network, local upsampling is used to alleviate the burden of the final SR reconstruction, as shown in the following operation:
\[y^{2D}=f_{L\_up}^{2D}\big(\sigma\big(f_{conv}\big(y_{HSL}^{2D}\big)\big)\big) \tag{29}\]
In the PAM network, the outputs of the 3D unit branch network and the 2D unit branch network are merged over spatial pixels. The definition is shown below.
\[x_{t}=y^{2D}+y^{3D} \tag{30}\]
where \(x_{t}\) denotes the output of the entire PAM block, i.e., the result of the \(F_{PAM}(\cdot)\) operation in Eq. (3).
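For completeness, the spatial branch of Eqs. (23)-(27) can be sketched in the same style as the spectral branch above. The pyramid depth and the max/average pooling fusion follow our reading of the text and Fig. 5, so they should be treated as assumptions; the sketch also assumes H and W divisible by 4.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatialAttention(nn.Module):
    """Sketch of the spatial branch of the HSL module (Eqs. (23)-(27)):
    a small pyramid enlarges the attention receptive field, then a
    sigmoid mask reweights the fused feature pixel-wise."""
    def __init__(self, ch):
        super().__init__()
        self.pre = nn.Conv2d(ch, ch, 3, padding=1)         # Eq. (23)
        self.mix = nn.Conv2d(2 * ch, ch, 3, padding=1)     # fuse pooled maps, Eq. (24)
        self.down = nn.Conv2d(ch, ch, 3, stride=2, padding=1)
        self.gate = nn.Sigmoid()

    def forward(self, x):                                  # x: (B, C, H, W)
        fusion = self.pre(x)
        pooled = torch.cat([F.max_pool2d(fusion, 2), F.avg_pool2d(fusion, 2)], dim=1)
        d1 = self.mix(pooled)                              # (B, C, H/2, W/2)
        d2 = F.interpolate(self.down(d1), scale_factor=2) + d1        # Eq. (25)
        mask = self.gate(F.interpolate(d2, scale_factor=2) + fusion)  # Eq. (26)
        return fusion * mask                               # Eq. (27)
```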
c) Hyperspectral frequency loss (HFL)
Discrete Fourier transform (DFT) image analysis: In order to process and analyze the image in the frequency domain, we apply a two-dimensional discrete Fourier transform (DFT) to convert the HSI from a spatial domain to a frequency domain representation, as shown in Figure 6(b).
\[F(u,v)=\sum_{x=0}^{H-1}\sum_{y=0}^{W-1}f(x,y)\cdot e^{-i2\pi\left(\frac{ux}{H}+\frac{vy}{W}\right)} \tag{31}\]
where the image dimensions are H\(\times\)W; (x, y) denotes the coordinate of an image pixel in the spatial domain and \(f(x,y)\) is its pixel value; (u, v) denotes the coordinate in the spectrum and \(F(u,v)\) is the complex frequency value, obtained as a sum over all image pixels in the spatial domain.
HSI frequency distance analysis: We can use the distance metric to measure the difference between the reconstructed SR and the ground truth in the frequency domain, as shown in Fig. 6(a). In the frequency domain, we operate on frequencies in the same space, which are represented as different 2D sinusoidal components in the image. We need to take into account both amplitude and phase when using frequency distances, as they capture different features of the image. We map each frequency value into a two-dimensional space (i.e., a plane) and convert it into a Euclidean vector. The frequency distance is then computed as the distance between the \(\vec{r}_{GT}\) and the \(\vec{r}_{\textit{SR}}\), which involves both the angle and magnitude of the vector (the purple line in Fig. 6(a)). We use the squared Euclidean distance for a single frequency (the \(k_{\textit{th}}\) example).
\[d^{k}\big(\vec{r}_{GT},\vec{r}_{SR}\big)=\big\|\vec{r}_{GT}^{\,k}-\vec{r}_{SR}^{\,k}\big\|_{2}^{2}=\big|F_{GT}^{k}(u,v)-F_{SR}^{k}(u,v)\big|^{2} \tag{32}\]
where \(k=\{0,1,2,\ldots,C-1\}\), and the frequency distance between the ground truth and the reconstructed image can be written as the mean value:
\[d^{k}=\frac{1}{HW}\sum_{u=0}^{H-1}\sum_{v=0}^{W-1}\big|F_{GT}^{k}(u,v)-F_{SR}^{k}(u,v)\big|^{2} \tag{33}\]
Due to the inherent bias, the network will still be biased towards easy frequencies; to make the training focus on hard frequencies, a spectral weight matrix \(w(u,v)\) is introduced to down-weight the easy frequencies. The specific definition is as follows:
\[w_{(u,v)}^{k}=\big|F_{GT}^{k}(u,v)-F_{SR}^{k}(u,v)\big|^{\alpha} \tag{34}\]
where \(\alpha\) is a scaling factor for flexibility (\(\alpha=1\) in our experiments). \(w(u,v)\) is further normalized to the range [0, 1], where 1 corresponds to the frequency that is currently lost the most; each frequency thus receives a different weight.
Taking the Hadamard product of the \(w(u,v)\) matrix and the frequency distance matrix (shown in Fig. 6(b)), we obtain the HSI frequency loss (HFL) as follows.
\[d\big(F_{GT}^{k},F_{SR}^{k}\big)=\frac{1}{HW}\sum_{u=0}^{H-1}\sum_{v=0}^{W-1}w_{(u,v)}^{k}\big|F_{GT}^{k}(u,v)-F_{SR}^{k}(u,v)\big|^{2} \tag{35}\]
\[L_{HFL}\big(F_{GT},F_{SR}\big)=\sum_{k=0}^{C-1}d\big(F_{GT}^{k},F_{SR}^{k}\big) \tag{36}\]
d) Total training losses
The \(L_{1}\) loss is a common choice for SR works, as it can ensure good convergence in training. In this article, we use the \(L_{1}\) loss to compute the pixel loss in the spatial domain. At the same time, the \(L_{HFL}\) (shown in Eq. (36)) is used to calculate the loss in the frequency domain.
\[L_{1}(\Theta)=\frac{1}{N}\sum_{x=1}^{N}\big\|I_{GT}^{x}-H_{Net}\big(I_{LR}^{x}\big)\big\|_{1} \tag{37}\]
where \(I_{GT}^{x}\) and \(H_{Net}(I_{LR}^{x})\) denote the \(x\)-th ground-truth image and the corresponding reconstructed image, respectively, and \(N\) is the number of training samples. The total training loss combines the spatial and frequency terms:
\[L_{total}=L_{1}+\beta L_{HFL} \tag{38}\]
where \(\beta\) is utilized to balance the contribution of the two losses and is set to 0.1.
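A compact sketch of the loss in Eqs. (31)-(38) is given below, using `torch.fft.fft2` for the per-band DFT. The per-image normalisation of the weight matrix to [0, 1] and the detaching of the weights from the gradient are our assumptions about details the text leaves open.

```python
import torch
import torch.nn.functional as F

def hfl_loss(sr, gt, alpha=1.0):
    """Sketch of the hyperspectral frequency loss (Eqs. (31)-(36)).
    sr, gt: (B, C, H, W)."""
    f_sr = torch.fft.fft2(sr, norm='ortho')            # per-band 2D DFT, Eq. (31)
    f_gt = torch.fft.fft2(gt, norm='ortho')
    dist = (f_gt - f_sr).abs() ** 2                    # frequency distance, Eq. (33)
    w = (f_gt - f_sr).abs() ** alpha                   # dynamic weights, Eq. (34)
    w = w / w.amax(dim=(-2, -1), keepdim=True).clamp(min=1e-12)  # normalise to [0, 1]
    w = w.detach()                                     # weights act as a fixed mask (assumed)
    return (w * dist).mean()                           # Eqs. (35)-(36), averaged over bands

def total_loss(sr, gt, beta=0.1):
    """Eq. (38): spatial L1 plus the weighted frequency term."""
    return F.l1_loss(sr, gt) + beta * hfl_loss(sr, gt)
```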
## IV Experiments
### _Experiment Settings_
1) Datasets
a) CAVE database: The CAVE database1 was collected with a cooled CCD camera [42] in the wavelength range of 400 nm-700 nm (31 bands) in 10 nm steps. The database contains 32 object scenes with HSI sizes of 512\(\times\)512\(\times\)31.
b) Harvard database: The Harvard dataset2 was acquired using a Nuance FX camera (CRI Inc.) [43] in daylight or outdoor scenes. The dataset contains 77 HSIs, each with a size of 1040\(\times\)1392\(\times\)31.
c) Chikusei database: The Chikusei dataset3 was captured by the Headwall Hyperspec-VNIR-C sensor in Chikusei, Japan. It encompasses a wavelength range of 343 nm to 1018 nm and comprises 128 spectral bands. The HSIs have a spatial resolution of 2.5 meters and a size of 2517\(\times\)2335.
Footnote 1: [https://www.cs.columbia.edu/CAVE/databases/](https://www.cs.columbia.edu/CAVE/databases/)
2) Implementation Details
Each dataset was captured using a different camera, requiring separate training and testing. In experiments, 80% of the datasets were randomly allocated for training, 10% for validation, and 10% for testing. During training, we randomly selected 24 patches for augmenting the training data.
These patches underwent operations such as horizontal flipping, rotation (90°, 180°, 270°), and scaling (1, 0.75, 0.5). Subsequently, the patches were downsampled using interpolation with a scale factor of \(r\), resulting in low-spatial-resolution images sized 32\(\times\)32\(\times\)C.
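A minimal sketch of this patch pipeline is shown below. The crop positions, the flip/rotation sampling, and the use of bicubic interpolation for the downsampling are assumptions consistent with, but not dictated by, the description above; the scaling augmentation is omitted.

```python
import random
import torch
import torch.nn.functional as F

def make_training_pair(hsi, patch=32, r=4):
    """Crop an HR patch, apply flip/rotation augmentation, then
    downsample by r to get the LR patch. hsi: (C, H, W) tensor."""
    c, h, w = hsi.shape
    size = patch * r
    top, left = random.randint(0, h - size), random.randint(0, w - size)
    hr = hsi[:, top:top + size, left:left + size].unsqueeze(0)
    if random.random() < 0.5:                          # horizontal flip
        hr = torch.flip(hr, dims=[-1])
    hr = torch.rot90(hr, k=random.randint(0, 3), dims=[-2, -1])  # 90/180/270 deg
    lr = F.interpolate(hr, scale_factor=1 / r, mode='bicubic',
                       align_corners=False)            # (1, C, patch, patch)
    return lr.squeeze(0), hr.squeeze(0)
```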
During training, the initial learning rate is set to 1.1e-4, and training runs for a total of 200 epochs. Our experiments were conducted on the Ubuntu 18.04 operating system, utilizing the PyTorch 1.7.1 deep learning framework. Hardware acceleration was provided by an RTX 3090 GPU. To optimize test efficiency, we selected a 512\(\times\)512 area in the upper left corner as the test image for evaluation.
3) Evaluation Metrics
To assess the performance of our network, we used four popular quantitative measures of image quality (PQIs): peak signal-to-noise ratio (PSNR), structural similarity (SSIM) [44], cross correlation (CC) [45], and spectral angle mapper (SAM) [46]. PSNR and SSIM are common metrics for image restoration quality, and their ideal values are \(+\infty\) and 1, respectively. CC and SAM are frequently used in HSI fusion works, and their best values are 1 and 0, respectively.
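For reference, PSNR and SAM can be computed as in the sketch below (SSIM and CC are omitted for brevity). The SAM here is returned in radians and averaged over pixels; whether the reported scores use radians or degrees is not stated, so treat the unit as an assumption.

```python
import torch

def psnr(sr, gt, max_val=1.0):
    """Peak signal-to-noise ratio over the whole hyperspectral cube."""
    mse = torch.mean((sr - gt) ** 2)
    return 10 * torch.log10(max_val ** 2 / mse)

def sam(sr, gt, eps=1e-8):
    """Spectral angle mapper: mean angle (radians) between the spectral
    vectors of corresponding pixels. sr, gt: (C, H, W)."""
    num = (sr * gt).sum(dim=0)
    den = sr.norm(dim=0) * gt.norm(dim=0) + eps
    angle = torch.acos((num / den).clamp(-1, 1))
    return angle.mean()
```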
### _Comparisons With the State-of-the-Art Methods_
We conducted a thorough comparison of SRDNet with six classical methods: Bicubic, GDRRN [9], 3D-FCNN [12], EDSR [47], EUNet [28], and MCNet [14]. Three datasets, namely CAVE, Harvard, and Chikusei, were utilized to validate the benefits of SRDNet across various sampling factors and numbers of bands.
1) Results on CAVE Dataset
As shown in Table I, our method has slightly more parameters than EDSR, but it achieves a better result under the same scale factor. Figure 7 shows the visual quality of the reconstructed images for each algorithm, and Figures 7 and 8 show the difference in the reconstruction information between the algorithms using error plots and spectral fitting plots. Table II and Figure 7 demonstrate that all algorithms perform well in reconstructing high-resolution HSIs. Specifically, the proposed SRDNet method surpasses the spatial-prior-based EDSR [47] and the spectral-prior-based 3D-FCNN [12]. The GDRRN [9] and EUNet algorithms are designed for HSI, but their performance on different datasets is unstable. In particular, the SAM value of the EUNet [28] algorithm is poor at upsampling factors of 3 and 4. In Table II, the SRDNet algorithm exhibits higher PSNR and SSIM values than the second-best method, with improvements of 1.62 dB/0.0008, 0.159 dB/0.0006, and 0.304 dB/0.0005, respectively, for the upsampling factors of 2/3/4. Notably, the PSNR shows a significant improvement of 3.6% for an upsampling factor of 2. Additionally, as indicated in Table I, the SRDNet method has a smaller parameter count compared to MCNet.
features and spectral information of the image. Table IV presents the average performance of all compared algorithms on the four test images. It is evident that SRDNet outperforms all other algorithms, except for EUNet [28] when the upsampling factor is 4. However, as discussed for the previous datasets, the EUNet [28] algorithm does not perform well when dealing with a large number of images. Furthermore, the computationally intensive operations in the network architecture of MCNet [14] can significantly prolong the convergence time of the model. The limited number of images in the Chikusei dataset also impacts the performance of the algorithms. To achieve models with better generalization, a larger number of samples is required for training. Although the EDSR [47] network demonstrates good performance in RGB image SR, its SAM index is relatively poor compared to single-HSI algorithms.
### _Ablation Study_
Fig. 12: Reconstructed _Chikusei_test_2_ image from the Chikusei dataset with spectral bands 55-45-55 as R-G-B when the upsampling factor is 8. The last column shows spectral fits at pixel points (10, 50) and (40, 450), respectively.

Fig. 13: Absolute error map comparison for _fake_and_real_lemon_slices_ms_ in the CAVE dataset when the upsampling factor is 3. The less information the error map contains, the closer the reconstructed image is to the target image. 'w/o' indicates that the corresponding module is not included.

As explained in Section III, our method has four main components, of which the PAM block is the main part of the feature extraction network. We call the network without the PAM module the baseline. In this subsection, we study the effect of different combinations of PAM on model performance. Table V shows the ablation studies for these combinations at an upsampling factor of 3 on the CAVE dataset (_fake_and_real_lemon_slices_ms_). Specifically, when we remove the PAM module from the network, the performance drops for the same number of parameters. When the PAM module has only 3D units, the results are not very satisfactory. This is because the network pays too much attention to spectral information and loses spatial resolution. When the PAM has both 2D and 3D units, the performance is much better than with only 2D units.
Fig. 13 shows that the absolute error when using the HFL loss is lower than that when using only the L1 loss. Even compared to the classic non-local attention mechanism (Non_Local), our proposed HSL has advantages, and its absolute error is lower. Finally, when we combine all components into the model, we can see that it performs better than any other combination in all three aspects. These analyses show that each component of SRDNet helps the model learn and optimize.
## V Conclusion
We propose a hyperspectral image super-resolution algorithm via a dual-domain network (SRDNet) in this paper. Our method uses a hybrid convolution of 2D and 3D units with progressive upsampling to capture more spatial information while learning spectral information. Unlike previous work, a dual-domain learning network is designed. We use the spectral self-attention mechanism to select important spectral information adaptively, and the network can obtain fine and smooth pixel-level features. Moreover, to address the visual discrepancy caused by the pixel-level loss, a frequency loss is adopted to narrow the frequency domain difference between the reconstructed image and the ground truth. The effectiveness of various modules in enhancing spatial information and spectral coherence has been verified in ablation studies.
Visual analysis and quantitative experiments on three common hyperspectral datasets demonstrate that the SRDNet achieves excellent performance in HSI reconstruction at both pixel and frequency levels. Our algorithm achieves optimal results on several common objective metrics, and the images are perceptually closer to the ground truth than other methods.
|
2303.00748 | Efficient and Explicit Modelling of Image Hierarchies for Image
Restoration | The aim of this paper is to propose a mechanism to efficiently and explicitly
model image hierarchies in the global, regional, and local range for image
restoration. To achieve that, we start by analyzing two important properties of
natural images including cross-scale similarity and anisotropic image features.
Inspired by that, we propose the anchored stripe self-attention which achieves
a good balance between the space and time complexity of self-attention and the
modelling capacity beyond the regional range. Then we propose a new network
architecture dubbed GRL to explicitly model image hierarchies in the Global,
Regional, and Local range via anchored stripe self-attention, window
self-attention, and channel attention enhanced convolution. Finally, the
proposed network is applied to 7 image restoration types, covering both real
and synthetic settings. The proposed method sets the new state-of-the-art for
several of those. Code will be available at
https://github.com/ofsoundof/GRL-Image-Restoration.git. | Yawei Li, Yuchen Fan, Xiaoyu Xiang, Denis Demandolx, Rakesh Ranjan, Radu Timofte, Luc Van Gool | 2023-03-01T18:59:29Z | http://arxiv.org/abs/2303.00748v2 | # Efficient and Explicit Modelling of Image Hierarchies for Image Restoration
###### Abstract
The aim of this paper is to propose a mechanism to efficiently and explicitly model image hierarchies in the global, regional, and local range for image restoration. To achieve that, we start by analyzing two important properties of natural images including cross-scale similarity and anisotropic image features. Inspired by that, we propose the anchored stripe self-attention which achieves a good balance between the space and time complexity of self-attention and the modelling capacity beyond the regional range. Then we propose a new network architecture dubbed GRL to explicitly model image hierarchies in the Global, Regional, and Local range via anchored stripe self-attention, window self-attention, and channel attention enhanced convolution. Finally, the proposed network is applied to 7 image restoration types, covering both real and synthetic settings. The proposed method sets the new state-of-the-art for several of those. Code will be available at [https://github.com/ofsoundof/GRL-Image-Restoration.git](https://github.com/ofsoundof/GRL-Image-Restoration.git).
## 1 Introduction
Image restoration aims at recovering high-quality images from low-quality ones, resulting from an image degradation processes such as blurring, sub-sampling, noise corruption, and JPEG compression. Image restoration is an ill-posed inverse problem since important content information about the image is missing during the image degradation processes. Thus, in order to recover a high-quality image, the rich information exhibited in the degraded image should be fully exploited.
Natural images contain a hierarchy of features at global, regional, and local ranges which could be used by deep neural networks for image restoration. _First_, the local range covers a span of several pixels and typical features are edges and local colors. To model such local features, convolutional neural networks (CNNs) with small kernels (\(3\times 3\)) are utilized. _Second_, the regional range is characterized by a window with tens of pixels. This range of pixels can cover small objects and components of large objects (pink squares in Fig. 1). Due to the larger range, modelling the regional features (consistency, similarity) explicitly with large-kernel CNNs would be inefficient in both parameters and computation. Instead, transformers with a window attention mechanism are well suited for this task. _Third_, beyond local and regional, some features have a global span (cyan rectangles in Fig. 1), incl. but not limited to symmetry, multi-scale pattern repetition (Fig. 1(a)), same scale texture similarity (Fig. 1(b)), and structural similarity and consistency in large objects and content (Fig. 1(c)). To model features at this range, global image understanding is needed.
Figure 1: Natural images show a hierarchy of features in a global, regional, and local range. The local (edges, colors) and regional features (the pink squares) could be well modelled by CNNs and window self-attention. By contrast, it is difficult to efficiently and explicitly model the rich global features (cyan rectangles).

Different from the local and regional range features, there are two major challenges to model the global range features. Firstly, existing image restoration networks based on convolutions and window attention could not capture long-range dependencies explicitly by using a single computational module. Although non-local operations are used in some works, they are either used sparsely in the network or applied to small image crops. Thus, global image understanding still mainly happens via progressive propagation of features through repeated computational modules. Secondly, the increasing resolution of today's images poses a challenge for long-range dependency modelling. High image resolution leads to a computational burden associated with pairwise pixel comparisons and similarity searches.
The aforementioned discussion leads to a series of research questions: 1) how to efficiently model global range features in high-dimensional images for image restoration; 2) how to model image hierarchies (local, regional, global) explicitly by a single computational module for high-dimensional image restoration; 3) and how can this joint modelling lead to a uniform performance improvement for different image restoration tasks. The paper tries to answer these questions in Sec. 3, Sec. 4, and Sec. 5, resp.
_First_, we propose anchored stripe self-attention for efficient dependency modelling beyond the regional range. The proposed self-attention is inspired by two properties of natural images including cross-scale similarity and anisotropic image features. Cross-scale similarity means that structures in a natural image are replicated at different scales. Inspired by that, we propose to use anchors as an intermediate to approximate the exact attention map between queries and keys in self-attention. Since the anchors summarize image information into a lower-dimensional space, the space and time complexity of self-attention can be significantly reduced. In addition, based on the observation of anisotropic image features, we propose to conduct anchored self-attention within vertical and horizontal stripes. Due to the anisotropic shrinkage of the attention range, a further reduction of complexity is achieved. And the combination of axial stripes also ensures a global view of the image content. When equipped with the stripe shift operation, the four stripe self-attention modes (horizontal, vertical, shifted horizontal, shifted vertical) achieve a good balance between computational complexity and the capacity of global range dependency modelling. Furthermore, the proposed anchored stripe self-attention is analyzed from the perspective of low-rankness and similarity propagation.
_Secondly_, a new transformer network is proposed to explicitly model global, regional, and local range dependencies in a single computational module. The hierarchical modelling of images is achieved by the parallel computation of the proposed anchored stripe self-attention, window self-attention, and channel-attention enhanced convolution. And the transformer architecture is dubbed **GRL**.
_Thirdly_, the proposed GRL transformer is applied to various image restoration tasks. Those tasks could be classified into three settings based on the availability of data including real image restoration, synthetic image restoration, and data synthesis based real image restoration. In total, seven tasks are explored for the proposed network including image super-resolution, image denoising, JPEG compression artifacts removal, demosaicking, real image super-resolution, single image motion deblurring, and defocus deblurring. As shown in Fig. 2, the proposed network shows promising results on the investigated tasks.
## 2 Related Works
**Convolution for local range modelling.** One of the basic assumptions for example and learning-based image restoration is that repetitive patterns could exist in either the same or different images [17] and that the redundant information they carry could help to restore the local patches. Thus, it helps if repetitive patterns could be detected and modelled [13, 33, 44, 61]. This intuition matches the computational procedure of convolution well, which slides the kernel across the image and detects local patterns similar to the learnable kernels. By stacking multiple convolutional layers, the receptive field of a CNN gets progressively enlarged and rich image features are captured. Since the advent of deep learning, great efforts have been made to design CNNs for image restoration [26, 39, 71, 72, 86].
**Non-local and global priors.** Besides the local features, it is also important to model the non-local and global image priors. The early work of non-local means serves this purpose, which computes an output pixel as the weighted sum of all the pixels within the image [4]. Inspired by that, later works have been developed to utilize the repetitive patterns in a non-local range for image denoising [10] and super-resolution [22]. Apart from the traditional methods, non-local operations are also introduced into deep neural networks for video classification [70] and image SR [45, 85].
Besides the non-local operations, self-attention has been developed to model the global range dependencies [12, 68]. However, the computational complexity of global self-attention grows quadratically with the number of tokens. Thus, the increase in efficiency of global self-attention is investigated by several works [7, 9, 31, 35, 69].
Figure 2: The proposed GRL achieves state-of-the-art performances on various image restoration tasks. Details provided in Sec. 5.

**Regional self-attention.** Among the methods for accelerating transformers, regional self-attention appears to be promising. The idea is proposed in the pioneering works [54, 56] and improved as shifted window attention [47, 48]. Inspired by the success of shifted window attention for visual recognition and perception, this method is also used for image restoration [6, 42, 43]. Despite the good performance of the window attention mechanism, it is pointed out in recent works that a wider range of pixel involvement could lead to better image restoration [6, 23]. Thus, in this paper, we try to propose a method that efficiently brings the modelling capacity of self-attention beyond the regional range.
## 3 Motivation
### Self-attention for dependency modelling
Self-attention is good at modelling long-range dependencies explicitly and it facilitates the propagation of information across the modelled dependencies. This operation allows a token to be compared with all the other tokens. The output token is computed as a weighted sum of all the tokens based on a similarity comparison, _i.e._,
\[\mathbf{Y}=\mathrm{Softmax}\left(\mathbf{Q}\cdot\mathbf{K}^{T}/\sqrt{d} \right)\cdot\mathbf{V}, \tag{1}\]
where \(\mathbf{Q}=\mathbf{X}\cdot\mathbf{W}_{Q}\), \(\mathbf{K}=\mathbf{X}\cdot\mathbf{W}_{K}\), \(\mathbf{V}=\mathbf{X}\cdot\mathbf{W}_{V}\), \(\mathbf{W}_{Q},\mathbf{W}_{K},\mathbf{W}_{V}\in\mathbb{R}^{d\times d}\), and \(\mathbf{X},\mathbf{Y}\in\mathbb{R}^{N\times d}\). \(N\) and \(d\) denote the number of tokens and the dimension of one token, respectively. Additionally, \(\mathbf{M}\) denotes the attention map, _i.e._\(\mathbf{M}=\mathrm{Softmax}(\mathbf{Q}\cdot\mathbf{K}^{T}/\sqrt{d})\).
The time complexity of self-attention is \(\mathcal{O}(N^{2}d)\) and the space complexity is dominated by the term \(\mathcal{O}(N^{2})\) of the attention map \(\mathbf{M}\). The computational complexity and memory footprint of self-attention grow quadratically with the number of tokens. Thus, self-attention can easily become a computation bottleneck for images where the number of tokens is the multiplication of the two dimensions of the feature map. To overcome this problem, it is proposed to apply self-attention within a window. In this way, the number of tokens that participate in self-attention is significantly reduced and the computational burden is also lifted.
The problem of window self-attention is that the modelling capacity of the operation is limited to a regional range due to the small window size (\(8\times 8\) [43]). On the other hand, recent works [6, 23] show that even a slight increase in window size can lead to better image restoration. It can thus be conjectured that modelling dependencies beyond the regional range is still important for image restoration. Hence, it remains to be investigated how to maintain the ability for long-range dependency modelling under a controlled computational budget.
### Motivation I: cross-scale similarity
The attention map \(\mathbf{M}\) plays an essential role in self-attention as it captures the similarity between every pair of pixels in the image. Improving the efficiency of the self-attention in Eq. (1) thus requires analyzing the properties of the attention map. Here we are inspired by a property of natural images, _i.e._, cross-scale similarity: the basic structures of an image, such as lines and edges, are preserved across versions of the image at different scaling factors. In Fig. 3, the attention map between pixels in an image is shown. Particularly, the attention map between a pixel and the whole image is visualized as a gray-scale heat map. As shown, no matter whether the pixel comes from the high-resolution image or the down-scaled version, the heat map between the pixel and the high-resolution image reveals the basic structure of the image, and the heat maps in Fig. 3(c) and Fig. 3(d) are very similar to each other.
**Anchored self-attention.** Inspired by the cross-scale similarity shown in Fig. 3, we reduce the complexity of the global self-attention in Eq. (1) by operating on images at different resolutions and manipulating the number of tokens, _i.e._ the \(N^{2}\) term in \(\mathcal{O}(N^{2}d)\). To achieve that, we introduce an additional concept named anchors besides the triplet of queries, keys, and values. The set of anchors is a summary of the information in the image feature map and has a lower dimensionality. Instead of conducting the similarity comparison between the queries and keys directly, the anchors act as an intermediate for the similarity comparison. Formally, the anchored self-attention is proposed as in the following equations
\[\mathbf{Y} =\mathbf{M}_{e}\cdot\mathbf{Z}=\mathbf{M}_{e}\cdot\left(\mathbf{ M}_{d}\cdot\mathbf{V}\right), \tag{2}\] \[\mathbf{M}_{d} =\mathrm{Softmax}(\mathbf{A}\cdot\mathbf{K}^{T}/\sqrt{d}),\] (3) \[\mathbf{M}_{e} =\mathrm{Softmax}(\mathbf{Q}\cdot\mathbf{A}^{T}/\sqrt{d}), \tag{4}\]
where \(M\ll N\), \(\mathbf{A}\in\mathbb{R}^{M\times d}\) is the anchor, and \(\mathbf{M}_{e}\in\mathbb{R}^{N\times M}\) and \(\mathbf{M}_{d}\in\mathbb{R}^{M\times N}\) denote the attention maps between the query-anchor and anchor-key pairs, respectively. The choice of the operations used to derive the anchors is investigated in the ablation study (Sec. 5).
Since the number of anchors is much smaller than the number of the other tokens, the sizes of the resulting two attention maps \(\mathbf{M}_{e}\) and \(\mathbf{M}_{d}\) are much smaller than the orig
Figure 3: Cross-scale similarity. (c) and (d) show the attention maps between the selected pixels and the example high-resolution image. Although the cyan pixel in (a) and the red pixel in (b) come from images with different resolutions, their attention maps with respect to the high-resolution image show very similar structures.
inal attention map \(\mathbf{M}\) in Eq. (1). The matrix multiplication in Eq. (2) is then computed from the right-hand side. Self-attention is first conducted between the anchors and the keys: the attention map \(\mathbf{M}_{d}\) distills the tokens \(\mathbf{V}\) into an intermediate feature \(\mathbf{Z}\). Self-attention is then conducted between the queries and the anchors: the second attention map \(\mathbf{M}_{e}\) expands the feature \(\mathbf{Z}\) and recovers the information in \(\mathbf{V}\). The computational complexity of the anchored self-attention is thereby reduced to \(\mathcal{O}(NMd)\), and the space complexity to \(\mathcal{O}(NM)\).
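A minimal sketch of Eqs. (2)-(4) follows (tensor names mirror the symbols above; the anchor count and dimensions are illustrative). Because the multiplication is evaluated from the right-hand side, the full \(N\times N\) attention map is never materialized.

```python
import torch

def anchored_self_attention(Q, K, V, A):
    # Q, K, V: (N, d) tokens; A: (M, d) anchors with M << N
    d = Q.shape[1]
    M_d = torch.softmax(A @ K.T / d ** 0.5, dim=-1)  # (M, N) anchor-key map, Eq. (3)
    Z = M_d @ V                                      # (M, d) distilled summary of V
    M_e = torch.softmax(Q @ A.T / d ** 0.5, dim=-1)  # (N, M) query-anchor map, Eq. (4)
    return M_e @ Z                                   # (N, d): O(NMd) time, O(NM) memory

N, M, d = 4096, 256, 64
Q, K, V = (torch.randn(N, d) for _ in range(3))
A = torch.randn(M, d)
Y = anchored_self_attention(Q, K, V, A)
```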
### Motivation II: anisotropic image features
The anchored self-attention reduces the space and time complexity of the self-attention in Eq. (1) significantly by removing the quadratic term \(N^{2}\). Yet, for image restoration tasks, the remaining term \(N\) is the product of the width and height of the image, so the complexity of the anchored self-attention in Eq. (2) can still be unaffordable. It is therefore desirable to reduce the complexity of the anchored self-attention further.
To achieve that goal, we resort to another characteristic of natural images, _i.e._, anisotropic image features. As shown in Fig. 4, natural image features, such as the single object in Fig. 4(c)&(d), the multi-scale similarity in Fig. 4(h), and the symmetry in Fig. 4(e)&(g), span the image in an anisotropic manner. Isotropic global range attention across the entire image is thus redundant for capturing these anisotropic image features. In response, we propose to conduct attention within the anisotropic stripes shown in Fig. 4.
**Stripe attention mechanism**. The proposed stripe attention mechanism consists of four modes: the horizontal stripe, the vertical stripe, the shifted horizontal stripe, and the shifted vertical stripe. The horizontal and vertical stripe attention mechanisms can be employed alternately across a transformer network, trading off between maintaining the global range modelling capacity and controlling the computational complexity of global self-attention. In combination with the concept of anchors, we thus propose the **anchored stripe self-attention**, in which efficient self-attention is conducted inside the vertical and horizontal stripes with the help of the introduced anchors; a partitioning sketch is given below.
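As an illustration of the stripe idea, the helper below partitions a feature map into horizontal or vertical stripes before attention is applied within each stripe; the function name and the stripe size are assumptions for this sketch, not the authors' exact implementation.

```python
import torch

def stripe_partition(x, stripe, vertical=False):
    # x: (B, H, W, C) feature map -> (num_stripes * B, tokens_per_stripe, C)
    B, H, W, C = x.shape
    if vertical:  # stripes of shape (H, stripe)
        x = x.view(B, H, W // stripe, stripe, C).permute(0, 2, 1, 3, 4)
        return x.reshape(-1, H * stripe, C)
    # horizontal stripes of shape (stripe, W)
    x = x.view(B, H // stripe, stripe, W, C)
    return x.reshape(-1, stripe * W, C)

x = torch.randn(2, 64, 64, 32)
h_tokens = stripe_partition(x, stripe=8)                 # (16, 512, 32)
v_tokens = stripe_partition(x, stripe=8, vertical=True)  # (16, 512, 32)
```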
### Discussion
The proposed anchored stripe self-attention mechanism is closely related to two other concepts: low-rankness and similarity propagation. We detail these relationships in this subsection.
**Low-rankness of attention map.** By comparing the self-attention mechanisms in Eq. (1) and Eq. (2), we can see that the original attention map \(\mathbf{M}\) is decomposed into two small attention maps \(\mathbf{M}_{d}\) and \(\mathbf{M}_{e}\) whose rank is no larger than \(M\). The essence here is to provide a low-rank approximation without calculating the original attention map first. For the success of the anchored self-attention, it is important to ensure that, with the anchors as the intermediate, the approximated attention map is similar to the original one. An additional analysis is therefore provided in Fig. 5.
First, by observing the queries, anchors, and keys, we can conclude that the anchors have a structure very similar to that of the queries and keys. The anchors are thus a good summary of the information in the queries and keys, and approximating self-attention with anchors as the intermediate is plausible. Additionally, the approximate attention map \(\mathbf{M}_{e}\cdot\mathbf{M}_{d}\) and the exact attention map \(\mathbf{M}\) are compared in Fig. 5. As shown, the approximate attention map keeps the major structure of the exact attention map, which is confirmed by the large Pearson correlation coefficient (0.9505) between the two maps. The quality of the anchored self-attention is thus assured.
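This kind of comparison can be reproduced numerically. The sketch below uses random tensors as stand-ins for real features (so the resulting coefficient will not match the 0.9505 measured on trained features); it only illustrates how the exact map \(\mathbf{M}\) and the approximation \(\mathbf{M}_{e}\cdot\mathbf{M}_{d}\) can be compared.

```python
import torch

N, M, d = 256, 16, 64
Q, K = torch.randn(N, d), torch.randn(N, d)
A = K[torch.randperm(N)[:M]]  # toy anchors: a random subset of the keys

exact = torch.softmax(Q @ K.T / d ** 0.5, dim=-1)        # M from Eq. (1), (N, N)
approx = (torch.softmax(Q @ A.T / d ** 0.5, dim=-1)      # M_e, (N, M)
          @ torch.softmax(A @ K.T / d ** 0.5, dim=-1))   # M_d, (M, N); rank <= M

corr = torch.corrcoef(torch.stack([exact.flatten(), approx.flatten()]))[0, 1]
print(f"Pearson correlation between exact and approximate maps: {corr:.4f}")
```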
**Metric and similarity propagation.** From another perspective, in the proposed anchored self-attention, the queries and keys are first compared with the anchors and
Figure 4: The image features in natural images are anisotropic. Thus, it is not always necessary to employ the uniform global range attention in all parts of the image.
Figure 5: The visualization of the (a) queries, (b) anchors, and (c) keys from the different layers of the proposed network. (d) shows the attention map approximated by Eq. (2), _i.e_. \(\mathbf{M}_{e}\cdot\mathbf{M}_{d}\). (e) shows the exact attention map \(\mathbf{M}\) computed in Eq. (1).
then the query-key similarity is computed. This computation procedure thus needs to propagate the query-anchor and key-anchor similarities to the query-key pair. Similarity propagation is related to the triangle inequality in a metric space [19, 73, 27]. A mathematical metric needs to satisfy several conditions, including the essential triangle inequality \(d(\mathbf{q},\mathbf{k})\leq d(\mathbf{a},\mathbf{q})+d(\mathbf{a},\mathbf{k})\), where \(d(\cdot,\cdot)\) defines a metric between two entities. The \(\mathbf{q}\) / \(\mathbf{k}\) distance is thus upper-bounded by the sum of the \(\mathbf{a}\) / \(\mathbf{q}\) distance and the \(\mathbf{a}\) / \(\mathbf{k}\) distance, which implies that if \(\mathbf{a}\) is similar (close) to both \(\mathbf{q}\) and \(\mathbf{k}\), then \(\mathbf{q}\) and \(\mathbf{k}\) should also be similar (close) to each other. Yet, the similarity measure in Eq. (1) and Eq. (2) is defined by the dot product instead of a distance between tokens, and the dot product does not satisfy the triangle inequality; similarity propagation can therefore not be theoretically guaranteed. To study the influence of the similarity measure, an ablation study is conducted and the results are shown in Sec. 5, where the dot product and the Euclidean distance are compared as similarity measures. According to the results, although the dot product does not strictly obey the triangle inequality, it still yields better image restoration results. We can thus conclude empirically that the dot product suffices for similarity propagation.
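For reference, the two similarity measures compared in the ablation can be sketched as follows; `torch.cdist` computes pairwise Euclidean distances, and the \(1/\sqrt{d}\) scaling is kept for both variants for comparability.

```python
import torch

def attention_logits(Q, K, measure="dot"):
    # Q: (N, d), K: (N, d) -> (N, N) similarity logits before the softmax
    d = Q.shape[-1]
    if measure == "dot":
        return Q @ K.T / d ** 0.5          # dot product: no triangle inequality
    return -torch.cdist(Q, K) / d ** 0.5   # negative Euclidean distance: a true metric
```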
## 4 Modelling Image Hierarchies
In this section, we answer the second research question raised in the introduction, that is, how to explicitly model image hierarchies with a single computational module. In response, we propose the GRL network architecture, which incorporates global range, regional range, and local range image modelling capacities.
**Network architecture.** The overall architecture of the proposed network is shown in Fig. 6. The network takes a degraded low-quality image as input, processes it internally, and outputs a recovered high-quality image. In detail, the network contains three parts. 1) The feature extraction layer is implemented as a simple convolution and converts the input image into feature maps. 2) The representation learning component enriches the information extracted by the previous operation. Each transformer stage consists of several transformer layers and ends with a convolution layer. The dimension of the feature map is maintained across the whole representation learning module. Skip connections are applied to both the transformer stages and the representation learning module. 3) The image reconstruction module takes the rich features computed by the previous operations and estimates a recovered image.
**Transformer Layer.** This layer, shown in Fig. 6b, is the key component that provides the hierarchical image modelling capacity in the global, regional, and local ranges. It first processes the input feature map with a self-attention module and channel-attention-enhanced convolutions in parallel. The convolution branch serves to capture local structures in the input feature map. The self-attention module contains the window attention proposed in Swin Transformer V2 [47] and the anchored stripe atten
Figure 6: Network architecture. (a) The representation learning module contains stages of transformer layers. (b) The transformer layer is equipped with global, regional, and local modelling blocks. (c) The anchored stripe attention helps to attend beyond regional ranges.
tion proposed in this paper. The feature map is split equally along the channel dimension, processed in parallel by the two attention modules, and then concatenated along the channel dimension again. The window attention provides the mechanism to capture regional range dependencies. The feature maps output by the convolution module and the attention module are added to the input feature map, which is subsequently processed by the MLP module, as sketched below.
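The parallel split can be summarized by the following sketch, where `window_attn` and `stripe_attn` are placeholders for the window attention and anchored stripe attention branches:

```python
import torch

def parallel_attention(x, window_attn, stripe_attn):
    # x: (B, N, C); each branch processes half of the channels
    x1, x2 = x.chunk(2, dim=-1)
    return torch.cat([window_attn(x1), stripe_attn(x2)], dim=-1)
```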
**Anchored stripe self-attention.** The proposed anchored stripe attention is conducted according to Eq. (2) and visualized in Fig. 6c, where the dimensions of the different features are also shown. The triplet of \(\mathbf{Q}\), \(\mathbf{K}\), \(\mathbf{V}\) is derived by plain linear projections. To summarize the information into anchors, the anchor projection is implemented as an average pooling layer followed by a linear projection. After the anchor projection, the resolution of the image feature map is down-scaled by a factor of \(s\) along both directions. As shown in Fig. 6, the two attention maps \(\mathbf{M}_{d}\) and \(\mathbf{M}_{e}\) play a similar role as the original attention map \(\mathbf{M}\) but with less space and time complexity.
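A minimal sketch of the anchor projection is given below (the class name and the default down-scaling factor \(s\) are illustrative):

```python
import torch
from torch import nn

class AnchorProjection(nn.Module):
    # Summarizes a feature map into anchors: average pooling + linear projection.
    def __init__(self, dim, s=4):
        super().__init__()
        self.pool = nn.AvgPool2d(kernel_size=s, stride=s)  # down-scale by factor s
        self.proj = nn.Linear(dim, dim)

    def forward(self, x):
        # x: (B, C, H, W) -> anchors A: (B, H*W / s^2, C)
        a = self.pool(x).flatten(2).transpose(1, 2)
        return self.proj(a)

anchors = AnchorProjection(dim=32, s=4)(torch.randn(1, 32, 64, 64))  # (1, 256, 32)
```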
## 5 Experimental Results
This section presents the experimental results and answers the third research question raised in the introduction by investigating the performance of the proposed network on different image restoration tasks. Based on the data type, the investigated tasks are classified into three commonly used settings: 1) real image restoration (single-image motion deblurring, defocus deblurring), 2) image restoration based on synthetic data (image denoising, single image SR, JPEG compression artifact removal, demosaicking), and 3) real image restoration based on data synthesis. We provide three networks with different model sizes: tiny, small, and base (GRL-T, GRL-S, GRL-B). For real and synthetic image restoration, the Adam optimizer and the \(L_{1}\) loss are used to train the network with an initial learning rate of \(2\times 10^{-4}\). More details about the training datasets, training protocols, and additional visual results are given in the _supplementary material_.
### Image deblurring
We first investigate the performance of the proposed network on two real image restoration tasks: single-image motion deblurring and defocus deblurring.
**Single image motion deblurring**. Tab. 2 and Tab. 3 show the experimental results for single image motion deblurring on the synthetic datasets GoPro [51] and HIDE [59] and on the real dataset RealBlur-R [57], respectively. Compared with the previous state of the art, Restormer [76], the proposed GRL achieves a significant PSNR improvement of 1.01 dB on the GoPro dataset. On the HIDE dataset, the PSNR improvement is 0.43 dB. Please note that the improvement is achieved with a smaller parameter budget: as shown in Tab. 4,
\begin{table}
\begin{tabular}{l|cccc|cccc|cccc} \hline \multirow{2}{*}{Method} & \multicolumn{4}{c|}{**Indoor Scenes**} & \multicolumn{4}{c|}{**Outdoor Scenes**} & \multicolumn{4}{c}{**Combined**} \\ \cline{2-13} & PSNR & SSIM & MAE & LPIPS & PSNR & SSIM & MAE & LPIPS & PSNR & SSIM & MAE & LPIPS \\ \hline
EBDB\({}_{S}\) [30] & 25.77 & 0.772 & 0.040 & 0.297 & 21.25 & 0.599 & 0.058 & 0.373 & 23.45 & 0.683 & 0.049 & 0.336 \\
DMENet\({}_{S}\) [40] & 25.50 & 0.788 & 0.038 & 0.298 & 21.43 & 0.644 & 0.063 & 0.397 & 23.41 & 0.714 & 0.051 & 0.349 \\
JNB\({}_{S}\) [60] & 26.73 & 0.828 & 0.031 & 0.273 & 21.10 & 0.608 & 0.064 & 0.355 & 23.84 & 0.715 & 0.048 & 0.315 \\
DPDNet\({}_{S}\) [1] & 26.54 & 0.816 & 0.031 & 0.239 & 22.25 & 0.682 & 0.056 & 0.313 & 24.34 & 0.747 & 0.044 & 0.277 \\
KPAC\({}_{S}\) & 27.97 & 0.852 & 0.026 & 0.182 & 22.62 & 0.701 & 0.053 & 0.269 & 25.22 & 0.774 & 0.040 & 0.227 \\
IFAN\({}_{S}\) [41] & 28.11 & 0.861 & 0.026 & 0.179 & 22.76 & 0.720 & 0.052 & 0.254 & 25.37 & 0.789 & 0.039 & 0.217 \\
Restormer\({}_{S}\) [76] & 28.87 & 0.882 & 0.025 & 0.145 & 23.24 & 0.743 & 0.050 & 0.209 & 25.98 & 0.811 & 0.038 & 0.178 \\
GRL-B\({}_{S}\) & 29.06 & 0.886 & 0.024 & 0.139 & 23.45 & 0.761 & 0.049 & 0.196 & 26.18 & 0.822 & 0.037 & 0.168 \\ \hline
DPDNet\({}_{D}\) [1] & 27.48 & 0.849 & 0.029 & 0.189 & 22.90 & 0.726 & 0.052 & 0.255 & 25.13 & 0.786 & 0.041 & 0.223 \\
RDPD\({}_{D}\) [2] & 28.10 & 0.843 & 0.027 & 0.210 & 22.82 & 0.704 & 0.053 & 0.298 & 25.39 & 0.772 & 0.040 & 0.255 \\
Uformer\({}_{D}\) [74] & 28.23 & 0.860 & 0.026 & 0.199 & 23.10 & 0.728 & 0.051 & 0.285 & 25.65 & 0.795 & 0.039 & 0.243 \\
IFAN\({}_{D}\) [41] & 28.66 & 0.868 & 0.025 & 0.172 & 23.46 & 0.743 & 0.049 & 0.240 & 25.99 & 0.804 & 0.037 & 0.207 \\
Restormer\({}_{D}\) [76] & 29.48 & 0.895 & 0.023 & 0.134 & 23.97 & 0.773 & 0.047 & 0.175 & 26.66 & 0.833 & 0.035 & 0.155 \\
GRL-B\({}_{D}\) & 29.83 & 0.903 & 0.022 & 0.114 & 24.39 & 0.795 & 0.045 & 0.150 & 27.04 & 0.847 & 0.034 & 0.133 \\ \hline \end{tabular}
\end{table}
Table 1: _Defocus deblurring_ results. **S:** single-image defocus deblurring. **D:** dual-pixel defocus deblurring.
\begin{table}
\begin{tabular}{l|c|c|c} \hline \hline
**Method** & **GoPro** [51] & **HIDE** [59] & **Average** \\
 & PSNR\(\uparrow\) / SSIM\(\uparrow\) & PSNR\(\uparrow\) / SSIM\(\uparrow\) & PSNR\(\uparrow\) / SSIM\(\uparrow\) \\ \hline
DeblurGAN [37] & 28.70 / 0.858 & 24.51 / 0.871 & 26.61 / 0.865 \\
Nah _et al._ [51] & 29.08 / 0.914 & 25.73 / 0.874 & 27.41 / 0.894 \\
DeblurGAN-v2 [38] & 29.55 / 0.934 & 26.61 / 0.875 & 28.08 / 0.905 \\
SRN [64] & 30.26 / 0.934 & 28.36 / 0.915 & 29.31 / 0.925 \\
Gao _et al._ [20] & 30.90 / 0.935 & 29.11 / 0.913 & 30.01 / 0.924 \\
DBGAN [81] & 31.10 / 0.942 & 28.94 / 0.915 & 30.02 / 0.929 \\
MT-RNN [53] & 31.15 / 0.945 & 29.15 / 0.918 & 30.15 / 0.932 \\
DMPHN [79] & 31.20 / 0.940 & 29.09 / 0.924 & 30.15 / 0.932 \\
Suin _et al._ [63] & 31.85 / 0.948 & 29.98 / 0.930 & 30.92 / 0.939 \\
SPAIR [55] & 32.06 / 0.953 & 30.29 / 0.931 & 31.18 / 0.942 \\
MIMO-UNet+ [8] & 32.45 / 0.957 & 29.99 / 0.930 & 31.22 / 0.944 \\
IPT [5] & 32.52 / – & – / – & – / – \\
MPRNet [77] & 32.66 / 0.959 & 30.96 / 0.939 & 31.81 / 0.949 \\
Restormer [76] & 32.92 / 0.961 & 31.22 / 0.942 & 32.07 / 0.952 \\
GRL-B (ours) & 33.93 / 0.968 & 31.65 / 0.947 & 32.79 / 0.958 \\ \hline \hline \end{tabular}
\end{table}
Table 2: _Single-image motion deblurring_ results. The GoPro dataset [51] is used for training.
GRL-B saves 24% of the parameters compared with Restormer. As shown in Tab. 3, GRL-B sets a new state-of-the-art performance of 40.20 dB PSNR on the RealBlur-R dataset.
**Defocus deblurring**. Tab. 1 shows the experimental results for defocus deblurring using single images and dual-pixel images. Our GRL outperforms the previous methods for all three scene types. Compared with Restormer on the combined scenes, our GRL achieves performance boosts of 0.20 dB and 0.38 dB for single-image and dual-pixel defocus deblurring, respectively. Compared with Uformer [74] and IFAN [41], GRL achieves PSNR gains of 1.39 dB and 1.05 dB for the dual-pixel setting.
### Image restoration based on synthetic data
Investigating image restoration with synthetic data is also valuable for revealing the capacity of restoration methods. Therefore, besides the experiments on real data, we also study the performance of the network on synthetic data.
**Image denoising**. First, the experimental results on Gaussian image denoising are shown in Tab. 4. For a fair comparison between different models, both the network complexity and accuracy are shown in the table, and several key findings can be observed. **I.** The tiny version GRL-T is extremely efficient, reducing model complexity by two orders of magnitude (only 0.76% of [5] and 2.7% of DRUNet [80]) while not sacrificing network accuracy. **II.** The small version GRL-S performs competitively with the previous state-of-the-art SwinIR [43] and Restormer [76]. **III.** On Urban100, the base version outperforms Restormer by a large margin (0.44 dB PSNR gain for color images at noise level 50).
**Image SR**. Experimental results for classical image SR are shown in Tab. 5. Both lightweight models and accurate SR models are summarized, and similar conclusions can be drawn from the results. **I.** Among the lightweight networks, GRL-T outperforms both convolution- and self-attention-based networks, including DBPN [25], SwinIR [43], and EDT [42]. Compared with EDT, significant improvements are obtained on the Urban100 and Manga109 datasets (0.44 dB and 0.22 dB for \(\times 4\) SR). **II.** GRL-B sets the new state of the art for accurate image SR. **III.** GRL-S achieves a good balance between network complexity and SR image quality.
**JPEG compression artifact removal**. The experimental results for color and grayscale images are shown in Tab. 6. Four image quality factors ranging from 10 to 40 for JPEG
compression are studied. As shown in the table, the proposed GRL-S network consistently outperforms the previous state-of-the-art methods across different datasets and quality factors. Notably, GRL-S has a much smaller model complexity than FBCNN.
**Demosaicking**. Results for image demosaicking are shown in Tab. 8. The proposed method outperforms the previous methods RNAN [85] and DRUNet [80] significantly.
### Real image restoration based on data synthesis
Finally, we also investigate the performance of the network for real-world image restoration. The aim is to super-resolve a low-quality image by an upscaling factor of 4. Since there are no ground-truth images for this task, only visual comparisons are given in Fig. 7. Compared with the other methods, the proposed GRL is able to remove more artifacts from the low-resolution images.
### Ablation study
**Influence of the similarity comparison method**. As mentioned in Sec. 3.4, for a theoretical guarantee of similarity propagation, a mathematical metric rather than a dot product should be used. To study the difference between the two, image restoration results with both operations are compared in Tab. 9. As revealed by the table, the dot product is very competitive and outperforms the Euclidean distance metric in several settings. Considering this, the dot product is used.
**Influence of the anchor projections**. The anchor projection operation helps to summarize the information in the feature map. The corresponding ablation study is shown in Tab. 10. Considering both accuracy and parameter budget, average pooling followed by a linear projection is adopted.
## 6 Conclusion
In this paper, we proposed GRL, a network with efficient and explicit hierarchical modelling capacities for image restoration. The proposed network was mainly inspired by two image properties including cross-scale similarity and anisotropic image features. Based on that, we proposed the efficient anchored stripe self-attention module for long-range dependency modelling. Then a versatile network architecture was proposed for image restoration. The proposed network can model image hierarchies in the global, regional, and local ranges. Owing to the advanced computational mechanism, the proposed network architecture achieves state-of-the-art performances for various image restoration tasks.
**Acknowledgements.** This work was partly supported by ETH Zurich General Fund (OK), Meta Reality Labs and the Alexander von Humboldt Foundation.
\begin{table}
\end{table}
Table 6: _Grayscale image JPEG compression artifact removal_ results for quality factors 10-40. As a comparison of model size, the parameter counts of FBCNN [29] and GRL-S are 71.92M and 3.12M, respectively.
\begin{table}
\begin{tabular}{l|l|c c c|c c c|c c c} \hline \multirow{2}{*}{Test set} & \multirow{2}{*}{Metric} & \multicolumn{3}{c|}{Color DN} & \multicolumn{3}{c|}{Gray DN} & \multicolumn{3}{c}{Image SR} \\ & & \(\sigma 15\) & \(\sigma 25\) & \(\sigma 50\) & \(\sigma 15\) & \(\sigma 25\) & \(\sigma 50\) & \(\times 2\) & \(\times 3\) & \(\times 4\) \\ \hline
\multirow{2}{*}{BSD68 or B100} & Euclidean & 35.02 & 32.56 & 29.42 & 31.84 & 29.36 & 26.43 & 32.30 & 29.19 & 27.67 \\
 & Dot product & 35.10 & 32.64 & 29.54 & 31.85 & 29.39 & 26.44 & 32.33 & 29.22 & 27.70 \\ \hline
\multirow{2}{*}{Urban100} & Euclidean & 34.63 & 32.28 & 28.94 & 33.25 & 30.64 & 27.17 & 32.76 & 28.62 & 26.50 \\
 & Dot product & 34.77 & 32.41 & 29.19 & 33.28 & 30.75 & 27.26 & 32.88 & 28.78 & 26.67 \\ \hline \end{tabular}
\end{table}
Table 9: Ablation study on the similarity comparison operation (PSNR in dB).
\begin{table}
\begin{tabular}{l|c|c} \hline Anchor projection operation & \# Params [M] & PSNR on Set5 \\ \hline Depthwise Conv & 3.17 & 35.03 \\ Conv & 4.19 & 35.03 \\ Patch merging & 3.53 & 34.98 \\ Maxpool + Linear Projection & 3.12 & 35.02 \\ Avgpool + Linear Projection & 3.12 & 35.03 \\ \hline \end{tabular}
\end{table}
Table 10: Ablation study on the anchor projection operation.
Figure 7: Visual results for real-world image SR. |
2305.15975 | Triplet Knowledge Distillation | In Knowledge Distillation, the teacher is generally much larger than the
student, making the solution of the teacher likely to be difficult for the
student to learn. To ease the mimicking difficulty, we introduce a triplet
knowledge distillation mechanism named TriKD. Besides teacher and student,
TriKD employs a third role called anchor model. Before distillation begins, the
pre-trained anchor model delimits a subspace within the full solution space of
the target problem. Solutions within the subspace are expected to be easy
targets that the student could mimic well. Distillation then begins in an
online manner, and the teacher is only allowed to express solutions within the
aforementioned subspace. Surprisingly, benefiting from accurate but
easy-to-mimic hints, the student can finally perform well. After the student is
well trained, it can be used as the new anchor for new students, forming a
curriculum learning strategy. Our experiments on image classification and face
recognition with various models clearly demonstrate the effectiveness of our
method. Furthermore, the proposed TriKD is also effective in dealing with the
overfitting issue. Moreover, our theoretical analysis supports the rationality
of our triplet distillation. | Xijun Wang, Dongyang Liu, Meina Kan, Chunrui Han, Zhongqin Wu, Shiguang Shan | 2023-05-25T12:12:31Z | http://arxiv.org/abs/2305.15975v1 | # Triplet Knowledge Distillation
###### Abstract
In Knowledge Distillation, the teacher is generally much larger than the student, making the solution of the teacher likely to be difficult for the student to learn. To ease the mimicking difficulty, we introduce a triplet knowledge distillation mechanism named TriKD. Besides teacher and student, TriKD employs a third role called anchor model. Before distillation begins, the pre-trained anchor model delimits a subspace within the full solution space of the target problem. Solutions within the subspace are expected to be easy targets that the student could mimic well. Distillation then begins in an online manner, and the teacher is only allowed to express solutions within the aforementioned subspace. Surprisingly, benefiting from accurate but easy-to-mimic hints, the student can finally perform well. After the student is well trained, it can be used as the new anchor for new students, forming a curriculum learning strategy. Our experiments on image classification and face recognition with various models clearly demonstrate the effectiveness of our method. Furthermore, the proposed TriKD is also effective in dealing with the overfitting issue. Moreover, our theoretical analysis supports the rationality of our triplet distillation.
## 1 Introduction
Knowledge distillation (KD) generally optimizes a small student model by transferring knowledge from a large teacher model. While most existing works aim to make the student learn better from a given teacher, the training of the teacher itself usually follows the trivial way and is rarely investigated. However, without any intervention, large models run a high risk of arriving at solutions that, while generalizing well, are difficult for small models to mimic, which unfavourably affects distillation. This argument is supported by recent work showing that optimization difficulty is a major barrier in knowledge distillation (Stanton et al., 2021), and is also confirmed by evidence that a larger teacher with higher accuracy counter-intuitively makes a worse student (Cho and Hariharan, 2019; Zhu and Wang, 2021; Mirzadeh et al., 2020). An illustration is shown in Fig.1(a-c). Considering the function space from input image to target output, the subspace consisting of functions that the teacher could fit, \(\mathcal{F}_{\mathrm{T}}\) (referred to as _hypothesis space_ in machine learning), is larger than that of the student, \(\mathcal{F}_{\mathrm{S}}\), since the teacher has larger capacity. When the solution of the teacher is out of the subspace attainable to the student (\(\mathcal{F}_{\mathrm{S}}\)), the student fails to mimic the teacher's solution well.
Our proposed method, TriKD, is based on online knowledge distillation and inspired by the following question: _could we make the teacher not only accurate, but also easy to mimic?_ In this paper, we achieve this goal by providing both the online teacher and the student with a common anchor, which constrains the two models to learn to solve the target task in a small-model-friendly way. The pre-trained anchor model is of **equal** capacity compared with the **student**, which ensures the function expressed by the anchor, \(f_{\mathrm{A}}\), is within \(\mathcal{F}_{\mathrm{S}}\) and easy for the student to mimic. By penalizing the function distances from the anchor to the student and, especially, to the teacher, the anchor pulls the search space of both models near \(f_{\mathrm{A}}\). The teacher then has a good chance to also lie within or close to \(\mathcal{F}_{\mathrm{S}}\), leading to easy mimicking. Meanwhile, even when restricted to a small search space, we find that the large teacher can still reveal high-accuracy solutions thanks to its high capacity. Benefiting from accurate but easy-to-mimic hints, the student can then mimic the teacher more faithfully and perform better after distillation. _In short, the anchor model, teacher model, and student model form a novel triplet knowledge distillation mechanism._ An illustration is shown in Fig.1(d).
Since an appropriate anchor is not trivial to find, we develop a curriculum strategy: the trained student from one TriKD generation is used as the anchor of the next generation, and a new pair of randomly initialized student and teacher joins in. Generation by generation, the newly trained student
becomes better and better, and its performance finally converges. Considering Fig.1(d), this process can be interpreted as gradually moving the anchor towards local minima.
Overall, _our main contributions are as below_: 1) We propose a novel triplet knowledge distillation mechanism named TriKD. TriKD makes distillation more efficient by making the teacher not only accurate but also easy to mimic. 2) To find a proper anchor model for TriKD, we propose a curriculum strategy where the student in one generation serves as the anchor of the next generation. 3) Our TriKD achieves state-of-the-art performance on knowledge distillation, and also demonstrates better generalization in tackling the overfitting issue. 4) Theoretical analysis from a statistical perspective is given to support the rationality of our triplet distillation.
## 2 Related work
### Offline Knowledge Distillation
Offline knowledge distillation makes the student learn from a **pre-trained** and **fixed** teacher. Hinton et al. (2015) propose mimicking the softened class distributions predicted by large teachers. Some studies (Ding et al., 2019; Wen et al., 2019) go a step further to explore the trade-off between the supervision of soft logits and hard task labels, and others (Tian et al., 2020; Xu et al., 2020) introduce auxiliary tasks to enrich the transferred knowledge. Instead of final outputs, many works exploit intermediate features (Romero et al., 2015; Kim et al., 2018; Jin et al., 2019; Zagoruyko and Komodakis, 2017; Chen et al., 2021) as transferred knowledge. Self-distillation, pioneered by Born again (Furlanello et al., 2018), makes the teacher share the same network architecture as the student, and continuously updates the student in an iterative manner. Our TriKD is related to Born again as it also involves such iterative training, but we use it to obtain a more reliable anchor.
### Online Knowledge Distillation
Online knowledge distillation makes multiple randomly-initialized models collaboratively learn from scratch. This line of research is especially significant for scenarios without an available pre-trained teacher model. A monumental work is deep mutual learning (DML) (Zhang et al., 2018). During the training phase, DML uses a pool of randomly initialized models as the student pool, and each student is guided by the outputs of the other peers as well as the task label. Based on DML, some works (Zhang et al., 2020; Yao and Sun, 2020) additionally take intermediate features into account, and others (Guo et al., 2020; Chen et al., 2020) design different mimicking targets. Our TriKD is also built upon DML, as the teacher and the student are all randomly initialized and learn mutually from each other, but we additionally incorporate an anchor model to enhance distillation.
### 'Larger Teacher, Worse Student'
Intuitively, the performance of the student should increase when the teacher has larger capacity and higher performance.
Figure 1: An intuitive illustration of our motivation. The 2D plane represents the function space from input image to task-specific output. Every neural network with compatible input and output formats corresponds to a certain point on the plane, and the color represents the expected risk; darker means lower risk. The small model is the target student and its performance is our major interest. As the large teacher model has a stronger fitting ability than the student, the collection of functions it could attain, \(\mathcal{F}_{\mathrm{T}}\), is also larger than \(\mathcal{F}_{\mathrm{S}}\). (a) When trained independently, the teacher model may step towards local minima out of the scope that the student can fit well. (b)(c) For both online and offline distillation, the large model is likely to lie beyond the subspace attainable to the student model. This makes the student, though performing better, still lie far away from the optimum, leading to a sub-optimal solution. (d) In our TriKD, a pre-trained anchor model is used to pull both the teacher and student models within or near the subspace attainable to the student model, making the teacher easy to mimic. The mutual learning between teacher and student then makes the student learn a high-quality solution with better generalization.
However, Cho and Hariharan (2019) identify that a very large teacher actually makes the student deteriorate. This phenomenon has also been witnessed by subsequent works (Mirzadeh et al., 2020; Zhu and Wang, 2021), and has been attributed to the capacity mismatch between teacher and student. To overcome this problem, ESKD (Cho and Hariharan, 2019) proposes an early-stopping strategy, and SCKD (Zhu and Wang, 2021) automatically adjusts the distillation process by considering the gradient similarity between the teacher's and the student's distillation losses. TAKD (Mirzadeh et al., 2020) divides the distillation process into multiple stages, and introduces intermediate-sized models, called teacher assistants, to bridge the capacity gap between the original teacher and student. While TAKD (Mirzadeh et al., 2020) treats mimicking difficulty as an inherent property of teacher model capacity, _i.e._, larger teachers are inherently harder to mimic, we believe that a given large network with fixed capacity should be able to fit both hard and easy functions, and that we can make a large teacher easy to mimic by deliberately making the function it expresses easy. Detailed comparisons between TAKD and our TriKD are provided in Appendix C.1.
## 3 Method
### Triplet Distillation
Our TriKD incorporates three models: the online teacher \(\mathrm{T}\), the student \(\mathrm{S}\), and the anchor \(\mathrm{A}\). Among them, the anchor supervises both the teacher and the student, while the student and the teacher learn mutually from each other. At the beginning of the distillation process, the anchor is already fully trained on the target task, while the student and the teacher are randomly initialized. During distillation, the parameters of the anchor model remain fixed, while the parameters of the other two models are optimized, as detailed below.
#### 3.1.1 Guidance from anchor to teacher/student
The anchor \(\mathrm{A}\) is designed to constrain the student \(\mathrm{S}\) and the teacher \(\mathrm{T}\) to learn to solve the target task in a student-friendly manner. For this purpose, we first ensure the function expressed by the anchor itself, \(f_{\mathrm{A}}\), is easily attainable to the student. This is achieved by making the anchor model \(\mathrm{A}\) of the same architecture and size as the student \(\mathrm{S}\), and already trained on the target task. We then try to constrain the search space of both the teacher and the student to be near \(f_{\mathrm{A}}\), which is realized through penalizing the KL-divergence from the anchor to the teacher/student:
\[\mathcal{L}_{KL}(f_{\mathrm{A}},f_{\mathrm{T}})=\sum_{i=1}^{N}\tau^{2}\mathbf{ KL}\left(f_{\mathrm{A}}(x_{i})||f_{\mathrm{T}}(x_{i})\right), \tag{1}\]
\[\mathcal{L}_{KL}(f_{\mathrm{A}},f_{\mathrm{S}})=\sum_{i=1}^{N}\tau^{2}\mathbf{ KL}\left(f_{\mathrm{A}}(x_{i})||f_{\mathrm{S}}(x_{i})\right), \tag{2}\]
where \(x\) denotes a training sample, \(N\) is the number of training samples, and \(\tau\) represents the temperature used to soften the
Figure 2: An overview of Triplet Knowledge Distillation. In the \(g\)th generation, a pre-trained anchor \(\mathrm{A}_{g}\) supervises a pair of randomly initialized student \(\mathrm{S}_{g}\) and teacher \(\mathrm{T}_{g}\); the student and the teacher also learn mutually from each other. After the \(g\)th generation, the student \(\mathrm{S}_{g}\) will become the new anchor \(\mathrm{A}_{g+1}\) for the \((g+1)\)th generation. Supervision from task label is omitted in the figure.
output distributions. Specifically,
\[f_{(\cdot)}(x)=\sigma(\frac{\mathbf{z}_{(\cdot)}(x)}{\tau}), \tag{3}\]
where \(\sigma\) denotes the softmax function, and \(\mathbf{z}\) denotes the logit scores output by the penultimate layer of the neural network. In this way, the teacher is prevented from solutions that are far from the anchor, and thus has a good chance to lie within or close to \(\mathcal{F}_{\mathrm{S}}\). It is then reasonable to expect that the function expressed by the teacher, \(f_{\mathrm{T}}\), would be a relatively easy mimicking target for the student. We show experimental results supporting this expectation in 4.3, which demonstrate that the constraint from the anchor does make mimicking easier, as the teacher-student behavior similarity becomes substantially higher.
#### 3.1.2 Mutual distillation between teacher and student
When the anchor \(\mathrm{A}\) is not considered, the rest of TriKD follows the standard online knowledge distillation method DML (Zhang et al., 2018). Specifically, the student and the online teacher not only learn from the hard labels, but also mutually draw lessons from the training experiences of each other. From the student's perspective, the loss regarding the hard label is the standard cross-entropy loss \(\mathcal{L}_{ce}(f_{\mathrm{S}})\), defined as:
\[\mathcal{L}_{ce}(f_{\mathrm{S}})=-\sum_{i=1}^{N}\sum_{k=1}^{K}y_{i}^{k}\log(f_{\mathrm{S}}^{k}(x_{i})), \tag{4}\]
where \(K\) is the number of classes and \(y\) is the hard classification label. Furthermore, the student also learns from the teacher:
\[\mathcal{L}_{KL}(f_{\mathrm{T}},f_{\mathrm{S}})=\sum_{i=1}^{N}\tau^{2}\mathbf{ KL}\left(f_{\mathrm{T}}(x_{i})||f_{\mathrm{S}}(x_{i})\right). \tag{5}\]
Combining this with the constraint from the anchor, the complete loss function for the student is:
\[\mathcal{L}_{\mathrm{S}}=w_{1}\mathcal{L}_{ce}(f_{\mathrm{S}})+w_{2}\mathcal{ L}_{KL}(f_{\mathrm{T}},f_{\mathrm{S}})+w_{3}\mathcal{L}_{KL}(f_{\mathrm{A}},f_{ \mathrm{S}}). \tag{6}\]
Similarly, the loss function for the teacher is in the symmetric form:
\[\mathcal{L}_{\mathrm{T}}=w_{4}\mathcal{L}_{ce}(f_{\mathrm{T}})+w_{5}\mathcal{ L}_{KL}(f_{\mathrm{S}},f_{\mathrm{T}})+w_{6}\mathcal{L}_{KL}(f_{\mathrm{A}},f_{ \mathrm{T}}), \tag{7}\]
where \(w\) is the weight of each loss. For \(\mathcal{L}_{ce}\), \(\tau\) is fixed to 1, whereas for \(\mathcal{L}_{KL}\), \(\tau\) is a hyper-parameter to tune.
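For concreteness, a minimal PyTorch sketch of the loss computation in Eq. (6) and Eq. (7) is given below; the weight values, variable names, and temperature are illustrative rather than the exact training configuration. Note that `F.kl_div` expects log-probabilities as its first argument.

```python
import torch.nn.functional as F

def kd_kl(p_logits, q_logits, tau):
    # tau^2-scaled KL(p || q) between temperature-softened distributions
    p = F.softmax(p_logits / tau, dim=1)
    log_q = F.log_softmax(q_logits / tau, dim=1)
    return F.kl_div(log_q, p, reduction="batchmean") * tau * tau

def triplet_losses(z_s, z_t, z_a, y, w=(1, 1, 1, 1, 1, 1), tau=4.0):
    # z_s, z_t, z_a: logits of student, teacher, and frozen anchor; y: hard labels
    z_a = z_a.detach()  # no gradient flows into the anchor
    loss_s = (w[0] * F.cross_entropy(z_s, y)
              + w[1] * kd_kl(z_t.detach(), z_s, tau)   # teacher -> student, Eq. (5)
              + w[2] * kd_kl(z_a, z_s, tau))           # anchor  -> student, Eq. (2)
    loss_t = (w[3] * F.cross_entropy(z_t, y)
              + w[4] * kd_kl(z_s.detach(), z_t, tau)   # student -> teacher
              + w[5] * kd_kl(z_a, z_t, tau))           # anchor  -> teacher, Eq. (1)
    return loss_s, loss_t
```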
Our TriKD is based on online knowledge distillation, and uses an additional anchor to make the teacher easy to mimic by constraining the search space. On the other hand, we hope the teacher, with its large capacity and correspondingly strong learning ability, can still find a low-expected-risk solution to accurately guide the student, even though its search space is constrained by the anchor. Note that there exists a potential risk: if the constraint from the anchor is too strong (\(w_{3}\) and \(w_{6}\) are too large), the performance of the teacher may be upper-bounded by the anchor, leading to easy but inaccurate teacher solutions. However, experiments in 4.3 and 4.4 show that with proper hyper-parameters, the teacher can simultaneously be easy to mimic (4.3) and accurate (4.4). This means that low mimicking difficulty can be attained even when the constraint from the anchor is relatively mild, and the constraint does not limit the accuracy of the teacher until it grows much stronger. There is thus a range of constraint strengths where the merits of both a low-mimicking-difficulty and a low-expected-risk teacher can be enjoyed simultaneously. With the aforementioned merits, the student can benefit substantially more from TriKD than from existing distillation methods, and finally become more accurate than existing models.
### Curriculum learning for Proper Anchor
Intuitively, the selection of anchor model affects the performance of TriKD, and it is thus of great significance to find a proper anchor. However, such an appropriate anchor is not trivial to find. We therefore propose a curriculum strategy to achieve this goal.
The curriculum process is composed of a sequence of **generations**, each of which is a triplet distillation process as described in 3.1. In curriculum learning, the student of the \(g\)th generation will become the anchor of the \((g+1)\)th generation, denoted as:
\[\mathrm{A}_{g+1}=\mathrm{S}_{g}^{*}, \tag{8}\]
where \(\mathrm{S}_{g}^{*}\) is the student trained in the \(g\)th generation. The student and the teacher are randomly re-initialized at the beginning of each generation. We empirically find that the performance of the student tends to rise within the first several generations; it then converges, and more generations bring no further improvement. We therefore take the student with converged performance as the final model. Fig.2 shows the whole pipeline of the proposed method.
For the first generation, as there is no last-generation student available to serve as the anchor, we simply pre-train the anchor model with only online distillation between it and the teacher. We also tried a trivial anchor trained only with task labels, and found it achieves comparable performance but with slower convergence. Therefore, in this paper we use the student trained with vanilla online distillation as the anchor for generation 1, and we refer to the vanilla online distillation process itself as generation 0.
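The curriculum then reduces to a short outer loop. In the sketch below, `make_student`, `make_teacher`, and `train_trikd_generation` are hypothetical helpers standing for fresh model initialization and one full run of the triplet distillation described in 3.1.

```python
def curriculum_trikd(make_student, make_teacher, train_trikd_generation, G=4):
    anchor = None  # generation 0: plain online distillation without an anchor
    for g in range(G + 1):
        student, teacher = make_student(), make_teacher()  # random re-initialization
        student = train_trikd_generation(student, teacher, anchor)
        anchor = student  # Eq. (8): the trained student becomes the next anchor
    return student  # performance typically converges after a few generations
```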
### Theoretical Analysis
We explain why TriKD improves knowledge distillation in the formal context of risk minimization decomposition. Lopez-Paz _et al._ (2015) decomposed the excess risk of a student trained only with hard labels as follows:
\[R(f_{\mathrm{S}})-R(f_{\mathrm{R}})\leq O\left(\frac{|\mathcal{F}_{\mathrm{S}}|_{ C}}{\sqrt{n}}\right)+\epsilon_{1}, \tag{9}\]
where \(R(\cdot)\) denotes the expected risk, \(f_{\mathrm{S}}\) is the student function in function class \(\mathcal{F}_{\mathrm{S}}\), and \(f_{\mathrm{R}}\) is the real (target) function. The \(O(\cdot)\) term is the estimation error, and the \(\epsilon\) term is the approximation error. \(|\cdot|_{C}\) is some appropriate capacity measure of a function class. For distillation, the teacher learns from the target function, leading to the following excess risk:
\[R(f_{\mathrm{T}})-R(f_{\mathrm{R}})\leq O\left(\frac{|\mathcal{F}_{\mathrm{T}} |_{C}}{n^{\alpha}}\right)+\epsilon_{2}, \tag{10}\]
and the student learns from the teacher, leading to the following excess risk:
\[R(f_{\mathrm{S}})-R(f_{\mathrm{T}})\leq O\left(\frac{|\mathcal{F}_{\mathrm{S}} |_{C}}{n^{\beta}}\right)+\epsilon_{3}, \tag{11}\]
where \(\alpha,\beta\) range between \([\frac{1}{2},1]\), higher value means easier problem and faster learning. As analyzed in (Lopez-Paz et al., 2015), the effectiveness of vanilla knowledge distillation is theoretically ensured by the following inequality:
\[O\left(\frac{|\mathcal{F}_{\mathrm{T}}|_{C}}{n^{\alpha}}\right)+O\left(\frac{ |\mathcal{F}_{\mathrm{S}}|_{C}}{n^{\beta}}\right)+\epsilon_{2}+\epsilon_{3} \leq O\left(\frac{|\mathcal{F}_{\mathrm{S}}|_{C}}{\sqrt{n}}\right)+\epsilon_{1}. \tag{12}\]
Furthermore, if the left side of Eq. (12) decreases, the excess risk of the student becomes lower, meaning better performance. Next, we show that introducing the anchor model \(\mathrm{A}\) lowers the left side of Eq. (12).
Considering vanilla online knowledge distillation, its loss function is:
\[\begin{split}\mathcal{L}_{online}=& w_{1}\mathcal{L}_{ce}(f_{ \mathrm{S}})+w_{2}\mathcal{L}_{KL}(f_{\mathrm{T}},f_{\mathrm{S}})\\ &+w_{4}\mathcal{L}_{ce}(f_{\mathrm{T}})+w_{5}\mathcal{L}_{KL}(f_{ \mathrm{S}},f_{\mathrm{T}}).\end{split} \tag{13}\]
TriKD can be equivalently recognized as minimizing \(\mathcal{L}_{online}\), but with additional inequality constraints coming from the anchor:
\[\begin{split}\underset{f_{\mathrm{S}},f_{\mathrm{T}}}{min}& \mathcal{L}_{online},\\ s.t.&\mathcal{L}_{KL}(f_{\mathrm{A}},f_{\mathrm{S}})< \delta,\\ &\mathcal{L}_{KL}(f_{\mathrm{A}},f_{\mathrm{T}})<\delta,\end{split} \tag{14}\]
where \(\mathcal{L}_{KL}\) serves as a function distance metric to constrain the search space of the teacher and the student; \(\delta\) is the distance threshold. Rather than directly solving Eq. (14), we can instead add penalty terms to the loss function to substitute the hard constraints, making the optimization much easier. We then get Eq. (6) and Eq. (7), which we actually optimize in practice. Considering Eq. (14), it means conducting the vanilla online distillation, but with constraints that shrink the search space of teacher \(\mathrm{T}\) from the entire \(\mathcal{F}_{\mathrm{T}}\) to its subset \(\mathcal{F}_{\mathrm{T}}^{{}^{\prime}}\):
\[\mathcal{F}_{\mathrm{T}}^{{}^{\prime}}=\{f\,|\,f\in\mathcal{F}_{\mathrm{T}},\ \mathcal{L}_{KL}(f_{\mathrm{A}},f)<\delta\}, \tag{15}\]
and similarly shrink the search space of student \(\mathrm{S}\) from \(\mathcal{F}_{\mathrm{S}}\) to its subset \(\mathcal{F}_{\mathrm{S}}^{{}^{\prime}}\):
\[\mathcal{F}_{\mathrm{S}}^{{}^{\prime}}=\{f\,|\,f\in\mathcal{F}_{\mathrm{S}},\ \mathcal{L}_{KL}(f_{\mathrm{A}},f)<\delta\}. \tag{16}\]
The student and especially the teacher are then asked to find a solution within the shrunken search spaces \(\mathcal{F}_{\mathrm{S}}^{{}^{\prime}}\) and \(\mathcal{F}_{\mathrm{T}}^{{}^{\prime}}\). Following the left side of Eq. (12), the risk bound for our proposed TriKD is:
\[O\big{(}\frac{|\mathcal{F}_{\mathrm{T}}^{{}^{\prime}}|_{C}}{n^{\alpha^{{}^{ \prime}}}}\Big{)}+O\Big{(}\frac{|\mathcal{F}_{\mathrm{S}}^{{}^{\prime}}|_{C}}{ n^{\beta^{{}^{\prime}}}}\Big{)}+\epsilon_{2}^{{}^{\prime}}+\epsilon_{3}^{{}^{ \prime}}. \tag{17}\]
First, as \(\mathcal{F}_{\mathrm{S}}^{{}^{\prime}}\), \(\mathcal{F}_{\mathrm{T}}^{{}^{\prime}}\) are subsets of \(\mathcal{F}_{\mathrm{S}}\), \(\mathcal{F}_{\mathrm{T}}\), we have \(|\mathcal{F}_{\mathrm{S}}^{{}^{\prime}}|_{C}\leq|\mathcal{F}_{\mathrm{S}}|_{C}\), \(|\mathcal{F}_{\mathrm{T}}^{{}^{\prime}}|_{C}\leq|\mathcal{F}_{\mathrm{T}}|_{C}\). Next, recall that TriKD is built upon two empirically-validated expectations: 1) the teacher would be easy to mimic if its search space is near \(f_{\mathrm{A}}\) (_i.e._ it is taken from \(\mathcal{F}_{\mathrm{T}}^{{}^{\prime}}\) rather than \(\mathcal{F}_{\mathrm{T}}\)), and 2) even with the search space constrained to \(\mathcal{F}_{\mathrm{T}}^{{}^{\prime}}\), the teacher could still find a low-expected-risk solution therein to provide accurate enough guidance. The first expectation implies that \(\beta^{{}^{\prime}}>\beta\), _i.e._ mimicking the teacher is easier for the student in our case. The second implies that
\[O\Big{(}\frac{|\mathcal{F}_{\mathrm{T}}^{{}^{\prime}}|_{C}}{n^{\alpha^{{}^{ \prime}}}}\Big{)}+\epsilon_{2}^{{}^{\prime}}\approx O\Big{(}\frac{|\mathcal{F}_ {\mathrm{T}}|_{C}}{n^{\alpha}}\Big{)}+\epsilon_{2}, \tag{18}\]
indicating the teacher would present a similar expected risk either with or without the anchor. Now we have analyzed all the involved variables except the \(\epsilon_{3}\) term, and they all support that the bound in Eq. (17) is lower than the left side of Eq. (12). Finally, considering the \(\epsilon_{3}\) term, it signifies the approximation error from the student search space \(\mathcal{F}_{\mathrm{S}}\) to the teacher function \(f_{\mathrm{T}}\in\mathcal{F}_{\mathrm{T}}\):
\[\epsilon_{3}=\Big{(}\inf_{f\in\mathcal{F}_{\mathrm{S}}}R(f)\Big{)}-R(f_{\mathrm{ T}}). \tag{19}\]
According to Eq. (18), the difference in the \(R(f_{\mathrm{T}})\) term between TriKD and standard distillation is minor. For the infimum term, in TriKD \(\mathcal{F}_{\mathrm{S}}^{{}^{\prime}}\) replaces \(\mathcal{F}_{\mathrm{S}}\), and since \(\mathcal{F}_{\mathrm{S}}^{{}^{\prime}}\) is a subset of \(\mathcal{F}_{\mathrm{S}}\), its infimum should be higher, making \(\epsilon_{3}^{{}^{\prime}}\geq\epsilon_{3}\). However, it is unclear how large the difference is, because the infimum on \(\mathcal{F}_{\mathrm{S}}^{{}^{\prime}}\) could still be very low. More importantly, the impact of the \(\epsilon_{3}\) term on the total distillation process is limited, because the expected risk of real models in practice is far from the best they could theoretically attain. The influence of the \(\epsilon_{3}\) term should therefore be dwarfed by that of the other terms. _Combining all the aforementioned changes together, the bound in Eq. (17) is lower than the left side of Eq. (12), signifying better distillation_.
## 4 Experiments
In this section, we empirically validate our proposed methods from five aspects. In 4.1 we compare TriKD with state-of-the-art knowledge distillation methods on image classification to show the general effectiveness of the proposed
method. In 4.2, we validate the proposed method on the fine-grained problem of face recognition, with a special focus on the method's performance when confronting overfitting. In 4.3 and 4.4, we justify the rationality of our motivation. Specifically, in 4.3 we show that TriKD makes the teacher an easier mimicking target from the perspective of teacher-student behavior similarity; in 4.4 we show that the performance of the teacher is not limited by the small volume of \(\mathcal{F}^{{}^{\prime}}_{\mathrm{T}}\). In 4.5, we conduct ablation studies to dissect the effect of each involved component. Detailed descriptions of experimental settings, as well as additional experiments and ablations, are provided in the Appendix.
### Knowledge Distillation on Image Classification
We compare TriKD with state-of-the-art knowledge distillation methods on two widely-used image classification benchmarks: CIFAR100 (Krizhevsky et al., 2009) and ImageNet (Deng et al., 2009). Given a pair of model architectures, one large and one small, we choose the small model as the anchor and the student, and the large model as the teacher.
**CIFAR100** (Krizhevsky et al., 2009): results are shown in Table 1. TriKD raises the student's performance by 3.84% on average compared with the non-distillation baseline, and performs significantly better than vanilla KD (Hinton et al., 2015), with an average improvement of 2.16%. TriKD also outperforms state-of-the-art methods on all teacher-student pairs. Note that TriKD only uses logits for knowledge transfer, yet achieves better performance than methods involving more complex information such as intermediate feature maps (Chen et al., 2021; Romero et al., 2015; Ahn et al., 2019), attention maps (Zagoruyko and Komodakis, 2017), instance similarity (Tian et al., 2020), _etc._
**ImageNet**(Deng et al., 2009): to validate the efficacy of our method on large-scale datasets, we also compare TriKD with other methods on ImageNet. As shown in Table 2, TriKD also outperforms other methods, showing that the proposed triplet distillation mechanism could steadily produce high-quality models regardless of dataset volume.
### Knowledge Distillation on Face Recognition
We validate our proposed TriKD framework on the fine-grained problem of face recognition, with MobileFaceNet (Chen et al., 2018) as the main architecture. We use CASIA-WebFace (Yi et al., 2014) for training, and MegaFace (Kemelmacher-Shlizerman et al., 2016) for testing. Rank-1 face identification rate is reported.
Unlike CIFAR100 and ImageNet, where performance generally rises as model capacity increases (at least within the scope of our interest), training with the CASIA-WebFace dataset is frequently bothered by the
\begin{table}
\begin{tabular}{c c c c c c c c} \hline \hline
**Teacher** & **wrn-40-2** & **wrn-40-2** & **resnet56** & **resnet110** & **resnet110** & **resnet32x4** & **vgg13** \\
**Student** & **wrn-16-2** & **wrn-40-1** & **resnet20** & **resnet20** & **resnet32** & **resnet8x4** & **vgg8** \\ \hline Teacher & 75.61 & 75.61 & 72.34 & 74.31 & 74.31 & 79.42 & 74.64 \\ Student & 73.26 & 71.98 & 69.06 & 69.06 & 71.14 & 72.50 & 70.36 \\ \hline KD(Hinton et al., 2015) & 74.92 & 73.54 & 70.66 & 70.67 & 73.08 & 73.33 & 72.98 \\ FitNet(Romero et al., 2015) & 73.58 & 72.24 & 69.21 & 68.99 & 71.06 & 73.50 & 71.02 \\ AT(Zagoruyko and Komodakis, 2017) & 74.08 & 72.77 & 70.55 & 70.22 & 72.31 & 73.44 & 71.43 \\ DML(Zhang et al., 2018) & 75.41 & 74.73 & 71.22 & 71.47 & 73.52 & 75.36 & 74.58 \\ VID(Ahn et al., 2019) & 74.11 & 73.30 & 70.38 & 70.16 & 72.61 & 73.09 & 71.23 \\ CRD(Tian et al., 2020) & 75.64 & 74.38 & 71.63 & 71.56 & 73.75 & 75.46 & 74.29 \\ Review(Chen et al., 2021) & 76.12 & 75.09 & 71.89 & (71.86) & 73.89 & 75.63 & 74.84 \\ DKD(Zhang et al., 2022) & 76.24 & 74.81 & 71.97 & (71.66) & 74.11 & 76.32 & 74.68 \\ \hline TriKD(Ours) & **76.94** & **75.96** & **72.34** & **72.55** & **74.31** & **76.82** & **75.35** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Comparison of the top-1 accuracy (%) of different KD methods on CIFAR100. **Bold** and underline denote the best and the second-best results, respectively. For methods from KD to CRD, we quote the results from Tian et al. (2020). For Review to DKD, we show the results reported by their original authors. For DML, we report our reimplemented results. “\((\cdot)\)” means the result was not reported by the authors and we re-ran their provided code. Note that DML and TriKD do not involve a pre-trained teacher model.
\begin{table}
\begin{tabular}{c|c c|c c c c c c|c} \hline \hline Error(\%)Methods & Teacher & Student & KD & AT & OFD & CRD & Review & DKD & DML & TriKD(Ours) \\ \hline Top-1 & 73.31 & 69.75 & 70.66 & 70.69 & 70.81 & 71.17 & 71.61 & 71.70 & 71.18 & **71.88** \\ \hline Top-5 & 91.42 & 89.07 & 89.88 & 90.01 & 89.98 & 90.13 & 90.51 & 90.41 & 90.05 & **90.70** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Comparison of different KD methods on ImageNet. **Bold** and underline denote the best and the second-best results, respectively. The results of Review and DKD are from their original papers. Results of other existing methods are quoted from Tian et al. (2020).
overfitting problem since each person has only about 50 images, which is much smaller than in general image datasets. Intuitively, the constraint from the anchor prevents the teacher from expressing overly complicated functions. Therefore, we naturally wonder if TriKD could help alleviate the overfitting issue. Consequently, for experiments on face recognition, we especially care about the relationship between student capacity and performance. We fix the model size of the teacher, but adjust the model size of the student to investigate this relationship. For the sake of convenience, in each generation we make the anchor model \(\mathrm{A}\) slightly smaller than the student model \(\mathrm{S}\), so that with training only one time we can obtain a series of output models with increasing size. In all experiments unless otherwise specified, the student model starts with width 0.5X of MobileFaceNet and each generation uniformly increases the width of the network by 0.125 times the MobileFaceNet size. The teacher model is 2.0X of MobileFaceNet in all generations.
We first investigate the performance of the student w.r.t. its capacity. The 150k CASIA-WebFace subset is used for this experiment. The results are shown in Fig.3. The **Baseline(L)** with only task label loss performs poorly, starting in an underfitting state and then growing into an overfitting state. In contrast, our **TriKD** not only performs better than the baseline by a large margin for all model sizes (even up to 10% in G5, MobileFaceNet 1.125X), but also overcomes the overfitting issue, making the performance consistently rise as model capacity grows. Ablative results are also shown in Fig.3, indicating both the teacher and the anchor are indispensable. We defer detailed analysis of this ablation study to Sec.4.5.
We further compare TriKD with the existing methods including KD (Hinton et al., 2015), DML (Zhang et al., 2018), and BYOT (Zhang et al., 2019). The 50k, 150k subsets and the full set with 490k images of CASIA-WebFace are used for training. The experimental results are shown in Table 3. As can be seen, our TriKD achieves better accuracy. Importantly, the advantage of TriKD is more significant with fewer training data: on the 490k training set, TriKD improves over the baseline by 3%, and outperforms DML by 0.9%; on the 50k training set, our TriKD achieves a larger improvement of 20.7% compared with the baseline, and of 9.19% compared with DML. The advantage in small-data tasks again indicates that TriKD could help alleviate the overfitting problem.
### Teacher-Student Behavior Similarity
We introduce the anchor \(\mathrm{A}\) in hopes that it could lower the difficulty for the student to mimic the teacher. If it does work as expected, we should see an increase in teacher-student behavior similarity because the student would mimic the teacher more faithfully. Here we conduct experiments to validate this phenomenon.
We show the KL-divergence between outputs of the student and the teacher trained on CIFAR100. For in-domain data, we report the results on CIFAR100. For out-of-domain data, where the student is more likely to act differently from the teacher, we report the results on SVHN (Netzer et al., 2011) and STL10 (Coates et al., 2011). Table 4 shows the results. Compared with offline knowledge distillation, online distillation has a huge advantage in increasing teacher-student behavior similarity. On the other hand, our TriKD steadily shows great improvement upon online distillation, showing that the anchor does make the mimicking easier. The increase in teacher-student behavior similarity shows that the anchor model successfully drives the large teacher into easy-to-mimic solutions, supporting the expectation in 3.1.1.
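For reference, this behavior-similarity measurement can be reproduced in a few lines. The sketch below is a minimal illustration rather than our exact evaluation script; the direction of the KL-divergence and the unit temperature are assumptions, since they are not specified above.

```python
import torch
import torch.nn.functional as F

def behavior_kl(student_logits: torch.Tensor,
                teacher_logits: torch.Tensor,
                tau: float = 1.0) -> torch.Tensor:
    """Mean KL(teacher || student) over a batch, computed from raw logits."""
    log_p_s = F.log_softmax(student_logits / tau, dim=1)
    p_t = F.softmax(teacher_logits / tau, dim=1)
    # F.kl_div(input=log-probs, target=probs) computes KL(target || input)
    return F.kl_div(log_p_s, p_t, reduction="batchmean")
```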
### Performance of Teacher after TriKD
In TriKD, the search space of the teacher is constrained by the anchor, and the teacher is expected to find a high-quality solution within the designated search space. This implies our expectation that the anchor would not bar the teacher from chasing good solutions. Here we investigate the performance of the teacher after TriKD to check if the expectation holds. The results are shown in Table 6. The teacher ac
Figure 3: Evaluation of TriKD with different student sizes on Megaface in terms of rank-1 face identification rate (%). The baseline is trained with the hard label only. Besides the baseline and our TriKD, we also conduct ablative studies (\(\mathrm{L}+\mathrm{T}\) and \(\mathrm{L}+\mathrm{A}\)) to reveal the effect of the teacher \(\mathrm{T}\) and the anchor \(\mathrm{A}\), respectively.
\begin{table}
\begin{tabular}{c|c c c c|c} \hline Dataset Methods & baseline & KD & DML & BYOT & TriKD (Ours) \\ \hline
50k & 35.24 & 40.48 & 46.76 & 44.26 & 55.95 \\ \hline
150k & 64.00 & 71.80 & 74.10 & 72.80 & 79.30 \\ \hline
490k & 81.50 & 83.00 & 83.60 & 81.50 & 84.50 \\ \hline \end{tabular}
\end{table}
Table 3: Comparison with existing methods on MegaFace in terms of rank-1 face identification rate (%). Training set: CASIA-WebFace. Backbone: MobileFaceNet.
tually outperforms its trivially-trained baseline, and also performs better than online distillation in most cases. The result indicates that the teacher is not encumbered by the constraint from the anchor, and thus with TriKD, we can simultaneously enjoy the merits of an easy-to-mimic and accurate teacher model. Note that existing works have already shown that online knowledge distillation makes both the large model (teacher) and the small model (student) improve (Zhang et al., 2018). However, it is also shown in (Tian et al., 2020) that after switching from offline distillation to online distillation, the performance gain of the teacher could hardly trigger a performance gain of the student. Our TriKD, in contrast, makes the accurate teacher model also easy to mimic, and thus the student can benefit more from distillation.
### Ablation study
The proposed triplet distillation consists of three roles, _i.e._, the teacher \(\mathrm{T}\), the target student \(\mathrm{S}\), and the anchor \(\mathrm{A}\). From the student perspective, it is supervised by \(\mathrm{T}\), \(\mathrm{A}\), and the task label \(\mathrm{L}\). Here we investigate the influence of each role.
For CIFAR100, results are shown in Table 7. The \(\mathrm{L}+\mathrm{T}\) setting is similar to DML (Zhang et al., 2018). The \(\mathrm{L}+\mathrm{A}\) setting is similar to Born Again (Furlanello et al., 2018), where the first-generation anchor is a trivially trained model. In contrast, the first-generation anchor in \(\mathrm{L}+\mathrm{A}^{*}\) is trained with \(\mathrm{L}+\mathrm{T}\). For both conditions we report the result after three iterative generations. The results show that both \(\mathrm{A}\) and \(\mathrm{T}\) can boost the performance of the target student when introduced individually. However, simply combining these two methods by making the student of \(\mathrm{L}+\mathrm{T}\) the first-generation anchor of \(\mathrm{L}+\mathrm{A}\) brings only minor improvement. Our TriKD, in contrast, further improves the performance of the target student.
For CASIA-Webface, results are shown in Fig.3(a). The Baseline (\(\mathrm{L}\)) with only task label loss starts in an under-fitting state and then grows to an overfitting state. Then, adding only the anchor \(\mathrm{L}+\mathrm{A}\) and adding only the teacher \(\mathrm{L}+\mathrm{T}\) both bring impressive improvement, illustrating the effectiveness of each role. When including all three roles, further improvement is obtained, clearly illustrating the necessity and effectiveness of the three different roles. We refer readers to Appendix for more ablative experiments.
## Appendix A Variance and Bias Analysis
In this section, we empirically analyze how TriKD works from a variance-bias perspective. We will show that 1) TriKD reduces the variance of the target student, and 2) a large teacher induces a better-calibrated distribution for the student to mimic, leading to lower bias. We hope the analysis in this section could provide some extra insight.
According to Proposition 3 in (Menon et al., 2020), for constant \(C>0\) and any student network \(\mathrm{S}\), the risk in vanilla knowledge distillation could be bounded as:
\[\begin{split}&\mathbb{E}\left[\left(\bar{R}(f_{\mathrm{S}},D)-R(f_{\mathrm{S}})\right)^{2}\right]\\ &\leq\frac{1}{N}\mathbb{V}\left[\mathcal{L}(f_{\mathrm{T}}(x),f_{\mathrm{S}}(x))\right]+C\left(\mathbb{E}\left[\left\|f_{\mathrm{T}}(x)-f_{\mathrm{R}}(x)\right\|_{2}\right]\right)^{2},\end{split} \tag{20}\]
where \(\mathbb{E}\) denotes the expectation, \(\mathbb{V}\) denotes the variance, and \(\bar{R}(\cdot,D)\) is the empirical risk on dataset \(D\). \(\mathcal{L}\) is the distillation loss, typically the KL-divergence loss.
In TriKD, there are two types of supervision for the student, _i.e._, that from the teacher (\(f_{\mathrm{T}}\)) and that from the anchor model (\(f_{\mathrm{A}}\)). We apply two coefficients (\(w_{\mathrm{T}},w_{\mathrm{A}}\)), with \(w_{\mathrm{T}}+w_{\mathrm{A}}=1\), to combine them. Following Eq. (20), the variance-bias decomposition of TriKD is:
\[\begin{split}&\mathbb{E}\left[\left(\bar{R}(f_{\mathrm{S}},D)-R(f_{\mathrm{S}})\right)^{2}\right]\\ &\leq\frac{1}{N}\mathbb{V}\left[\mathcal{L}\left(w_{\mathrm{T}}f_{\mathrm{T}}(x)+w_{\mathrm{A}}f_{\mathrm{A}}(x),\,f_{\mathrm{S}}(x)\right)\right]\\ &\quad+C\left(\mathbb{E}\left[\left\|w_{\mathrm{T}}f_{\mathrm{T}}(x)+w_{\mathrm{A}}f_{\mathrm{A}}(x)-f_{\mathrm{R}}(x)\right\|_{2}\right]\right)^{2}.\end{split} \tag{21}\]
This error bound establishes a fundamental variance-bias trade-off when performing distillation. Specifically, it shows that the fidelity of the distilled risk's approximation to the expected one mainly depends on two factors: how variable the loss is given a random instance (the variance term), and how well the mimicking target \(w_{\mathrm{T}}f_{\mathrm{T}}(x)+w_{\mathrm{A}}f_{\mathrm{A}}(x)\) approximates the real output \(f_{\mathrm{R}}\) on average (the bias term). Our goal is to analyze how arranging the teacher model \(\mathrm{T}\) and the anchor model \(\mathrm{A}\) can lower the bound in Eq. (21).
For the **Variance** part, as shown in Fig.4, we conduct experiments to explore how to lower it. We consider five settings, _i.e._, _M0_: \(\mathrm{S}\) learns from \(\mathrm{A}\) with vanilla distillation, _M1_: \(\mathrm{S}\) learns from both \(\mathrm{A}\) and \(\mathrm{T}\) with vanilla offline distillation, _M2_: \(\mathrm{S}\) learns from \(\mathrm{A}\) with offline distillation and from \(\mathrm{T}\) with online distillation, _M3_: \(\mathrm{T}\) learns from \(\mathrm{A}\) with vanilla distillation and \(\mathrm{S}\) learns from \(\mathrm{T}\) with online learning, _M4_: both \(\mathrm{S}\) and \(\mathrm{T}\) learn from \(\mathrm{A}\) with vanilla distillation and \(\mathrm{S}\) learns from \(\mathrm{T}\) with online learning. Generally, we consider two main factors: the way model \(\mathrm{S}\) learns from model \(\mathrm{T}\) (vanilla offline distillation or online mutual distillation), and whether model \(\mathrm{T}\) learns from model \(\mathrm{A}\). Fig.4(a) reveals that online mutual learning makes an important contribution to decreasing the variance, and M4, which is used in TriKD, can attain lower variance when the size of model \(\mathrm{A}\) is small compared with \(\mathrm{T}\). Furthermore, we compare M4 with vanilla distillation (M0 and M1) as shown in Fig.4(b); M4 achieves the lowest variance under all the experimental settings. To sum up, the above experiments show that arranging the anchor \(\mathrm{A}\) and the teacher \(\mathrm{T}\) as in M4 and making \(\mathrm{A}\) small can greatly help reduce the variance.
For the **Bias** part, it follows:
\[\begin{split}&C\left(\mathbb{E}\left[\left\|w_{\mathrm{T}}f_{\mathrm{T}}(x)+w_{\mathrm{A}}f_{\mathrm{A}}(x)-f_{\mathrm{R}}(x)\right\|_{2}\right]\right)^{2}\\ &\leq C\left(\mathbb{E}\left[w_{\mathrm{T}}\left\|f_{\mathrm{T}}(x)-f_{\mathrm{R}}(x)\right\|_{2}+w_{\mathrm{A}}\left\|f_{\mathrm{A}}(x)-f_{\mathrm{R}}(x)\right\|_{2}\right]\right)^{2}.\end{split} \tag{22}\]
The second line is obtained from the triangle inequality. Minimizing this term means that we should make the introduced teacher model \(\mathrm{T}\) as well as the anchor model \(\mathrm{A}\) approximate the Bayes class-probability distribution \(f_{\mathrm{R}}\) better. In detail, it means the expected calibration error (ECE) (Naeini et al., 2015) of the two models should be small. In (Guo et al., 2017), the authors analysed the calibration measured by ECE in terms of different aspects, such as network depth, width, Batch Normalization, and weight decay. The experiments in (Guo et al., 2017) showed that increasing the width of a network makes the ECE first rise and then fall. To make it clearer, we conduct this experiment again in terms of the effect of network width on the face recognition task (Webface) and the image classification task (CIFAR100), and all the models are trained for enough epochs to ensure that they converge sufficiently. The backbones are MobileFaceNet and Resnet18, respectively, and we applied various widths including \(0.5X,1.0X,2.0X,3.0X,4.0X\). As shown in Fig.5, we observe that increasing the network width positively affects model calibration. As a result, we can minimize the bias term by making the model \(\mathrm{T}\) wider. The anchor \(\mathrm{A}\), however, faces a variance-bias trade-off: as shown in the variance part, a small anchor tends to help lower the variance, but it can degrade the bias, and vice versa. In this paper, we keep the anchor \(\mathrm{A}\) small (the same size as the student) in favor of low variance, and we leave further exploration of the trade-off to future work. Combining the above two parts, we can introduce a large model \(\mathrm{T}\) to M4, and keep the anchor \(\mathrm{A}\) small, which forms our proposed TriKD.
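For completeness, a minimal sketch of the ECE computation is given below; the equal-width binning and the choice of 15 bins are assumptions, as the exact binning behind Fig. 5 is not stated here.

```python
import numpy as np

def expected_calibration_error(probs: np.ndarray, labels: np.ndarray,
                               n_bins: int = 15) -> float:
    """ECE (Naeini et al., 2015): bin predictions by confidence and
    accumulate the sample-weighted |accuracy - confidence| gap per bin."""
    conf = probs.max(axis=1)
    correct = (probs.argmax(axis=1) == labels).astype(float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (conf > lo) & (conf <= hi)
        if mask.any():
            ece += mask.mean() * abs(correct[mask].mean() - conf[mask].mean())
    return float(ece)
```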
## Appendix B Experimental Details
**CIFAR100** (Krizhevsky et al., 2009) dataset consists of 60K images from 100 categories with a size of \(32\times 32\). In the standard protocol, 50k images are used for training and 10K for testing. We choose CIFAR-style resnet (He et al., 2016), wide-resnet (Zagoruyko and Komodakis, 2016) and vgg (Simonyan and Zisserman, 2014) as model architectures.
We train all the models for 240 epochs. The initial learning rate is 0.1 and is decayed by a factor of 10 at 150, 180, and 210 epochs, respectively. We run experiments on one Tesla-V100 GPU with a batch size of 128. An SGD optimizer with 0.0005 weight decay and 0.9 momentum is adopted. For all the experiments, we set \(w_{1}=w_{2}=w_{3}=w_{4}=w_{5}=w_{6}=1\) at the beginning. After epoch 150, where the learning rate decays for the first time, we decrease \(w_{1}\) to \(0.1\) and increase \(w_{2}\) to \(10\). For all experiments except vgg, the temperature \(\tau\) is set to 1 for \(\mathcal{L}_{KL}\); for vgg, we set it to 4.
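To make the use of these weights concrete, the following sketch shows one plausible way of assembling the student and teacher objectives from the six terms of the triplet; the exact assignment of \(w_{1}\)-\(w_{6}\) to the individual loss terms is our assumption, since only the weight values are specified above.

```python
import torch.nn.functional as F

def kd_kl(target_logits, logits, tau=1.0):
    """tau^2-scaled KL between softened distributions (Hinton et al., 2015)."""
    return tau ** 2 * F.kl_div(F.log_softmax(logits / tau, dim=1),
                               F.softmax(target_logits / tau, dim=1),
                               reduction="batchmean")

def trikd_step(s_logits, t_logits, a_logits, labels, w, tau=1.0):
    """One plausible arrangement of the six weighted terms; w maps 1..6 to
    floats, e.g. w = {i: 1.0 for i in range(1, 7)} before the first decay."""
    a = a_logits.detach()                                     # anchor is frozen
    loss_student = (w[1] * F.cross_entropy(s_logits, labels)          # task label
                    + w[2] * kd_kl(t_logits.detach(), s_logits, tau)  # T -> S (online)
                    + w[3] * kd_kl(a, s_logits, tau))                 # A -> S
    loss_teacher = (w[4] * F.cross_entropy(t_logits, labels)
                    + w[5] * kd_kl(s_logits.detach(), t_logits, tau)  # S -> T (mutual)
                    + w[6] * kd_kl(a, t_logits, tau))                 # A constrains T
    return loss_student, loss_teacher
```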
**ImageNet**(Deng et al., 2009) consists of 1.28 million training images and 50k validation images from 1000 categories. Following the mainstream settings, all methods are trained on the entire training set and evaluated on the single-crop validation set. The input image resolution is \(224\times 224\) for both training and evaluation. We use resnet34 as teacher and resnet18 as student. We train all the models for 100 epochs. The initial learning rate is 0.1 and is decayed by a factor of 10 at 30, 60, and 90 epochs, respectively. We run experiments on one Tesla-V100 GPU with a batch size of 256. An SGD optimizer with a 0.0001 weight decay and 0.9 momentum is adopted. Due to limited resources, we simply set \(w_{1}=w_{2}=w_{3}=w_{4}=w_{5}=w_{6}=1\), and \(\tau=1\).
**CASIA-WebFace**(Yi et al., 2014) consists of 494,414 face images from 10,575 identities. Besides the full training set, two subsets of 50k and 150k images are randomly selected for efficient training. **MegaFace**(Kemelmacher-Shlizerman et al., 2016) dataset is used for testing, which contains 1M images of 60k identities as the gallery set and 100k images of 530 identities from FaceScrub as the probe set. For better stability of training, Arcface loss (Deng et al., 2019) used in MobileFaceNet is replaced with AM-Softmax loss (Wang
Figure 4: Exploring how to arrange \(\mathrm{T}\) and \(\mathrm{A}\) to get a lower-variance \(\mathrm{S}\). (a) and (b) reveal the variance of the target model’s losses under different conditions. There are basically four valid combinations (_i.e._ M1-M4) in terms of two main factors: the way model \(\mathrm{S}\) learns from model \(\mathrm{T}\) (standard offline distillation or online mutual learning), and whether model \(\mathrm{T}\) learns from model \(\mathrm{A}\). Online denotes that the two networks learn from each other step by step during the training process. **(a)** illustrates that online mutual learning makes an important contribution to decreasing the variance, and M4 can attain lower variance when the size of model \(\mathrm{A}\) is smaller than model \(\mathrm{T}\). **(b)** demonstrates that M4 achieves the lowest variance under all the experimental settings compared with standard distillation (M0 and M1). Dataset: Webface.
et al., 2018) in our experiments. Following the work on the AM-Softmax loss, the faces are aligned and cropped to a size of \(112\times 96\). For optimization, SGD with momentum 0.9 is used and the batch size is 256. All the models are trained for 40k iterations. The learning rate starts from 0.1 and linearly decreases to 0. The weight-decay setting is kept the same as in (Chen et al., 2018).
## Appendix C More experiments
### Comparing with TAKD
Large models tend to generalize better. However, existing studies (Mirzadeh et al., 2020; Zhu and Wang, 2021; Cho and Hariharan, 2019) have shown that in knowledge distillation, the performance of the student would indeed deteriorate when the capacity of the teacher increases. To boost the performance of the student when the capacity gap between the teacher and the student is large, TAKD (Mirzadeh et al., 2020) proposed to bridge the gap by introducing intermediate-sized models named teacher assistants. Both TAKD and our TriKD attempt to reduce the difficulty for the student to mimic the teacher. However, TAKD treats learning difficulty as an inherent property of teacher model capacity, _i.e._, larger teachers are inherently harder, and smaller teachers are easier. In contrast, we believe that a given network architecture with fixed capacity should be able to fit both hard and easy functions, and we can make a large teacher still easy to mimic by deliberately making the function it expresses easy; the reason why a large teacher usually fails in existing distillation frameworks is that the teacher spontaneously learns to express sophisticated functions when trained without constraint. This is easy to understand when considering the teacher model's function identity: with larger capacity, the larger teacher should be able to easily fit the same function as a smaller teacher does, and thus in distillation a student supervised by a larger teacher should at least perform no worse than one supervised by a smaller teacher. Here we also provide an experiment to compare our TriKD with TAKD. The experiment is conducted on CIFAR100. For a fair comparison, following TAKD, we use resnet8 as the student and resnet110 as the teacher, and we use stochastic gradient descent with Nesterov momentum of 0.9 and a learning rate of 0.1 for 150 epochs. We decrease the learning rate to 0.01 at epoch 80 and to 0.001 at epoch 120. Weight decay is set to 0.0001. The result is shown in Table 8. It shows that our TriKD consistently outperforms TAKD with different teacher assistant sizes.
We further emphasize that our proposed TriKD is a general knowledge distillation method rather than one specially designed for situations where the capacity gap between the teacher and the student is large, like (Mirzadeh et al., 2020; Cho and Hariharan, 2019; Zhu and Wang, 2021). The mimicking difficulty is a ubiquitous problem in knowledge distillation rather than exclusive to teacher-student pairs with an extremely large capacity gap. Experiments also show that this method can greatly benefit the student even when the teacher is relatively small.
\begin{table}
\begin{tabular}{c|c c c c|c} \hline \hline \multirow{2}{*}{KD} & \multicolumn{4}{c|}{TAKD} & \multirow{2}{*}{TriKD} \\ \cline{2-2} \cline{4-5} & TA=56 & TA=32 & TA=20 & TA=14 \\ \hline
61.41 & 61.47 & 61.55 & 61.82 & 61.50 & 62.79 \\ \hline \hline \end{tabular}
\end{table}
Table 8: Compare TriKD with KD (Hinton et al., 2015) and TAKD (Mirzadeh et al., 2020). Dataset: CIFAR100. Student=resnet8, Teacher=resnet110. The results of KD and TAKD are quoted from the original TAKD paper.
Figure 5: Expected Calibration Error for different model widths. We explore the Expected Calibration Error in terms of network width on the face recognition task (Webface) and the image classification task (CIFAR100); all the models are trained for enough epochs to ensure sufficient convergence. The backbones are MobileFaceNet and Resnet18, respectively, and we applied various widths including \(0.5X,1.0X,2.0X,3.0X,4.0X\).
### Additional Results on Image Classification
We provide some additional results with more architectures on image classification. For the experiments in this section, we set the teacher to be 2 times as wide as the student. For experiments on ImageNet, all methods are trained for 120 epochs. For the hyper-parameters, SGD with momentum 0.9 is used for optimization and the batch size is 256. The learning rate starts from 0.1 and linearly decreases to 0. The weight decay is set as \(5e-4\) for ShuffleNet V2 and \(1e-4\) for ResNet18. For experiments on CIFAR100, all models are trained for 200 epochs. As for the hyper-parameters, SGD with momentum 0.9 is used for optimization and the batch size is 128. The learning rate starts from 0.1 and is multiplied by 0.1 at 60, 120 and 180 epochs. The weight decay is set as \(5e-4\). Table 9 shows the result.
### Impact of Teacher Size
The teacher, a large network with high fitting ability, represents the potential upper limit of the student's performance. Without losing flexibility, it can be set to any desired model size no less than the target model size. Table 10 shows the results of our TriKD with the teacher in different model sizes, _i.e._, \(0.5\times\), \(1.0\times\), \(2.0\times\) of the base network size. The experiment is conducted on face recognition and the network architecture is MobileFaceNet. As can be seen, our learning mechanism is stable w.r.t. different sizes of the teacher model, and can thus flexibly adapt to different training resources and better meet the trade-off between computational cost and performance. More specifically, a larger teacher \(\mathrm{T}\) induces a better student \(\mathrm{S}\), which is consistent with our motivation and demonstrates that a larger model \(\mathrm{T}\) has an edge in exploring generalizable solutions.
### Iterating for different numbers of generations
As mentioned in 3.2, we adopt a curriculum strategy to obtain an appropriate anchor model for TriKD. Here we investigate how many generations are needed for this process. The experiment is conducted on CIFAR100. Table 11 shows the results. Generation 0, as mentioned in 3.2, is a plain online distillation process without using an anchor. The result shows that it generally takes 1 to 2 generations (generation 0 not included) for the process to converge, and at that time the student generally reaches a good performance. We empirically find that the first and the second generations are the most likely to bring in improvement, and the following generations tend to bring in less, if any. Specifically, we
\begin{table}
\begin{tabular}{c|c c c c|c c c|c} \hline \hline \multirow{2}{*}{Student} & \multicolumn{4}{c|}{Rank-1 identification rate of S(\%)} & \multicolumn{4}{c|}{Rank-1 identification rate of \(\mathrm{T}\)(\%)} & \multirow{2}{*}{Madds} \\ \cline{2-3} \cline{5-8} & Baseline & \(\mathrm{T}\)=0.5X & \(\mathrm{T}\)=1.0X & \(\mathrm{T}\)=2.0X & \(\mathrm{T}\)=0.5X & \(\mathrm{T}\)=1.0X & \(\mathrm{T}\)=2.0X \\ \hline
0.50X & 64.0 & 63.2 & 67.6 & 69.0 & 65.0 & 73.5 & 75.8 & 50M \\
0.75X & 68.3 & 68.6 & 74.0 & 75.7 & 68.0 & 77.1 & 79.7 & 109M \\
1.00X & 69.7 & 71.7 & 77.4 & 79.3 & 68.8 & 77.8 & 80.7 & 189M \\
1.25X & 69.4 & 73.2 & 79.5 & 81.5 & 68.7 & 78.4 & 81.6 & 292M \\
1.50X & 68.3 & 74.6 & 81.0 & 82.4 & 69.6 & 78.5 & 81.5 & 487M \\ \hline \hline \end{tabular}
\end{table}
Table 10: Performance of target student (S) w.r.t. different model size of online teacher (T), e.g. 0.5X/1.0X/2.0X. Baseline means trained with only hard label. Dataset: WebFace. Network: MobileFaceNet.
\begin{table}
\begin{tabular}{c|c c c c|c c c c} \hline \hline & \multicolumn{4}{c|}{CIFAR100} & \multicolumn{4}{c}{ImageNet} \\ \hline Backbone & MobileV2 & ResNet18 & ResNet34 & ResNet50 & MobileV1 & MobileV2 & ShuffleV2 & ResNet18 \\ (Madds) & (90M) & (555M) & (1.16G) & (1.30G) & (569M) & (300M) & (147M) & (2.34G) \\ \hline Baseline & 72.0 & 77.4 & 77.9 & 77.4 & 71.8 & 72.6 & 68.9 & 71.0 \\ TriKD(Ours) & 75.1 & 79.3 & 80.3 & 79.4 & 74.2 & 73.8 & 70.6 & 72.7 \\ \hline \hline \end{tabular}
\end{table}
Table 9: Additional results of TriKD w.r.t. different network architectures. Teacher is two times as wide as the student.
Figure 6: Teacher-student behavior similarity w.r.t. generations. Generation 0 is vanilla online knowledge distillation without anchor. The networks are trained on the training set of CIFAR100, and KL-Divergence is measured on the test set of CIFAR100. Legend format: student (teacher).
attribute the improvement in the first and later generations to different mechanisms. The first generation's improvement is due to the introduction of the triplet relationship, while the later generations improve the student by using a more accurate anchor; the former is qualitative, and the latter is mainly quantitative. As shown in Fig.6, from a teacher-student behavior similarity perspective, the KL-divergence between the teacher and the student drops dramatically after generation 1, but then drops slowly in the following generations. This means that it is the triplet relationship, rather than the curriculum process, that makes the mimicking easier. On the other hand, from the variance-bias perspective (see A), the curriculum learning can be identified as a means to gradually decrease the bias of the anchor.
\begin{table}
\begin{tabular}{c|c c c c c c c} \hline \hline \multirow{2}{*}{Generations} & resnet56 & resnet110 & resnet110 & wrn-40-2 & wrn-40-2 & resnet32x4 & vgg13 \\ & resnet20 & resnet20 & resnet32 & wrn-40-1 & wrn-16-2 & resnet8x4 & vgg8 \\ \hline
0 & 71.22 & 71.47 & 73.52 & 74.73 & 75.41 & 75.36 & 74.58 \\
1 & 71.76 & 71.82 & 73.99 & 75.35 & 76.94 & 76.27 & 75.35 \\
2 & 72.34 & 72.24 & 74.31 & 75.87 & 76.94 & 76.82 & 75.35 \\
3 & 72.34 & 72.55 & 74.31 & 75.96 & 76.94 & 76.82 & 75.35 \\
4 & 72.34 & 72.55 & 74.31 & 75.96 & 76.94 & 76.82 & 75.35 \\ \hline \hline \end{tabular}
\end{table}
Table 11: Best accuracy(%) achieved by student after each generation. Except generation 0, where we use vanilla online distillation to train an initial anchor, for all generations we use the last-generation student as the anchor, and use randomly initialized student and teacher to form the triplet relationship. The experiment is conducted on CIFAR100. |
2309.12317 | Using a Catenary Trajectory to Reduce Wellbore Friction in Horizontal
Extended Reach Drilling | Wellbore friction is one of the biggest concerns when drilling due to its
relation to the total cost. The catenary concept was introduced to reduce
wellbore friction, but it requires detailed analyses. This project would fill
this gap. A catenary shape is simply the natural shape of a rope, chain, or
drill string. The drill string will then hang freely inside the wellbore.
Perfectly, there should be no contact between the hole and the string, and thus
no friction. Torque and drag should be minimized this way. A case study is
introduced to examine the outcome between Catenary Trajectory Design and
traditional 2D Arc design. The calculation procedure of Catenary Trajectory and
2D Arc Design can be found in an MS Excel spreadsheet which is easy to use and
reliable for designing catenary well trajectories for extended-reach wells. | Vu Nguyen | 2023-07-23T22:43:51Z | http://arxiv.org/abs/2309.12317v1 | **Using a Catenary Trajectory to Reduce Wellbore Friction in Horizontal Extended Reach Drilling**
## Abstract
Wellbore friction is one of the biggest concerns when drilling due to its relation to the total cost. The catenary concept was introduced to reduce wellbore friction, but it requires detailed analyses. This project would fill this gap. A catenary shape is simply the natural shape of a rope, chain, or drill string. The drill string will then hang freely inside the wellbore. Perfectly, there should be no contact between the hole and the string, and thus no friction. Torque and drag should be minimized this way. A case study is introduced to examine the outcome between Catenary Trajectory Design and traditional 2D Arc design. The calculation procedure of Catenary Trajectory and 2D Arc Design can be found in an MS Excel spreadsheet which is easy to use and reliable for designing catenary well trajectories for extended-reach wells.
## Introduction
Climate change is one of the most concerning topics in our world today. Carbon Capture, Utilization, and Storage is not the only solution to combat this issue (Nguyen et al., 2021; Olayiwola et al., 2023); there are many other emerging technologies in the campaign to reduce greenhouse gas emissions, such as hydrogen and renewable energy. Alongside increasing the efficiency of existing oil wells through enhanced oil recovery (Carrera et al., 2023) and maintaining the condition of the wells (Olayiwola et al., 2022; Nguyen et al., 2022; Mahmood et al., 2023; Olayiwola et al., 2022), extended-reach drilling activities also play an essential role in making the energy transition smoother by meeting global oil and gas demand. One of the innovative methods in drilling technology is using a catenary trajectory.
The extended reach drilling has been studied by many researchers (El Sabeh et al., 2023; Huang and Guo, 2022; Hussain et al., 2021). El Sabeh et al. (2023) reviewed the issues of extended reach drilling, including high torque and drag, poor hole cleaning, bottom hole assembly design, and hydraulic and Equivalent Circulating Density (ECD) management, while Huang and Gao (2022) analyzed the drilling difficulty and still emphasized that high friction and drag is one of the biggest challenges. Hussain et al. (2021) introduced the help of technologies to intervene in extended-reach drilling activities, such as using mapping while drilling, Magnetic Resonance while drilling service, and advanced polyglycol water-based mud systems.
Well reach has significantly increased in the past decades. In the 1980s, a 3 km step out was common. This increased in the 1990s, when 10 km was targeted as a likely step out. Ryan et al. (1995) conducted a review of technology in 1995 and concluded that a 10 km step out is feasible. This was confirmed a few years later at Wytch Farm (2000), where it was also demonstrated that shallow extended-reach wells could be drilled.
Mason and Judzis (1998) also thoroughly reviewed and identified technological constraints for reaching up to 18 km: although such reach will require fit-for-purpose equipment, it should become possible within a few years. This paper primarily examines the potential for a catenary build section to achieve these goals.
The catenary trajectory design was first recommended to the oil industry by McClendon and Anders (1985). Several attempts have been tested, but the method has had limited application. Aadnoy and Andersen (1998, 2001) developed the general catenary solution, which is suitable for any inclination. Ma et al. (1998) introduced field cases for catenary wells drilled in China. Torque and drag are always a big concern for all types of extended-reach wells.
Planeix and Fox (1979) published a method to calculate the final angle and direction of a well that turns and builds an angle to reach an expected target from a known surface location. Unfortunately, their method contains only one turn rate and must decide the turn-end point through calculation. McMillian (1981) implemented a technique for deciding the lead angle to offset the bit-walk effects. Maidla and Sampaio (1989) calculated the bit-walk rate and the lead angle based on a rock-bit interaction model. The model was validated with data from 15 directional wells, and the results show that the bit-walk prediction was good for most of the well trajectories, but additional field examples are needed to further justify the method. The above discussion shows that modelling 3D bit-walk paths at a fundamental level is not yet possible. A set of characteristic parameter values determines the shape of a well profile, and the crucial point is that the best solution has to include the wellbore friction as the deciding criterion. With the catenary shape, the drill string will tend to stand off the borehole wall with no contact between them, so drag and torque can be minimized.
This paper presents a 2D model of catenary trajectory design, which is easy and convenient to use. The solutions of the catenary design are in closed form and do not demand thorough numerical estimations. A traditional arc well design is also included to compare the hook load with the catenary trajectory well design.
### Mathematical model (Method description)
In a 2D Cartesian Coordinate system, a simple Catenary Curve can be given as a hyperbolic function.
\[y=a\cosh\left(\frac{x}{a}\right)\qquad(1)\]
Where a is the intercept of the catenary curve with the y axis
The vertical displacement V in the catenary section can be written in terms of the horizontal displacement S as the following equation:
\[V=V_{end}-\left\{\frac{a}{2}\left[e^{\frac{S-S_{end}}{a}}+e^{-\frac{S-S_{end}}{a}}\right]-a\right\}\ \ (2)\]
Where \(V_{end}\) is the total vertical displacement in the catenary section
\(S_{end}\) is the total horizontal displacement in the catenary section.
The curvature of catenary section, C, can be expressed as:
\[C=\frac{-\frac{1}{2a}\left[e^{\frac{S-S_{end}}{a}}+e^{-\frac{S-S_{end}}{a}}\right]}{\left(1+\frac{1}{4}\left[e^{\frac{S-S_{end}}{a}}-e^{-\frac{S-S_{end}}{a}}\right]^{2}\right)^{\frac{3}{2}}} \tag{3}\]
The radius of curvature at top of catenary section, R, can be estimated by:
\[R=\frac{1}{C}\ \ \ (4)\]
The build rate of the arc section or at the top of catenary section in degree per 100 feet, B, is:
\[B=5730C\ \ \ (5)\]
The inclination angle \(\theta\) can be calculated as:
\[\theta=\frac{\pi}{2}-\alpha=\frac{\pi}{2}-\arctan\frac{dV}{dS}\ \ (6)\]
Where \(\alpha\) is slope angle
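As a concrete companion to Eqns (2)-(6), the sketch below evaluates the catenary section numerically using the hyperbolic identities for the exponential sums. It is a minimal illustration rather than the spreadsheet implementation; the sampling density and the catenary constant \(a\) (which in practice is solved from Eqn (2) so that V = 0 at S = 0) are assumptions.

```python
import numpy as np

def catenary_section(a, s_end, v_end, n=200):
    """Evaluate Eqns (2)-(6) along the catenary section.
    a     : catenary constant from Eqn (1), ft
    s_end : total horizontal displacement of the section, ft
    v_end : total vertical displacement of the section, ft
    """
    s = np.linspace(0.0, s_end, n)
    u = (s - s_end) / a
    v = v_end - (a * np.cosh(u) - a)                           # Eqn (2)
    curv = -np.cosh(u) / (a * (1.0 + np.sinh(u) ** 2) ** 1.5)  # Eqn (3)
    radius = 1.0 / curv                                        # Eqn (4)
    build = 5730.0 * curv                                      # Eqn (5), deg/100 ft
    incl = np.degrees(np.pi / 2 - np.arctan(np.gradient(v, s)))  # Eqn (6)
    return s, v, curv, radius, build, incl
```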
In a 2D Cartesian Coordinate system, the Kick-Off Point (KOP) for the arc design can be determined by:
\[VD_{KOP}=VD_{target\ base}-\frac{5730}{B_{min}}\left(\sin(I_{f})-\sin(I_{i})\right) \tag{7}\]
Where \(I_{f}\) is the inclination angle at the target base; \(I_{i}\) is the inclination angle at the KOP
For upper curve section:
\[VD_{2}=VD_{1}+\frac{5730}{B_{high}}[\sin(I_{2})-\sin(I_{1})]\ \ \ (8)\]
\[HD_{2}=HD_{1}+\frac{5730}{B_{high}}[\cos(I_{1})-\cos(I_{2})]\ \ (9)\]
\[MD_{2}=MD_{1}+100\left[\frac{I_{2}-I_{1}}{B_{high}}\right]\ \ (10)\]
Where VD is vertical displacement; HD is horizontal displacement; MD is measured depth.
For tangent section:
\[VD_{2}=VD_{1}+\Delta MDcos(I_{tan})\ \ (11)\]
\[HD_{2}=HD_{1}+\Delta MDsin(I_{tan})\ \ (12)\]
\[MD_{2}=MD_{1}+\Delta MD\ \ \ (13)\]
For lower curve section:
\[VD_{2}=VD_{1}+\frac{5730}{B_{low}}[\sin(I_{2})-\sin(I_{1})]\ \ (14)\]
\[HD_{2}=HD_{1}+\frac{5730}{B_{low}}[\cos(I_{1})-\cos(I_{2})] \tag{15}\]
\[MD_{2}=MD_{1}+100\left[\frac{I_{2}-I_{1}}{B_{low}}\right] \tag{16}\]
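The survey-station recursions of Eqns (7)-(16) translate directly into code. The sketch below is a minimal illustration (angles in degrees, build rates in degrees per 100 ft) rather than the spreadsheet implementation referenced later.

```python
import numpy as np

def kop_depth(vd_target, b_min, i_f, i_i=0.0):
    """Eqn (7): vertical depth of the Kick-Off Point."""
    return vd_target - (5730.0 / b_min) * (np.sin(np.radians(i_f))
                                           - np.sin(np.radians(i_i)))

def curve_station(vd1, hd1, md1, i1, i2, build):
    """Eqns (8)-(10) / (14)-(16): step along a circular arc from i1 to i2."""
    r = 5730.0 / build                   # radius of curvature, ft
    vd2 = vd1 + r * (np.sin(np.radians(i2)) - np.sin(np.radians(i1)))
    hd2 = hd1 + r * (np.cos(np.radians(i1)) - np.cos(np.radians(i2)))
    md2 = md1 + 100.0 * (i2 - i1) / build
    return vd2, hd2, md2

def tangent_station(vd1, hd1, md1, i_tan, dmd):
    """Eqns (11)-(13): step along the straight (tangent) section."""
    return (vd1 + dmd * np.cos(np.radians(i_tan)),
            hd1 + dmd * np.sin(np.radians(i_tan)),
            md1 + dmd)
```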
## Results
### A Case Study
The data used for designing arc trajectory and catenary trajectory is presented in Table 1 and 2, respectively.
The design for catenary trajectory includes three parts: Arc section, Catenary section, and slant section. The Kick of Point (KOP) is required before starting for Arc, Catenary, and Slant section design. The arc length is calculated based on the radius of curvature and inclination angle at the top of the catenary section. The most complicated design lies in the catenary section where inclination angle, curvature, and radius are in Eqn.6, Eqn. 3, and Eqn.4, respectively, are required to design the catenary section. The Vertical, Horizontal, North, and East Displacement calculations, which can be found in the Excel Spreadsheet, are also computed to plot the trajectory. The plot of Vertical Displacement versus Horizontal Displacement and North Displacement versus East Displacement are presented in Figures 1 and 2, respectively.
\begin{table}
\begin{tabular}{|p{113.8pt}|p{113.8pt}|p{113.8pt}|} \hline Description & Value & Unit \\ \hline Target Depth & 12500 & ft \\ \hline Azimuth & 45 & degree \\ \hline Build Rate & 0.691 & degree/100 ft \\ \hline Horizontal wellbore length & 7500 & ft \\ \hline Inclination angle at target base & 90 & degree \\ \hline \end{tabular}
\end{table}
Table 1: Data for arc trajectory design
\begin{table}
\begin{tabular}{|p{113.8pt}|p{113.8pt}|p{113.8pt}|} \hline Description & Value & Unit \\ \hline Total measured depth & 24000 & ft \\ \hline Target depth & 12500 & ft \\ \hline Vertical displacement in the catenary section V\({}_{end}\) & 2000 & ft \\ \hline Horizontal displacement in the catenary section S\({}_{end}\) & 4000 & ft \\ \hline Azimuth & 45 & degree \\ \hline \end{tabular}
\end{table}
Table 2: Data for catenary trajectory design
Similarly, the Vertical Displacement versus Horizontal Displacement and North Displacement versus East Displacement are presented in Figures 3 and 4, respectively.
To allow a better comparison, the data of the two design methods are plotted together, as shown in Figures 5 and 6.
Then, a total hook load (tension at surface, T) analysis of the two methods is performed to investigate the reduction of wellbore friction in the catenary design.
### For 2D Arc design:
The axial compressive force at the heel can be expressed by the following equation:
\[F_{\pi/2}=\mu W_{h} \tag{17}\]
Figure 5: Trajectory profile comparison between two methods for Vertical Displacement versus Horizontal Displacement
Figure 6: Trajectory profile comparison between two methods for North Displacement versus East Displacement
The axial force in the curve section can be computed as:
\[F_{O}=F_{\frac{\pi}{2}}+w_{c}R(\mu+1) \tag{18}\]
The total hook load in Figure 7 is the combination of the axial force in the curve section and the vertical force \(W_{V}\) generated by the pipe weight:
\[T=F_{o}+W_{V} \tag{19}\]
**For Catenary Trajectory Design**:
The axial compressive force at the heel has a similar expression to that in the arc design:
\[F_{\pi/2}=\mu W_{h} \tag{20}\]
The influence of catenary shape on axial compressive force at the heel can be expressed as \(F_{ct}\):
\[F_{ct}=F_{\frac{\pi}{2}}/\sin(I) \tag{21}\]
Figure 7: Arc design diagram
The axial force in the catenary can be computed as:
\[F_{O}=F_{ct}+w_{c}R[\sin(I)+\mu(1-\cos(I))] \tag{22}\]
The total hook load is the combination of the axial force in the curve section and the vertical force \(W_{V}\) generated by the pipe weight:
\[T=F_{o}+W_{V} \tag{23}\]
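As a sanity check, Eqns (17)-(23) can be evaluated directly. The minimal sketch below uses the section lengths, pipe weights, friction coefficients, and the 42.99 degree heel inclination from Tables 3 and 4, interpreting \(W_{h}\) as the total weight of the horizontal section (\(w_{h}\) times its length); it reproduces the tabulated hook loads to within rounding of the inputs.

```python
import numpy as np

# Pipe weights (lbf/ft) and friction coefficients from Table 3
mu_h, w_h = 2.0, 16.25     # horizontal section
mu_c, w_c = 0.35, 91.69    # curve section
w_v = 19.50                # vertical section

def hookload_arc(len_h, radius, len_v):
    f_heel = mu_h * w_h * len_h                    # Eqn (17), W_h = w_h * length
    f_kop = f_heel + w_c * radius * (mu_c + 1.0)   # Eqn (18), curve friction as in Table 4
    return f_kop + w_v * len_v                     # Eqn (19)

def hookload_catenary(len_h, radius, len_v, incl_deg):
    i = np.radians(incl_deg)
    f_heel = mu_h * w_h * len_h                    # Eqn (20)
    f_ct = f_heel / np.sin(i)                      # Eqn (21)
    f_kop = f_ct + w_c * radius * (np.sin(i) + mu_c * (1.0 - np.cos(i)))  # Eqn (22)
    return f_kop + w_v * len_v                     # Eqn (23)

print(hookload_arc(7500, 8292, 4207))              # ~1,352,000 lbf
print(hookload_catenary(7430, 9228, 4208, 42.99))  # ~1,093,000 lbf
```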
#### Hook load Analysis for the case study
The data for the pipe weight and friction coefficient used to calculate the hook load are given below in Table 3.
#### Sensitivity Analysis:
Two uncertain parameters can be varied: the pipe weight and the friction coefficient. The pipe weight of the curve section varies from 85 lbf/ft to 95 lbf/ft, while the friction coefficient of the horizontal section ranges from 1.5 to 2.5. An increase in the friction coefficient causes an increase in hook load. In fact,
\begin{table}
\begin{tabular}{|l|l|l|l|} \hline Data Input & Arc Design & Catenary Design & Unit \\ \hline Horizontal friction coefficient, \(\mu\) & 2 & 2 & \\ \hline Horizontal pipe weight section, \(w_{h}\) & 16.25 & 16.25 & lbf/ft \\ \hline Horizontal length & 7500 & 7430 & ft \\ \hline Curve friction coefficient, \(\mu_{c}\) & 0.35 & 0.35 & \\ \hline Curve pipe weight section, \(w_{c}\) & 91.69 & 91.69 & lbf/ft \\ \hline Radius of curvature, R & 8,292 & 9,228 & ft \\ \hline Inclination angle, \(I\) & & 42.99 & degree \\ \hline Vertical pipe weight section, \(w_{v}\) & 19.50 & 19.50 & lbf/ft \\ \hline Vertical length & 4,207 & 4,208 & ft \\ \hline Vertical force, \(W_{V}\) & 82,049 & 82,057 & lbf \\ \hline Axial compressive force at heel, \(F_{\pi/2}\) & 243,750 & 241,475 & lbf \\ \hline Catenary force, \(F_{ct}\) & & 354,162 & lbf \\ \hline Axial compressive force in string at KOP, \(F_{o}\) & 1,270,187 & 1,010,575 & lbf \\ \hline Tension at surface, T & 1,352,236 & 1,092,633 & lbf \\ \hline \end{tabular}
\end{table}
Table 4: Calculation of hook load
\begin{table}
\begin{tabular}{|l|l|l|l|} \hline Parameter & Vertical Section & Curve Section & Horizontal Section \\ \hline Pipe weight (lbf/ft) & 19.5 & 91.69 & 16.25 \\ \hline Friction Coefficient & & 0.35 & 2 \\ \hline \end{tabular}
\end{table}
Table 3: The pipe weight and friction values for each section
with a pipe weight of 85 lbf/ft and a friction coefficient of 1.5, the total hook load for the catenary design is about 956,199 lbf, while it is approximately 973,907 lbf with a friction coefficient of 1.6. Also, the total hook load increases with the pipe weight: it varies from 956,189 lbf to 1,027,789 lbf over the curve-section pipe weight range of 85 to 95 lbf/ft. The total hook load plots for different friction coefficients and pipe weights are presented in Figures 8 to 18.
Figure 10a: Hook load profile with \(\mu\)=1.7; Figure 10b: Percentage of hook load difference
Figure 12a: Hook load profile with \(\mu\)=1.9; Figure 12b: Percentage of hook load difference
Figure 14a: Hook load profile with \(\mu\)=2.1; Figure 14b: Percentage of hook load difference
Figure 16a: Hook load profile with \(\mu\)=2.3; Figure 16b: Percentage of hook load difference
Figure 17a: Hook load profile with \(\mu\)=2.4; Figure 17b: Percentage of hook load difference
## Discussion
Based on Figure 6, the two methods have a similar profile of North Displacement versus East Displacement because they have similar horizontal displacements. The difference in vertical displacement creates the distinction between the two methods.
The total hook load for the arc trajectory design is 1,352,236 lbf, which is far greater than the total hook load of the catenary trajectory design (1,092,633 lbf). Based on the outcome in Table 4, the hook load difference is caused by friction in the curve section. In fact, the axial compressive force in the string at the KOP for the arc trajectory is 1,270,187 lbf compared to 1,010,575 lbf for the catenary trajectory design. The difference is 23.8%.
## Conclusions
Several conclusions can be drawn from this study:
1. The mathematical model applied in this study is simple and straightforward. It uses closed-form equations and removes the complexity of numerical calculation.
2. The difference in vertical displacement creates a distinction between the two methods, which causes the disparity in total hook load.
3. The outcome has shown that the catenary trajectory design reduces the total hook load compared to the traditional arc design. The reduction is about 23.8%.
## Acknowledgement
The author is very grateful for Dr. Guo's guidance in completing this paper.
2302.10672 | Importance of methodological choices in data manipulation for validating
epileptic seizure detection models | Epilepsy is a chronic neurological disorder that affects a significant
portion of the human population and imposes serious risks in the daily life of
patients. Despite advances in machine learning and IoT, small, nonstigmatizing
wearable devices for continuous monitoring and detection in outpatient
environments are not yet available. Part of the reason is the complexity of
epilepsy itself, including highly imbalanced data, multimodal nature, and very
subject-specific signatures. However, another problem is the heterogeneity of
methodological approaches in research, leading to slower progress, difficulty
comparing results, and low reproducibility. Therefore, this article identifies
a wide range of methodological decisions that must be made and reported when
training and evaluating the performance of epilepsy detection systems. We
characterize the influence of individual choices using a typical ensemble
random-forest model and the publicly available CHB-MIT database, providing a
broader picture of each decision and giving good-practice recommendations,
based on our experience, where possible. | Una Pale, Tomas Teijeiro, David Atienza | 2023-02-21T13:44:13Z | http://arxiv.org/abs/2302.10672v1 | Importance of methodological choices in data manipulation for validating epileptic seizure detection models
###### Abstract
Epilepsy is a chronic neurological disorder that affects a significant portion of the human population and imposes serious risks in the daily life of patients. Despite advances in machine learning and IoT, small, non-stigmatizing wearable devices for continuous monitoring and detection in outpatient environments are not yet available. Part of the reason is the complexity of epilepsy itself, including highly imbalanced data, multimodal nature, and very subject-specific signatures. However, another problem is the heterogeneity of methodological approaches in research, leading to slower progress, difficulty comparing results, and low reproducibility. Therefore, this article identifies a wide range of methodological decisions that must be made and reported when training and evaluating the performance of epilepsy detection systems. We characterize the influence of individual choices using a typical ensemble random-forest model and the publicly available CHB-MIT database, providing a broader picture of each decision and giving good-practice recommendations, based on our experience, where possible.
Methodological choices, machine learning, seizure detection, epilepsy, data selection, cross-validation approaches, performance metrics, reproducibility, comparability
## 1 Introduction
In recent years, advances in signal processing, machine learning algorithms, the Internet of Things (IoT), and wearable devices have enabled a variety of continuous monitoring applications in many domains, particularly health monitoring. One such example is epilepsy detection, with the ultimate goal of having small, non-stigmatizing, wearable devices for long-term epilepsy monitoring in patients' homes and everyday life, rather than limited to in-hospital monitoring. Epilepsy is a chronic neurological disorder characterized by the unexpected occurrence of seizures, imposing serious health risks and many restrictions on daily life. It affects a significant portion of the world's population (0.6 to 0.8%) [1], of which one third of patients still suffer from seizures despite pharmacological treatments [2]. Thus, there is a clear need for solutions that allow continuous unobstructed monitoring and reliable detection (and ideally prediction) of seizures [3], [4]. Furthermore, these solutions will be instrumental in the design of new treatments, assisting patients in their daily lives, and preventing possible accidents. This need is also evident in the growing number of publications on seizure detection methods [5], [6] and wearable devices [7], [8].
However, although many studies report impressive levels of accuracy via machine learning (ML) methods, the widespread adoption of commercial technology has not yet occurred. The reasons for this are many and include the specificities of epilepsy itself. For example, to properly characterize epileptic seizures, recordings must be continuous, often lasting days and leading to extremely unbalanced datasets. This imbalance must be taken into account when preparing the data set, splitting it into training and testing, training epilepsy detection models, and reporting final performance values. Another challenge is the fact that epilepsy is a holistic phenomenon affecting many signal modalities, and thus, to get a full picture, multimodal data are needed from several different sensors. How to efficiently process all this data and fuse information and predictions remains an open research topic [9]. Finally, seizures show highly personalized patterns, which require new methods of personalizing general models (that were developed from many subjects) using the characteristics of individual patients [10], [11].
The last reason for slower progress is that the way studies are designed, algorithms assessed, and results reported is very heterogeneous. It can be difficult to understand the level of evidence these studies provide [12] and it is also impossible to fairly compare the results. For example, it is very difficult to compare the performance of various systems when only two quantitative values are reported (e.g., sensitivity and specificity) and when the prior probabilities vary significantly (the a priori probability of a seizure is very low, which means that the assessment of background events dominates the error calculations) [13].
Thus, in this paper, we want to bring attention to a number of methodological choices which are usually underreported but ultimately can have a strong influence on system performance. These choices are necessarily made during data preparation, training, and also evaluation and reporting of the results.
The contributions of this work are summarized as follows:
* We identify a wide range of methodological decisions that must be made and reported when training and evaluating the performance of epilepsy detection systems.
* We characterize and assess the influence of individual choices using a typical ensemble random-forest model and the publicly available CHB-MIT database.
* We provide a broader picture of each decision and give good-practice recommendations where possible.
The remainder of the paper is organized as follows: Section 2 details the relevant methodological choices and their potential influence. Section 3 provides a description of the experimental setup used to evaluate the influence of these methodological choices and parameters. Section 4 presents the experimental results, while Section 5 comments on more broad and general observations on the presented results. It also presents certain methodological recommendations for the development of future epileptic seizure detection algorithms, as well as more general time series analysis applications. Section 6 concludes this work.
## 2 Methodological choices
There are many methodological choices to make when evaluating machine learning algorithms and systems in terms of their performance and suitability for real-life applications. These choices can significantly impact the performance and repeatability of such results in practice. In this section, we go through the most important choices, discussing data preparation, training and testing methodology, and performance measures, as listed in Table 1. We will later show how they influence the detection of epileptic seizures.
### Data preparation
An important part of evaluating machine learning algorithms is the data used to train and test the algorithm. A well-known practice is that training, validation, and test subsets must be chosen without overlap and be statistically independent to avoid the effect known as 'data leakage'. A less discussed question, however, is how representative the data we use are. With the increasing amount of big data collected using the Internet of Things (IoT) and wearable devices, big data sets are no longer rare. Such large datasets are incredibly valuable and essential for having more ML/AI-powered devices in everyday life, but they also bring certain challenges. Training on such a huge amount of data, especially for computationally demanding or memory-intensive algorithms, or without large computational resources, can be complex, slow, and even potentially infeasible. For this reason, a common approach is to create smaller subsets of available datasets.
In the case of epilepsy, it is characterized by recurrent but unpredictable electrical discharges in the brain. Epilepsy episodes can last from a few seconds to a few minutes. Overall, when looking at the recorded data, the percentage of seizure data is extremely small, commonly less than 0.5%. This huge imbalance in epilepsy recordings leads to the common choice of creating a data subset that contains all seizure signals but only a reduced amount of non-seizure signals. This step of generating smaller or even balanced datasets makes training simpler, performance reporting clearer, and speeds up the research process. Most papers tackling the epilepsy detection problem do not use whole long-term epilepsy recordings, but rather data subsets and also very rarely discuss the influence of this decision on their results.
In this paper, we address this question by testing several epilepsy subsets created from a main dataset. We evaluate the influence of using all or only some data samples, as well as the impact of the seizure to non-seizure imbalance ratio. We also test the influence of data splitting during the training and cross-validation folds and show that this choice can be very critical and make a big difference in whether the proposed algorithm will work in practice when all data are used, without the possibility of performing any selection.
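As an illustration of this subset-creation step, a minimal sketch is given below: it keeps every seizure sample and randomly subsamples the non-seizure class to a chosen imbalance factor. The factor of 10, the NumPy array inputs, and the sampling strategy are placeholders for illustration, not the exact subsets evaluated later.

```python
import numpy as np

def make_subset(features, labels, factor=10, seed=0):
    """Keep all seizure samples (label 1) and draw `factor` times as many
    non-seizure samples (label 0) at random; inputs are NumPy arrays."""
    rng = np.random.default_rng(seed)
    seiz = np.flatnonzero(labels == 1)
    non = np.flatnonzero(labels == 0)
    keep = rng.choice(non, size=min(len(non), factor * len(seiz)), replace=False)
    idx = np.sort(np.concatenate([seiz, keep]))   # preserve temporal order
    return features[idx], labels[idx]
```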
### Generalized vs. personalized models
In many applications where underlying data patterns are highly specific, such as in many biomedical use cases, there are two approaches to training: personalized and generalized training and models. Epilepsy is a good example of this, where underlying electroencephalography (EEG) patterns are highly variable between EEG channels, recording sessions, and subjects. Personalized training means that data from the same subject are used to train the model. This leads to as many ML models as subjects we have. Generalized training, on the other hand, leads to a single ML model for all subjects. To avoid data leakage (and enable comparison with personalized models), every subject has its own generalized model trained on all subjects' data except that test subject, which is also known as the leave-one-subject-out approach.
On one hand, personalized models can capture subject-specific patterns better, but are also trained on less data in total, which can sometimes be limiting, as some subjects have very few seizures recorded. On the other hand, generalized models are more complex to train as they are trained on more data, and can also be less subject-specific, but may be more interesting for building large-scale wearable outpatient systems.
Figure 1: Epilepsy model predictions example. Predictions without any post-processing and with two types of post-processing, as well as true labels, are shown. The distributions of false positives are of particular interest.
### _Respecting temporal data dependencies_
Another aspect of data that is commonly forgotten is that all data are recorded in time and that this sometimes imposes unavoidable statistical dependencies. Some underlying patterns that our ML algorithms can use can only exist in a certain order, and for this reason it might not be fair to use data from the future to train and then test on data that came before it. On one hand, doing so can miss some patterns useful for detection, and on the other hand, it can lead to results that are potentially infeasible for in-practice applications.
Two parts of the ML workflow must be considered in this case. First, data samples are often shuffled before training, whereas for temporal data this might not be advisable. Furthermore, if some statistical knowledge of the distribution and length of certain classes is available, it can be used to post-process predicted labels and lower the chance of misclassification. For example, in the case of epileptic seizures, it would not be realistic for an individual to suffer an epileptic seizure of 1-second duration every minute.
Second, the temporal aspect of the data is relevant when choosing the cross-validation (CV) approach. A common CV approach for personalized training is leave-one-seizure-out, which means that data from one seizure are left out for testing, while seizures that come both before and after it are used for training. In the time-series cross-validation (TSCV) approach [14, 15], in contrast, only previously acquired data can be used for training. This means that if files are ordered in time, for the first CV fold only one file is used for training and the one after it for testing. For each following CV fold, one more file is added to the training set (the file previously used for testing), and testing is done on the next available file, as sketched below. This CV approach is rarely used in the literature but is the only feasible approach for online data training (and inference) on wearable devices.
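A minimal sketch of this fold-generation scheme is shown below; `n_files` stands for the number of time-ordered recording files and is an illustrative placeholder.

```python
def tscv_folds(n_files):
    """Time-series CV: fold k trains on files 0..k-1 and tests on file k,
    so only previously acquired data are ever used for training."""
    for k in range(1, n_files):
        train_idx = list(range(k))
        test_idx = [k]
        yield train_idx, test_idx

# With 5 time-ordered files this yields 4 folds:
# train [0] -> test [1], train [0, 1] -> test [2], ...
for train_idx, test_idx in tscv_folds(5):
    print("train:", train_idx, "test:", test_idx)
```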
### _Data segmentation_
Typically, features are extracted from fixed-size windows of data, calculated repeatedly with a 'moving window' that shifts by a chosen step size. Two parameters have to be decided here: the window size (WS) and the window step size (WSS) by which the feature extraction window is moved. Choosing a larger window size might be necessary when extracting frequency information, but it also limits the possibility of detecting very short patterns. Similarly, a smaller step size can decrease detection latency, but it increases the computational cost of the algorithm due to more frequent feature extraction. These parameters can be optimized according to several aspects: the features used and their properties and complexity, latency requirements, or available computational resources. If none of these is limiting, the parameters are generally optimized for performance. It is interesting to note how much performance can change depending on these choices. More importantly, the parameter choice and the reasoning behind it should be documented in papers.
### _Evaluation metrics_
For temporal and sequential data, standard performance evaluation metrics, such as sensitivity and specificity, may not always be the most appropriate and can even be misleading [16]. Evaluation metrics must ultimately reflect the needs of users and also be sufficiently sensitive to guide algorithm development [13]. As Shah et al. stated in [17], there is a lack of standardization in the evaluation of sequential decoding systems in the bioengineering community.
The same authors compare five popular scoring metrics for sequential data in [13]. Among them, the most interesting are 'Epoch-based sampling' (EPOCH), 'Any-overlap' (OVLP), and 'Time-aligned event scoring' (TAES). EPOCH treats the reference and hypothesis as temporal signals, samples them at a fixed epoch duration, and counts errors (TP, TN, FP, and FN) accordingly. For an epoch duration of one sample, this metric processes data sample by sample and yields the typical performance measures (such as accuracy, sensitivity, specificity, and F1 score). The OVLP measure [18] interprets signals as a series of same-label episodes and then assesses the overlap in time between reference and hypothesis, counting a 'hit' whenever there is any overlap between them. In Fig. 2 we illustrate several use cases, how errors are counted, and the resulting performance measure. The authors in [13] also propose the TAES metric, which combines the EPOCH and OVLP information into one metric. The approach is very similar to OVLP, but rather than simply considering whether there is any overlap between reference and hypothesis episodes, the percentage of overlap is measured and the errors (TP, TN, FP, FN) are weighted accordingly. Here, we want to demonstrate how much performance can differ in the epilepsy detection use case depending on the chosen performance measure, and also how these performance metrics can be used to interpret the quality of algorithm predictions. The code for these metrics is available online1.
Footnote 1: [https://c4science.ch/source/PerformanceMetricsLib/](https://c4science.ch/source/PerformanceMetricsLib/)
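The official implementations are linked above; the sketch below is our own simplified version (not the library code) and illustrates the difference between sample-by-sample EPOCH counting and episode-level OVLP counting on binary label vectors.

```python
import numpy as np

def epoch_counts(ref, hyp):
    """Duration-based (EPOCH, 1-sample epochs): count errors sample by sample."""
    ref, hyp = np.asarray(ref, bool), np.asarray(hyp, bool)
    tp = np.sum(ref & hyp); tn = np.sum(~ref & ~hyp)
    fp = np.sum(~ref & hyp); fn = np.sum(ref & ~hyp)
    return tp, tn, fp, fn

def episodes(labels):
    """Return (start, end) index pairs of contiguous positive runs (end exclusive)."""
    padded = np.concatenate([[False], np.asarray(labels, bool), [False]]).astype(int)
    edges = np.flatnonzero(np.diff(padded))
    return list(zip(edges[::2], edges[1::2]))

def ovlp_counts(ref, hyp):
    """Episode-based (OVLP): a reference episode is a hit if any hypothesis overlaps it."""
    ref_arr, hyp_arr = np.asarray(ref, bool), np.asarray(hyp, bool)
    tp = sum(hyp_arr[s:e].any() for s, e in episodes(ref))       # detected seizures
    fn = len(episodes(ref)) - tp                                 # missed seizures
    fp = sum(not ref_arr[s:e].any() for s, e in episodes(hyp))   # spurious episodes
    return tp, fp, fn
```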
Another performance measure with a strong practical impact, and thus often used for epilepsy detection, is the false alarm rate (FAR), or the number of false positives per hour/day. Clinicians and patients see this measure as more meaningful than many commonly used metrics and demand that it be as low as possible for potential wearable applications (e.g., less than 1 FP/day) [17]. This also places exceptionally strict constraints on the required precision (usually well above 99%).
Finally, to quantify global performance, the accumulated performance of all cross-validation folds has to be calculated.
Fig. 2: Illustration of duration and episode-based performance metrics.
But here, too, choices have to be made. One can average the performance over all CV folds (macro-averaging) or, for example, append the predictions of all test files one after another and only then measure performance on all the appended data (micro-averaging). In Fig. 1, an example of predictions (also with moving-average post-processing) is given for all files of one subject. What is important to notice is the distribution of false positives over time and over the different files/CV folds. Most often, false positives occur around seizures. However, there can be a fold (or folds) with an unexpectedly large number of false positives. If the final performance is measured as an average of per-fold performances, a fold with many false positives, as in Fig. 1, will have a smaller influence on the total performance than if all predictions are appended and performance is measured only afterward. This potential overestimation of performance when averaging over cross-validation folds should be taken into account.
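The two aggregation styles can be contrasted in a few lines; the sketch below uses scikit-learn's `f1_score` purely for illustration.

```python
import numpy as np
from sklearn.metrics import f1_score

def macro_avg(y_true_folds, y_pred_folds):
    """Average of per-fold scores: a bad fold is diluted by the others."""
    return np.mean([f1_score(t, p) for t, p in zip(y_true_folds, y_pred_folds)])

def micro_avg(y_true_folds, y_pred_folds):
    """Predictions appended in time, scored once: every error counts equally."""
    return f1_score(np.concatenate(y_true_folds), np.concatenate(y_pred_folds))
```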
## 3 Experimental setup
### Dataset
In this work, we use the CHB-MIT epilepsy dataset, an open-source and widely used dataset for epilepsy detection [19], as it is a good representative of continuous, relatively long-term monitoring (over several days). CHB-MIT is an EEG database with a total of 982.9 hours of data recorded at 256 Hz. It contains 183 seizures forming a total of 3.2 hours, or 0.32%, of labeled ictal data, from 24 subjects with medically resistant seizures ranging in age from 1.5 to 22 years. On average, it has 7.6 \(\pm\) 5.8 seizures per subject, with an average seizure length of 58.6 \(\pm\) 65.0 s. It was recorded using the bipolar montage (10-20 system) and thus contains between 23 and 26 channels, of which we use the 18 channels that are common to all patients.
### Machine learning training
We extract 19 features from each of the 18 channels, similar to [20], calculating them on 4-second windows with a moving step of 0.5 seconds (unless otherwise specified). We use two time-domain features, mean amplitude and line length, and 17 frequency-domain features. Both relative and absolute values of the power spectral density in the five common brain-wave frequency bands are used (delta: [0.5-4] Hz, theta: [4-8] Hz, alpha: [8-12] Hz, beta: [12-30] Hz, gamma: [30-45] Hz), as well as two low-frequency components: [0-0.5] Hz and [0.1-0.5] Hz. Before extracting the features, the data are filtered with a 4th-order, zero-phase Butterworth bandpass filter between [1, 20] Hz.
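A simplified per-channel sketch of this pipeline is given below; it covers the preprocessing filter, the two time-domain features, and the five band powers (absolute and relative), omitting the two extra low-frequency bands, so the exact feature count of the original code differs.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, welch

FS = 256  # CHB-MIT sampling rate (Hz)
BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 12),
         "beta": (12, 30), "gamma": (30, 45)}

def preprocess(x):
    """Zero-phase 4th-order Butterworth bandpass, [1, 20] Hz."""
    sos = butter(4, [1, 20], btype="bandpass", fs=FS, output="sos")
    return sosfiltfilt(sos, x)

def window_features(w):
    """Features for one 4 s window of a single channel."""
    feats = [np.mean(np.abs(w)),              # mean amplitude
             np.sum(np.abs(np.diff(w)))]      # line length
    f, psd = welch(w, fs=FS, nperseg=len(w))
    total = np.trapz(psd, f)
    for lo, hi in BANDS.values():
        mask = (f >= lo) & (f < hi)
        band = np.trapz(psd[mask], f[mask])
        feats += [band, band / total]         # absolute and relative power
    return feats

def extract(x, ws=4 * FS, step=FS // 2):
    """Moving-window extraction: 4 s windows shifted in 0.5 s steps."""
    x = preprocess(x)
    return np.array([window_features(x[s:s + ws])
                     for s in range(0, len(x) - ws + 1, step)])
```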
As the algorithm with which to test the range of parameters mentioned, we choose a highly popular algorithm that is also feasible for wearable and outpatient monitoring devices. We implemented a random forest classifier based on an ensemble of 100 decision trees to reduce model overfitting. It is fast and lightweight, both in model size and memory footprint [21], and has been used extensively for EEG-based seizure classification [22, 23, 24]. In the end, we post-process the predicted labels with a 5-second moving-average window and majority voting to smooth predictions and remove unrealistically short seizures, as sketched below. If seizures are closer than 30 s, we merge them into one.
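A sketch of the classifier and the label post-processing is shown below; the thresholds follow the values stated above, while names such as `X_train` are placeholders rather than variables from the original code.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

LABELS_PER_SEC = 2  # one prediction every 0.5 s window step

def postprocess(pred, avg_win=5 * LABELS_PER_SEC, merge_gap=30 * LABELS_PER_SEC):
    """5 s moving-average majority vote, then merge seizures closer than 30 s."""
    pred = np.asarray(pred, dtype=int)
    voted = (np.convolve(pred, np.ones(avg_win) / avg_win, mode="same") > 0.5).astype(int)
    out = voted.copy()
    pos = np.flatnonzero(voted)
    for a, b in zip(pos[:-1], pos[1:]):   # fill short gaps between positive runs
        if 1 < b - a <= merge_gap:
            out[a:b] = 1
    return out

clf = RandomForestClassifier(n_estimators=100)
# Placeholders: X_train/y_train/X_test would come from the feature extraction above.
# clf.fit(X_train, y_train)
# y_smooth = postprocess(clf.predict(X_test))
```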
## 4 Experimental results
### Evaluation metrics
In this work, we use two metrics to measure performance. One is the EPOCH approach described in Sec. 2.5, with an epoch duration of one sample: TP, TN, FP, and FN are counted sample by sample and then used to calculate the sensitivity (TPR), precision (PPV), and F1 score. We call this duration-based performance, as it characterizes how well seizures are detected with respect to their full length. The other performance metric is OVLP, which detects overlaps between predicted (hypothesis) and reference seizure or non-seizure episodes. We call this an episode-based metric, as it only reflects whether each seizure episode has been detected, not the predicted seizure duration. These two metrics are very easy to interpret. For example, if the episode-level sensitivity is 80% and there were 10 seizures, then 8 episodes were detected and 2 were missed. The TAES metric proposed in [13] is an interesting approach that combines both metrics but is harder to interpret, and thus it is not used here.
Fig. 3 shows the average performance of personalized models for the 24 subjects on the balanced CHB-MIT dataset. For both episode- and duration-based performance, sensitivity, precision, and F1 scores are shown, as well as one cumulative measure, the mean of the episode- and duration-level F1 scores ('F1_DE'). Episode-level sensitivity is on average 100%, which means that, except for a few cases, all seizure episodes were detected. Looking at duration-level sensitivity, it is clear that even when seizure episodes were perfectly detected, their full duration was not always correctly predicted.
Figure 4: Epilepsy detection results depending on the data subset used.
Figure 3: Epilepsy detection performance measured through seven measures; both on episode and duration level. For each, sensitivity (TPR), precision (PPV) and F1 score are measured. The results show average performance for all 24 subjects for ‘Fact1’ data subset.
Looking at the precision, it is clear that there are also false-positive predictions, and more of them when measuring at the episode level than at the duration level, meaning that there were many short false positives. Observing the performance through these six values enables a more complete characterization of the prediction performance of an algorithm. It also enables a more nuanced comparison between different methodological steps or parameter values, as will be shown next.
### _Generalized vs. personalized models_
Here we test the performance difference when training both personalized and generalized models for each subject, using a balanced data subset ('Fact1'). In Fig. 6, the average over all subjects is shown, making the lower performance of generalized models clear. Inspecting the performance of the generalized model per subject reveals a clear distinction between patients for whom generalized models perform very well and those for whom they perform poorly, either due to many false positives or almost no detected seizures. How to create generalized models, and whether there should be subtypes of generalized models for different patient groups, remains a question for future research. For the remainder of this work, we focus on personalized models.
### _Data preparation_
In Fig. 4, the performance of epilepsy detection is shown for five different data subsets used for training and testing. The first two approaches, 'Fact1' and 'Fact10', contain a subset of the original CHB-MIT dataset with two different ratios of seizure and non-seizure data. 'Fact1' is a balanced data subset with the same amount of seizure and non-seizure data, where all available seizure data are used along with a randomly selected equal amount of non-seizure data. The 'Fact10' subset is constructed similarly, with the difference that the amount of randomly selected non-seizure data is 10x larger than the seizure data. Data are divided into as many files as there are seizures, with one seizure per file. Each file is arranged such that the seizure data occur in the middle of the file, with non-seizure data split on both sides. The total file length therefore depends on the length of the seizure and the factor value. This organization enables easier training in the case of the leave-one-seizure-out approach, as each file is equally balanced.
The last three approaches, seizure-to-seizure ('StoS') and 1- or 4-hour windows ('Win1h'/'Win4h', jointly 'WinXh'), contain all data samples from the CHB-MIT database but rearranged into files containing different amounts of data. The 'StoS' approach consists of files that start with the non-seizure data after the previous seizure and end when the next seizure ends. In this way, every file contains exactly one seizure, but the total file length is not fixed. The last two approaches, 'Win1h' and 'Win4h', as the names imply, divide the dataset into files of 1-hour or 4-hour duration, so a file may contain zero to possibly multiple seizures. In all three cases, we trained using time-series cross-validation. We required the first file to contain a certain amount of data (five hours) and at least one seizure, so it differs slightly from the subsequent files.
We analyze the impact of data preparation considering three metrics: false alarm rate (FAR), sensitivity (TPR), and precision (PPV). First, the false alarm rate, defined here as the number of FPs per day, is significantly higher for the data subsets (Fact1(0)) than for whole-dataset training (StoS or Win1h/4h), as shown in Fig. 4. This can be traced to two reasons. The first is that Fact1(0) is trained on much less non-seizure data, potentially not enough to properly model the non-seizure patterns, resulting in more non-seizure samples being falsely classified. The second is that because testing is done only on a subset of data, the FPs must be linearly scaled to estimate the false alarm rate per day, potentially leading to very high numbers. For these reasons, data subsets should not be used to estimate the false alarm rate of an epilepsy detection algorithm.
Next, we consider the sensitivity and precision. For Fact1(0), seizure episode detection is easier, visible from the episode-level sensitivity of 100%. Detecting all seizure episodes perfectly is much harder when the entire dataset (StoS or WinXh) is used, visible from episode sensitivity values ranging between 80 and 95%. Precision presents a more complex picture, dropping sharply for StoS before recovering for WinXh. This is because StoS retraining and testing only occurs after every new seizure, between which there could be hours of non-seizure data. The WinXh strategies retrain much more often, making it easier to learn non-seizure patterns and lowering false predictions. To conclude, using all data significantly reduces false positives, but also results in lower, yet more realistic, sensitivity and precision values.
### _Respecting temporal data dependencies_
To show the influence of cross-validation choices on performance results, we trained and tested three data
Fig. 5: Performance results when using leave-one-out vs. time-series cross-validation. Results are shown for three data subsets (F1, F10 and StoS).
Fig. 6: Comparison of average performance (for all subjects) of personalized and generalized models.
subsets with different data imbalance ratios ('Fact1', 'Fact10', 'StoS'), using both the leave-one-seizure-out (L1O) and time-series cross-validation (TSCV) approaches. The results are shown in Fig. 5. The superior performance of the L1O approach is evident in almost all aspects (sensitivity, precision, and number of FPs). This is reasonable, as more data were used for training with L1O than with the TSCV approach. The difference in performance ranged from 3 to 7% for the episode-level F1 score. This is not a recommendation to use L1O; on the contrary, it demonstrates that training on future data leads to overestimated performance and should be avoided.
### Data segmentation
In Fig. 7, the performance of epilepsy detection is shown for window steps ranging from 0.5 to 4 s, with a window size of 4 s. The clearest and expected pattern is that increasing the step size significantly reduces the number of false positives; interestingly, however, the proportion of FPs increases, thus reducing the episode-level precision. More precisely, precision first increases (while sensitivity is still high) and then drops at the episode level (because fewer seizures were detected), leading to the conclusion that overly large steps are risky. Increasing the window step size also reduces sensitivity, more noticeably at the duration level, which may be because with large steps shorter seizures can be missed. These results demonstrate the complexity of the window step size parameter, that it is beneficial to experiment before settling on one value, and that it necessarily has to be reported to make the results comparable and reproducible.
## 5 Discussion
### Data aspects
As seen from the results in Sec. 4.3, using all the data significantly reduces false positives and also results in lower but more realistic sensitivity values. Thus, if computational and memory resources are sufficient, models should be trained using all available data. If this is not possible, the data subset should contain significantly more non-seizure than seizure data. However, data subsets will still result in an unrealistic false alarm rate. When using the whole dataset, the most appropriate approach seems to be fixed-size time frames in which models are retrained/updated regularly. The size of this time frame should also be tested and reported. Our general advice is to use data subsets for initial experimentation and for building an understanding of the algorithm and its parameters, but to use all the available data when reporting final performance. It is also useful to take the class imbalance into account and characterize it.
When talking about the temporal aspect of data, several things should be taken into account. We advise not to shuffle data samples before training and testing but rather to use temporal information and knowledge on class distribution to postprocess predicted labels, which can increase performance, but must also be clearly reported.
Finally, it is critical to decide whether to use only the data from the same subject to create personalized and, as shown, more precise models, or to use all available data from other subjects to create generalized models. Generalized models can have lower individual performance but can be applied to new subjects. This represents a full research topic on its own. For example, future research should investigate whether it is possible to create models that use generalized models as a starting point from which they are then personalized. Would these models lead to better overall performance than personalized models? Would less personal data be needed to personalize models if generalized models are used as the starting point? Similarly, the unavoidable question is whether we can somehow profit from both generalized and personalized models. Can we combine them in some beneficial way?
### Training aspects
Regarding the choice of cross-validation, as shown in Sec. 4.4, the leave-one-out approach leads to higher performance than training the data in temporal order using time-series cross-validation. However, the L1O approach is not realistic for training on data in real time, while TSCV is intended for exactly such scenarios. Training models online as data are being acquired is one of the necessary next steps for ML models on IoT devices, and thus TSCV will have to become the standard method.
Data segmentation has two parameters that can also play a significant role in performance: the window size used to extract features and the window step size. Their optimal choice can depend on the use case, the features extracted and their properties and complexity, latency requirements, and available computational resources. Here, we show how the window step size can influence performance, with different patterns for false alarms, sensitivity, and precision, and how it impacts duration- and episode-level classification differently. The results show the complexity of the window step size parameter, indicating that it is beneficial to test it before choosing one value and that it must be reported to make results comparable with the literature. One research avenue that we have not considered here, as it is outside the scope of this work but could be very beneficial, is optimizing the window size parameter for each feature individually.
### Performance estimation aspects
Here we proposed to use two performance metrics, one at the duration-level and one at the episode-level. Each of them has certain advantages, and thus their values should be interpreted carefully. Nevertheless, together they provide a
Figure 7: Performance with respect to different window step sizes. Steps of 0.5, 1, 2 and 4s were used, with window size of 4s.
full picture of the detection characteristics of the algorithm analyzed.
For example, EPOCH, a duration-based metric, cares about the duration of events and thus weights long events more heavily. This means that if a signal contains one very long seizure event and some shorter ones, the accuracy with which the long event is detected will dominate the overall score. In epilepsy detection, as in many applications, the duration of an event can vary dramatically; therefore, this must be taken into account. For this reason, OVLP, an episode-level performance metric, is much easier to interpret. However, such an episode-level metric is more permissive and tends to produce much higher sensitivities. It can also be implemented so that an event detected in close proximity to the reference annotation is considered correctly detected, which can further increase the performance values.
Nowadays, in the literature, duration-level-based performance is still the most popular, but there are trends of moving toward more event/episode-based performance measures [13]. Currently, there is no standardization. Until then, the performance metrics used, as well as post-processing that has been utilized to smooth the labels, must be clearly described.
Similarly, the method used to obtain the overall performance measure from the individual CV folds must be documented. We recommend that overall performance be calculated by temporally appending all fold predictions, rather than as the average of all per-fold performances. For example, if one CV fold (or a small portion, as in Fig. 1) has an extremely high number of false positives, but all other folds perform well, averaging over folds will dilute its effect on the overall estimate of, e.g., precision, leading to potentially overestimated performance.
## 6 Conclusion
In this work, we have characterized the influence of a wide range of important methodological choices for epilepsy detection systems. When choosing a subset of the dataset for training, performance can be highly overestimated compared to training on the entire long-term dataset. Thus, for real-life performance estimation, using all long-term data is necessary. Similarly, using the leave-one-seizure-out cross-validation approach can improve detection performance, but it is not realistic for online data training, as it uses future data. Thus, we recommend using a time-series cross-validation approach instead, with fold predictions appended before scoring (micro-averaging) rather than averaging per-fold performances (macro-averaging). Training on a generalized level can be challenging due to the subject-specific nature of the data, leaving personalized models outperforming generalized ones. Furthermore, performance metrics must reflect users' needs and be sufficiently sensitive to guide algorithm development. Consequently, we encourage the use of both episode-based and duration-based performance metrics, which together give a more nuanced picture of algorithm performance. Finally, whatever choices are made, it is essential that all choices and parameters are well reported, to further increase the comparability and reproducibility of results.
|
2308.05542 | Robust Asymmetric Loss for Multi-Label Long-Tailed Learning | In real medical data, training samples typically show long-tailed
distributions with multiple labels. Class distribution of the medical data has
a long-tailed shape, in which the incidence of different diseases is quite
varied, and at the same time, it is not unusual for images taken from
symptomatic patients to be multi-label diseases. Therefore, in this paper, we
concurrently address these two issues by putting forth a robust asymmetric loss
on the polynomial function. Since our loss tackles both long-tailed and
multi-label classification problems simultaneously, it leads to a complex
design of the loss function with a large number of hyper-parameters. Although a
model can be highly fine-tuned due to a large number of hyper-parameters, it is
difficult to optimize all hyper-parameters at the same time, and there might be
a risk of overfitting a model. Therefore, we regularize the loss function using
the Hill loss approach, which is beneficial to be less sensitive against the
numerous hyper-parameters so that it reduces the risk of overfitting the model.
For this reason, the proposed loss is a generic method that can be applied to
most medical image classification tasks and does not make the training process
more time-consuming. We demonstrate that the proposed robust asymmetric loss
performs favorably against the long-tailed with multi-label medical image
classification in addition to the various long-tailed single-label datasets.
Notably, our method achieves Top-5 results on the CXR-LT dataset of the ICCV
CVAMD 2023 competition. We opensource our implementation of the robust
asymmetric loss in the public repository: https://github.com/kalelpark/RAL. | Wongi Park, Inhyuk Park, Sungeun Kim, Jongbin Ryu | 2023-08-10T12:41:08Z | http://arxiv.org/abs/2308.05542v1 | # Robust Asymmetric Loss for Multi-Label Long-Tailed Learning
###### Abstract
In real medical data, training samples typically show long-tailed distributions with multiple labels. The class distribution of medical data has a long-tailed shape, in which the incidence of different diseases varies widely, and at the same time, it is not unusual for images taken from symptomatic patients to carry multiple disease labels. Therefore, in this paper, we concurrently address these two issues by putting forth a robust asymmetric loss based on a polynomial function. Since our loss tackles both the long-tailed and multi-label classification problems simultaneously, it leads to a complex loss design with a large number of hyper-parameters. Although a model can be finely tuned thanks to the many hyper-parameters, it is difficult to optimize all of them at the same time, and there is a risk of overfitting the model. Therefore, we regularize the loss function using the Hill loss approach, which makes it less sensitive to the numerous hyper-parameters and thus reduces the risk of overfitting. For this reason, the proposed loss is a generic method that can be applied to most medical image classification tasks and does not make the training process more time-consuming. We demonstrate that the proposed robust asymmetric loss performs favorably on long-tailed multi-label medical image classification as well as on various long-tailed single-label datasets. Notably, our method achieves Top-5 results on the CXR-LT dataset of the ICCV CVAMD 2023 competition. We open-source our implementation of the robust asymmetric loss in the public repository: [https://github.com/kalelpark/RAL](https://github.com/kalelpark/RAL).
## 1 Introduction
Multi-label classification, which predicts more than one label from a single image, has received considerable interest in recent years. Especially in the field of medical image recognition, several studies [1, 27, 43, 32] have been conducted to tackle the problem of the coexistence of multiple symptoms in a single radiology image. However, these multi-label classification studies have overlooked another critical issue: the long-tailed distribution of medical data [7, 5, 43, 18]. In general, for multi-label data, the more classes are used, the more long-tailed the distribution becomes. In this case, for classes with fewer labels (tail labels), the performance of the model drops significantly, and the model becomes biased toward the data of the head labels with more training samples. It is therefore challenging to generalize the learned model in practice. To address this imbalanced data distribution, there have been studies that re-sample[2, 30, 35] or re-weight[39, 19, 12] the data to make the model learn more from the tail labels, but these studies have not addressed the long-tailed problem in a multi-label classification setting.
Another issue is that many studies[38, 15, 37] exploit
Figure 1: Label distribution of the CXR-LT dataset[17]. Typical radiological images have a long-tailed label distribution, since there are few positive samples for certain classes. Furthermore, a single radiological image contains multiple classes in most cases. This long-tailed, multi-label structure is a common but critical issue in real-world medical image recognition tasks.
additional resources to tackle the long-tailed distribution and multi-label classification problems. Using larger models or increasing computational complexity can help with such problems, but the high budget limits practical applicability. Therefore, in this paper, we propose a robust asymmetric loss that does not require additional resources to learn from multi-label long-tailed medical data. The proposed loss is based on asymmetric weighting, which ensures that the loss of negative samples is weighted differently from that of positive samples, so that even hard negative samples can be learned robustly. Compared to the existing cross-entropy, focal loss[23], asymmetric loss[4], and balanced loss[11], the proposed robust asymmetric loss learns long-tailed multi-label data effectively and reliably while being less sensitive to hyper-parameters. Specifically, we re-weight the negative samples by adopting the Hill loss[42], so that our loss performs favorably on hard negative samples while remaining robust to the settings of the various hyper-parameters. To this end, we expand the asymmetric loss using a Taylor-series-based approach[20] to account for the negative loss. The Taylor series ensures that negative samples below a certain threshold are not used for training, so that a stable gradient can be passed to the deep neural network.
We evaluate the proposed method on CXR-LT, a long-tailed multi-label medical classification dataset, and demonstrate that ours improves classification performance over existing methods. Notably, our method achieved 0.351 mAP, within the Top-5 of the final ranking of the ICCV CVAMD 2023 competition. Furthermore, we show that our robust asymmetric loss also works well on long-tailed distributions of single-label medical datasets. For this evaluation, we utilize the ISIC2018 and APTOS2019 datasets and show that our method achieves considerably better performance than the existing methods.
We present the contributions of this paper as follows.
* We propose robust asymmetric loss, which is effective for long-tailed multi-label classification. The proposed loss can be finely tuned but is not sensitive to hyper-parameter settings.
* We improve the performance of long-tailed multi-label classification without additional training data, model parameters, or computational budget.
Figure 2: Examples of estimated probabilities of BCE, ASL[4], and our RAL on the CXR-LT[17] and ISIC2018[10] datasets. The color red denotes the positive labels. The model trained with BCE loss tends to overfit to a single label. In contrast, the model trained with ours assigns higher probabilities to multiple labels, implying that ours is more reliable for the long-tailed multi-label classification task.
* We achieved Top-5 results in the CVAMD2023 competition on the long-tailed multi-label CXR-LT dataset. In addition, we confirm that the proposed method works well on single-label medical image classification as well as the multi-label dataset.
## 2 Related Work
**Multi-Label** classification has been extensively studied to predict more than one class label[25, 16]. Recently, to understand the correlation between multiple labels, several studies have introduced network architectures that enable the model to predict the inherent relations between features and their corresponding labels. Most studies have used Graph Convolution Networks (GCN)[9, 40, 8] that learn label feature relations in a graph structure. Subsequently, semantic representations in images using attention mechanisms have been investigated in many studies[22, 36]. On the other hand, training algorithms have been proposed to tackle multi-label classification, such as investigations of sample re-weighting and class frequency[11, 29, 33]. More recently, asymmetric losses[4, 20] have been introduced to optimize the imbalanced positive and negative losses.
**Long-tailed** distributions have been regarded as a practical problem for real-world machine learning applications. Studies that address the long-tailed distribution problem can be divided into two categories. First, re-sampling methods[41, 21, 14] under-sample or over-sample the data according to its class distribution to construct a balanced training set. Re-weighting strategies assign different weights to the samples to adjust for the long-tailed distribution[23, 11]. However, there is an ambiguity in applying re-sampling methods to multi-label datasets: when a single image contains both head- and tail-class labels, it is difficult to determine whether the sample should be over- or under-sampled. For this reason, re-sampling methods are hardly applicable to multi-label classification, so we encounter the problem that most gradients are computed from negative samples. To address this issue, the second category of studies[13, 29, 33] deals with the long-tailed class distribution through the loss function. The focal loss[23] is the landmark method for tackling the long-tailed class distribution with a loss function. It adds modulating factors (_i.e_., focusing and balance parameters) to the cross-entropy loss so that the long-tailed distribution can be mitigated by controlling these factors. To further tailor the loss function, the asymmetric loss[20, 31] was proposed, which sets the focusing parameters of the negative and positive losses separately. Recently, the loss function has been expanded into a polynomial form[20] to use only several principal terms when computing the gradients. On the other hand, the Hill loss[42] was proposed to prevent the gradient from becoming too large for certain samples. The Hill loss performed well for multi-label classification, but it was not applied to loss formulations expanded as polynomials. Therefore, in this paper, we apply the Hill loss term to the polynomial expansion to learn a long-tailed multi-label classification model more accurately and robustly.
## 3 Method
In this section, we introduce our robust asymmetric loss function. We first describe the existing long-tailed and multi-class losses as the background of our loss function. Then, we introduce the robust asymmetric loss function by adding the Hill loss term to the polynomial function.
Figure 3: Grad-CAM visualization [34] from the model trained by the proposed RAL.
### Long-tailed and Multi-label Classification Loss
Traditionally, multi-label classification tasks use the Binary Cross-Entropy (BCE) Loss as:
\[\mathcal{L}_{\mathcal{BCE}}=-\sum_{i=1}^{K}\left(y_{i}L_{i}^{+}+(1-y_{i})L_{i}^{ -}\right) \tag{1}\]
\[\left\{\begin{array}{l}\mathcal{L}^{+}=\log(\hat{y})\\ \mathcal{L}^{-}=\log(1-\hat{y})\end{array}\right., \tag{2}\]
where \(L_{i}^{+}\) and \(L_{i}^{-}\) are positive and negative sample losses and \(y\) and \(\hat{y}\) denote the ground-truth and estimated probability for the class labels.
However, since the BCE function assigns the same weight to all class samples in training data with a long-tailed distribution, it focuses excessively on learning the head classes with a large number of training samples. This problem is addressed by the focal loss[23]\(\mathcal{L}_{Focal}\), which balances the positive and negative losses as:
\[\left\{\begin{array}{l}\mathcal{L}_{Focal}^{+}=\alpha_{+}(1-\hat{y})^{ \gamma}\log(\hat{y})\\ \mathcal{L}_{Focal}^{-}=\alpha_{-}\hat{y}^{\gamma}\log(1-\hat{y})\end{array} \right., \tag{3}\]
where \(\alpha_{+}\) and \(\alpha_{-}\) represent the balancing parameter and \(\gamma\) denotes the focusing parameter that is the key hyper-parameters of the focal loss function. Controlling the hyper-parameters, the focal loss can balance the head- and tail-class samples. However, this focal loss has a weakness in that positive and negative losses share the same focusing parameter \(\gamma\). Therefore asymmetric weighting approach[31] for the loss function alleviates this problem by assigning different focusing parameters as:
\[\left\{\begin{array}{l}\mathcal{L}_{ASL}^{+}=(1-\hat{y})^{\gamma^{+}}\log(\hat{y})\\ \mathcal{L}_{ASL}^{-}=\hat{y}_{\tau}^{\gamma^{-}}\log(1-\hat{y}_{\tau})\end{array}\right. \tag{4}\]

\[\hat{y}_{\tau}=\max(\hat{y}-\tau,0),\]
where \(\gamma^{+}\) and \(\gamma^{-}\) are the positive and negative focusing parameters and \(\hat{y}_{\tau}\) denotes the rectified probability thresholded by \(\tau\). This ASymmetric Loss (ASL) optimizes the training of positive and negative samples separately and mitigates the vanishing-gradient problem caused by too small a value of \(\hat{y}\) in the negative loss.
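As a quick numerical illustration (with arbitrary hyper-parameter values, not those used in this paper), the sketch below contrasts how the focal and ASL negative-loss terms weight an easy versus a hard negative sample:

```python
import torch

y_hat = torch.tensor([0.02, 0.9])   # easy vs. hard negative predicted probabilities
gamma, gamma_neg, tau = 2.0, 4.0, 0.05

# magnitudes of the negative-loss terms (signs flipped for readability)
focal_neg = y_hat ** gamma * torch.log(1 - y_hat).neg()
y_shift = torch.clamp(y_hat - tau, min=0.0)   # probability shifting in ASL
asl_neg = y_shift ** gamma_neg * torch.log(1 - y_shift).neg()

# ASL fully discards the easy negative (shifted probability is 0),
# while the hard negative still contributes a sizable loss.
print(focal_neg, asl_neg)
```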
### Robust Asymmetric Loss
The asymmetric loss can be expanded into a polynomial form using a Taylor-series[24]-based method[20]. In the polynomial form, using several principal low-order terms can improve the performance of multi-label classification tasks, because the higher-order terms can be regarded as noise or redundant. Therefore, the asymmetric polynomial loss is formulated as follows:
\[\left\{\begin{array}{l}\mathcal{L}_{APL}^{+}=y\sum_{m=1}^{M}\alpha_{m}(1- \hat{y})^{m+\gamma^{+}}\\ \mathcal{L}_{APL}^{-}=(1-y)\sum_{n=1}^{N}\beta_{n}\hat{y}_{\tau}^{n+\gamma^{- }}\end{array}\right., \tag{5}\]
where \(M\) and \(N\) are parameters that determine the number of low-order terms to be used in the positive and negative losses, and \(\alpha_{m}\) and \(\beta_{n}\) stand for the balance parameter of each term in the positive and negative losses. This
\begin{table}
\begin{tabular}{c c c c c c} \multirow{2}{*}{Label} & \multicolumn{2}{c}{Positive} & \multicolumn{2}{c}{Positive} \\ \cline{2-5} & \#Sample(K) & \multicolumn{1}{c}{Portion(\%)} & \multicolumn{1}{c}{Label} & \multicolumn{1}{c}{\#Sample(K)} & \multicolumn{1}{c}{Portion(\%)} \\ \hline Atelectasis & 67.6 & 10.6 & Mass & 5.5 & 0.9 \\ Calcification & 4.3 & 0.7 & No Finding & 41.8 & 6.6 \\ Cardiomegaly & 76.9 & 12.1 & Nodule & 7.6 & 1.2 \\ Consolidation & 16.0 & 2.5 & Pleural Effusion & 69.2 & 10.8 \\ Edema & 38.6 & 6.1 & Pleural Other & 0.6 & 0.1 \\ Emphysema & 4.3 & 0.7 & Pleural Thickening & 3.3 & 0.5 \\ Cardiomediastinum & 30.1 & 4.7 & Pneumomediastinum & 0.7 & 0.1 \\ Fibrosis & 1.1 & 0.2 & Pneumonia & 49.1 & 7.6 \\ Fracture & 11.9 & 1.9 & Pneumoperitoneum & 0.5 & 0.1 \\ Hernia & 4.0 & 0.6 & Pneumothorax & 14.9 & 2.4 \\ Infiltration & 10.2 & 1.6 & Subcutaneous Emphysema & 2.4 & 0.4 \\ Lung Lesion & 2.5 & 0.4 & Support Devices & 89.1 & 14 \\ Lung Opacity & 79.9 & 12.6 & Tortuous Aorta & 3.4 & 0.6 \\ \hline \end{tabular}
\end{table}
Table 1: Specification of the CXR-LT dataset. It shows that the samples are heavily concentrated in a few classes, while several classes have very few samples.
Asymmetric Polynomial Loss (APL)[20] has the advantage of controlling the positive and negative losses on a term-by-term basis, but it also has the significant drawback of requiring a large number of hyper-parameters to be configured by the user. Optimizing such a large number of hyper-parameters can be time-consuming and often leads to overfitting the models.
To make training less sensitive to the numerous hyper-parameters, we introduce the robust asymmetric loss. Notably, especially in multi-label data, the number of negative samples is much larger than that of positives, so making the negative loss less sensitive is the most decisive factor in the long-tailed multi-label classification task. Therefore, we adopt the Hill loss[42] to prevent an excessively large gradient of the negative loss during learning. Adding the Hill loss term to APL, we define our Robust Asymmetric Loss (RAL) as:
\[\left\{\begin{array}{l}\mathcal{L}_{RAL}^{+}=y\sum_{m=1}^{M}\alpha_{m}(1- \hat{y})^{m+\gamma^{+}}\\ \mathcal{L}_{RAL}^{-}=\psi(\hat{y})\cdot(1-y)\sum_{n=1}^{N}\beta_{n}\hat{y}_{ \tau}^{n+\gamma^{-}}\end{array}\right. \tag{6}\]
\[\psi(\hat{y})=\lambda-\hat{y},\]
where \(\psi\) denotes the Hill loss term and \(\lambda\) is set to 1.5. Our RAL is robust to changes in the numerous hyper-parameters due to the less sensitive negative loss during training. In the negative loss, when \(\hat{y}\) is close to 0, that is, when the estimated probability of a training sample is close to the correct negative answer, the gradient is already small; in this regime, the hyper-parameters are therefore not sensitive. On the other hand, when the estimated probability is close to 1, i.e., for a hard negative sample, the gradient can become too large, making the network training sensitive to the hyper-parameter settings. Our RAL regularizes the gradient of these hard negative samples to make training less sensitive to the hyper-parameters. As we expand the asymmetric loss into polynomial form, the problem of setting too many hyper-parameters is unavoidable, so we propose RAL with the Hill loss term to alleviate it.
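A minimal PyTorch sketch of Eq. (6) is given below. The default hyper-parameter values are illustrative placeholders rather than the tuned values from the paper; the official implementation is in the repository linked in the abstract.

```python
import torch

def robust_asymmetric_loss(logits, targets, alphas=(1.0, 1.0), betas=(1.0, 1.0),
                           gamma_pos=0.0, gamma_neg=2.0, tau=0.05, lam=1.5):
    """RAL (Eq. 6): polynomial asymmetric loss with a Hill term on the negatives."""
    y_hat = torch.sigmoid(logits)
    y_shift = torch.clamp(y_hat - tau, min=0.0)          # probability shifting
    # positive part: sum_m alpha_m * (1 - y_hat)^(m + gamma_pos), m = 1..M
    pos = sum(a * (1.0 - y_hat) ** (m + 1 + gamma_pos) for m, a in enumerate(alphas))
    # negative part: Hill term (lam - y_hat) tempers hard negatives (y_hat -> 1)
    neg = (lam - y_hat) * sum(b * y_shift ** (n + 1 + gamma_neg)
                              for n, b in enumerate(betas))
    loss = targets * pos + (1.0 - targets) * neg
    return loss.mean()

# Usage sketch: logits and multi-hot targets of shape (batch, num_classes)
# loss = robust_asymmetric_loss(model(images), labels.float())
```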
## 4 Experiments Setup
### Dataset and Metrics
**CXR-LT.** The CXR-LT dataset used in the ICCV CVAMD 2023 competition contains 377,110 CXRs, each carrying at least one label among 26 clinical findings. The class labels consist of the "No Finding" class and 12 new disease labels introduced on top of the MIMIC-CXR-JPG labels, covering chest X-rays (CXR). The detailed composition of the CXR-LT dataset can be found in Table 1. We randomly divide the images into training and test sets with a ratio of 8:2.
**ISIC2018 and APTOS2019.** The ISIC2018 dataset has \(10,015\) skin images with 7 lesion classes, and the APTOS2019 dataset includes \(3,662\) diabetic retinopathy images with 5 disease classes. For the APTOS2019 and ISIC2018 datasets, we follow the same protocol as the previous study[28].
**Metric.** To evaluate our method on the multi-label CXR-LT dataset, we use three metrics: mean Average Precision (mAP), mean Area Under the Curve (mAUC), and F1-Score. For APTOS2019 and ISIC2018, which are single-label datasets, we use two metrics: Accuracy and F1-Score. The details of the three datasets are given in Table 2. The imbalance ratio, which measures the severity of the long-tailed distribution, is defined as \(N_{max}/N_{min}\), where \(N_{max}\) and \(N_{min}\) are the numbers of samples in the most and least frequent classes.
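For reference, a sketch of how such multi-label metrics can be computed with scikit-learn is shown below; the 0.5 threshold for the F1 score is an assumption, as the competition's exact thresholding protocol is not restated here.

```python
import numpy as np
from sklearn.metrics import average_precision_score, roc_auc_score, f1_score

def multilabel_metrics(y_true, y_score, thr=0.5):
    """y_true: (N, C) binary matrix; y_score: (N, C) predicted probabilities."""
    m_ap = average_precision_score(y_true, y_score, average="macro")   # mAP
    m_auc = roc_auc_score(y_true, y_score, average="macro")            # mAUC
    m_f1 = f1_score(y_true, (np.asarray(y_score) >= thr).astype(int),
                    average="macro")                                   # mF1
    return m_ap, m_auc, m_f1
```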
### Implementation details.
We use ConvNeXT-B[26] as the backbone for the proposed loss. We resize the input images to \(384\times 384\) and apply the data augmentation schemes of previous work[3, 9]. We train our networks using the Adam optimizer with 0.9 momentum and 0.001 weight decay. The batch size is 256, and the initial learning rate is set to \(1e-4\). Our networks are trained with PyTorch 1.11.0 on RTX A5000 GPUs.
\begin{table}
\begin{tabular}{c c c c} \hline \hline Dataset & classes & Samples & Imbalance Ratio \\ \hline CXR-LT & 26 & 377,110 & 142 \\ ISIC2018 & 7 & 10,015 & 58 \\ APTOS2019 & 5 & 3,662 & 10 \\ \hline \hline \end{tabular}
\end{table}
Table 2: The details of long-tailed medical datasets.
\begin{table}
\begin{tabular}{c c c c c} \hline \hline Method & Image size & mAP & mAUC & mF1 \\ \hline CE & 224 & 0.301 & 0.808 & 0.218 \\ & 384 & 0.314 & 0.813 & 0.227 \\ Focal loss[23] & 224 & 0.304 & 0.807 & 0.231 \\ & 384 & 0.295 & 0.803 & 0.224 \\ ASL[4] & 224 & 0.307 & 0.808 & 0.225 \\ & 384 & 0.317 & 0.811 & 0.237 \\ RAL (Ours) & 224 & 0.314 & 0.815 & 0.225 \\ & 384 & 0.323 & 0.817 & 0.233 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Experimental comparison on the loss functions with ours.
## 5 Experimental results
In this section, we show the experimental results to validate the effectiveness of RAL. We first compare the proposed RAL with previous state-of-the-art loss functions such as focal loss, LDAM, and ASL. We then dissect the proposed loss function into its component level to demonstrate its robustness. In this experiment, RAL performs well consistently for variations of numerous hyper-parameters. We also show that the proposed RAL works favorably on both multi- and single-label long-tailed medical image classification tasks. Further, we validate that ours is robust to several noisy conditions.
### Comparison on Loss Functions
In this subsection, we compare our RAL with other loss functions on three datasets: ISIC2018, APTOS2019, and CXR-LT. Tables 3 and 4 show the results submitted to the CVAMD 2023 competition site using our RAL during the development phase. In these results, our RAL achieves competitive performance compared to the others, and it performs well on most classes consistently in Table 4. Further, our proposed RAL works well on single-label long-tailed datasets such as ISIC2018 and APTOS2019, where it outperforms the other methods, as shown in Table 5. These findings highlight the competitiveness of our RAL across diverse long-tailed medical image classification datasets.
### Ablation Study
For a more in-depth analysis of the proposed method, we break RAL into three components in our ablation study: the focal, asymmetric, and Hill loss components, where the polynomial expansion is applied together with the Hill loss. All results of this ablation study are obtained using the ConvNeXt-B model with \(384\times 384\) images, as shown in Table 6. This ablation study demonstrates that each component of our RAL is effective for the long-tailed multi-label classification task.
### Robustness Analysis
We carry out further experiments to investigate the robustness of RAL. In these experiments, we introduce Gaussian blur, salt-and-pepper, and speckle noise into the original images of the CXR-LT dataset, as shown in Fig. 5. To validate that our method performs well even under noisy conditions,
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline Label & BCE & APL & RAL (Ours) & Label & BCE & APL & RAL (Ours) \\ \hline \hline Atelectasis & 0.578 & 0.599 & 0.610 & Mass & 0.159 & 0.200 & 0.222 \\ Calcification & 0.120 & 0.137 & 0.151 & No Finding & 0.445 & 0.477 & 0.479 \\ Cardiomegaly & 0.626 & 0.633 & 0.648 & Nodule & 0.148 & 0.205 & 0.234 \\ Consolidation & 0.203 & 0.209 & 0.224 & Pleural Effusion & 0.801 & 0.813 & 0.821 \\ Edema & 0.527 & 0.552 & 0.562 & Pleural Other & 0.015 & 0.048 & 0.059 \\ Emphysema & 0.258 & 0.313 & 0.334 & Pleural Thickening & 0.065 & 0.094 & 0.111 \\ Cardiomediastinum & 0.155 & 0.166 & 0.173 & Pneumomediastinum & 0.103 & 0.138 & 0.203 \\ Fibrosis & 0.099 & 0.121 & 0.131 & Pneumonia & 0.289 & 0.306 & 0.167 \\ Fracture & 0.175 & 0.223 & 0.270 & Pneumoperitoneum & 0.134 & 0.143 & 0.524 \\ Hernia & 0.483 & 0.509 & 0.560 & Pneumothorax & 0.394 & 0.478 & 0.483 \\ Infiltration & 0.058 & 0.061 & 0.075 & Emphysema & 0.391 & 0.459 & 0.544 \\ Lung Lesion & 0.054 & 0.059 & 0.079 & Support Devices & 0.892 & 0.906 & 0.913 \\ Lung Opacity & 0.579 & 0.601 & 0.613 & Tortuous Aorta & 0.055 & 0.052 & 0.056 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Experimental results of the development phase of the CVAMD 2023 competition. Our RAL works well in most cases compared to the other loss functions.
\begin{table}
\begin{tabular}{c c c c c} \hline \hline \multirow{2}{*}{method} & \multicolumn{2}{c}{ISIC2018} & \multicolumn{2}{c}{APTOS2019} \\ & \multicolumn{2}{c}{Accuracy F1-score} & \multicolumn{2}{c}{Accuracy F1-score} \\ \hline CE & 0.850 & 0.716 & 0.812 & 0.608 \\ Focal loss[23] & 0.861 & 0.735 & 0.815 & 0.629 \\ LDAM[6] & 0.849 & 0.728 & 0.813 & 0.620 \\ ASL\({}^{\dagger}\)[4] & 0.854 & 0.734 & 0.820 & 0.660 \\ RAL (Ours) & 0.852 & 0.740 & 0.826 & 0.673 \\ \hline \hline \end{tabular}
\end{table}
Table 5: Experimental results on the ISIC2018 and APTOS2019 datasets. \({\dagger}\) denotes the result from our implementation with the official code: [https://github.com/Alibaba-MIIL/ASL](https://github.com/Alibaba-MIIL/ASL).
we compare our method's mAUC to those of the Binary Cross-Entropy (BCE) loss and the ASymmetric Loss (ASL). Under noisy conditions, our method outperforms the others by about 1-3%, as shown in Table 8. Therefore, our RAL is empirically demonstrated to be more robust than the existing methods when the images are affected by noise.
### Performance Analysis on Hyper-parameters
We conduct further experiments to analyze the performance sensitivity to the hyper-parameter settings used in RAL. We compare ours with the Asymmetric Polynomial Loss (APL)[20], employing the same hyper-parameters for both.
First, we adjust the polynomial coefficient value of \(\alpha_{m}\) in Eq.6 for our RAL and APL. Figure 4 shows that, across all three datasets, our RAL is generally less sensitive to changes in the polynomial coefficient than APL, leading to less variance in the performance.
Furthermore, we evaluate the balance parameter \(\beta\), which governs the negative loss. Fig. 6(a) shows that for all \(\beta\) values, our RAL outperforms APL in terms of AUC score, highlighting the resilience of our RAL to the hyper-parameter settings. In addition, we experiment with the parameter \(\lambda\), which regularizes the gradient of the negative loss. Fig. 6(b) shows the F1 and AUC scores for different \(\lambda\) values; the best performance is obtained at \(\lambda=1.5\), consistent with the previous study[42].
## 6 Results on the CVAMD 2023 Competition
Our method ranks in the Top-5 of the ICCV CVAMD 2023 competition's final standings. To obtain this result, we scale the input images to \(1024\times 1024\) and use the ConvNeXT-B
\begin{table}
\begin{tabular}{c c c c c} \hline \hline Focal Loss & Asymmetric & Hill & mAP & mAUC \\ \hline ✓ & ✗ & ✗ & 0.295 & 0.803 \\ ✓ & ✓ & ✗ & 0.307 & 0.815 \\ ✓ & ✓ & ✓ & 0.323 & 0.817 \\ \hline \hline \end{tabular}
\end{table}
Table 6: Experimental result of the ablation study of the proposed RAL.
Figure 4: Experimental results for the evaluation of the polynomial coefficient. The Y-axis F1 Diff shows the difference from the best F1 score for each method (_i.e_., APL and our RAL). Therefore, a value below 0 indicates a larger difference from the best result and thus sensitivity to the polynomial coefficient hyper-parameter. In this result, RAL is considerably less sensitive than APL.
Figure 5: Examples of noisy images with Gaussian blur, speckle, and salt-and-pepper noise.
Figure 6: Experimental result of ours according to the hyper-parameters \(\beta\) and \(\lambda\). In (a), we conduct an evaluation to compare our RAL with APL with regard to \(\beta\) in Eq.6, the weight that regularizes the negative loss. In (b), we evaluate F1 and AUC scores in relation to \(\lambda\) in Eq.6, which is utilized to avoid significant gradients in the negative loss.
models from [26]. We increase the input image size in the test phase using the checkpoint saved during the development phase. We configure the hyper-parameters with the same values as in Sec. 4.2. In the final score of the competition, ours recorded 0.351 mAP, 0.837 mAUC, and 0.256 mF1, placing it in the Top-5 ranking. With an efficient loss function design, we improve performance on the multi-label long-tailed classification task of the CVAMD 2023 challenge without using additional model parameters or inference complexity. Table 9 shows the test-phase results of our three submissions.
## 7 Conclusion
In this paper, we introduce the Robust Asymmetric Loss (RAL) for long-tailed multi-label classification tasks on medical images. The proposed RAL trains models more robustly across a range of hyper-parameters without additional resources. RAL shows competitive results on long-tailed single- and multi-label datasets compared to previous state-of-the-art loss functions. Notably, we achieve a Top-5 ranking in the CVAMD 2023 competition using our method. We believe that future research can benefit from our findings and incorporate our loss into other work.
|
2302.06010 | Rod-climbing rheometry revisited | The rod-climbing or Weissenberg effect in which the free surface of a complex
fluid climbs a thin rotating rod is a popular and convincing experiment
demonstrating the existence of elasticity in polymeric fluids. The interface
shape depends on the rotation rate, fluid elasticity, surface tension, and
inertia. By solving the equations of motion in the low rotation rate limit for
a second-order fluid, a mathematical relationship between the interface
deflection and the fluid material functions, specifically the first and second
normal stress differences, emerges. This relationship has been used in the past
to measure the climbing constant, a combination of the first ($\Psi_{1,0}$) and
second ($\Psi_{2,0}$) normal stress difference coefficients from experimental
observations of rod-climbing in the low inertia limit. However, a quantitative
reconciliation of such observations with the capabilities of modern-day
torsional rheometers is lacking. To this end, we combine rod-climbing
experiments with both small amplitude oscillatory shear flow measurements and
steady shear measurements of the first normal stress difference from commercial
rheometers to quantify the values of both normal stress differences for a
series of polymer solutions. Furthermore, by retaining the oft-neglected
inertial terms, we show that the climbing constant
$\hat{\beta}=0.5\Psi_{1,0}+2\Psi_{2,0}$ can be measured even when the fluids,
in fact, experience rod descending. A climbing condition derived by considering
the competition between elasticity and inertial effects accurately predicts
whether a fluid will undergo rod-climbing or rod-descending. The analysis and
observations presented in this study establish rotating rod rheometry as a
prime candidate for measuring normal stress differences in polymeric fluids at
low shear rates that are often below commercial rheometers' sensitivity limits. | Rishabh V. More, Reid Patterson, Eugene Pashkovski, Gareth H. McKinley | 2023-02-12T21:53:34Z | http://arxiv.org/abs/2302.06010v1 | # Rod-climbing rheometry revisited
###### Abstract
The rod-climbing or "Weissenberg" effect in which the free surface of a complex fluid climbs a thin rotating rod is a popular and convincing experiment demonstrating the existence of elasticity in polymeric fluids. The interface shape and steady-state climbing height depend on the rotation rate, fluid elasticity (through the presence of normal stresses), surface tension, and inertia. By solving the equations of motion in the low rotation rate limit for a second-order fluid, a mathematical relationship between the interface deflection and the fluid material functions, specifically the first and second normal stress differences, emerges. This relationship has been used in the past to measure the climbing constant, a combination of the first \((\Psi_{1,0})\) and second \((\Psi_{2,0})\) normal stress difference coefficients, from experimental observations of rod-climbing in the low inertia limit. However, a quantitative reconciliation of such observations with the capabilities of modern-day torsional rheometers is lacking. To this end, we combine rod-climbing experiments with both small amplitude oscillatory shear flow measurements and steady shear measurements of the first normal stress difference from commercial rheometers to quantify the values of both \(\Psi_{1,0}\) and \(\Psi_{2,0}\) for a series of polymer solutions. Furthermore, by retaining the oft-neglected inertial terms, we show that the "climbing constant" \(\hat{\beta}=0.5\Psi_{1,0}+2\Psi_{2,0}\) can be measured even when the fluids, in fact, experience rod descending. A climbing condition derived by considering the competition between elasticity and inertial effects accurately predicts whether a fluid will undergo rod-climbing or rod-descending. Our results suggest a more general description, "rotating rod rheometry" instead of "rod-climbing rheometry," to be more apt and less restrictive. The analysis and observations presented in this study establish rotating rod rheometry as a prime candidate for measuring normal stress differences in polymeric fluids at low shear rates that are often below commercial rheometers' sensitivity limits.
## I Introduction
Knowledge of a fluid's rheological properties is an essential prerequisite for predicting the flow of a complex fluid in any desired application. A simple steady shear flow measurement is enough for generalized Newtonian fluids, as the shear-rate-dependent viscosity \(\eta(\dot{\gamma})\) is the only rheological property required to resolve the flow dynamics in a specific geometry of interest. However, the presence of fluid elasticity requires using multiple deformation protocols to build a thorough understanding of the material's rheological properties. Simple steady homogeneous shear flow, which is the most widely used test protocol, provides (in principle) quantitative information about the shear-rate dependence of three independent material functions, viz., the viscosity \(\eta(\dot{\gamma})\), the first normal stress difference \(N_{1}(\dot{\gamma})\) and the second normal stress difference \(N_{2}(\dot{\gamma})\). These normal stress differences are identically zero in Newtonian fluids. They are associated with nonlinear viscoelastic effects and hence are negligibly small in linear viscoelastic measurements [1]. Their first appearance comes as a second-order effect in the shear rate, represented using the first \((\Psi_{1})\) and the second \((\Psi_{2})\) normal stress coefficients such that \(N_{1}(\dot{\gamma})=\Psi_{1}\dot{\gamma}^{2}\) and \(N_{2}(\dot{\gamma})=\Psi_{2}\dot{\gamma}^{2}\), respectively [2; 3]. However, due to limitations on the torque and axial force transducer sensitivities of commercial rheometers, it is typically only possible to measure the material functions over a limited range of shear rate values, which can be determined _a priori_ from the transducer sensitivity limits [4].
In a strongly non-Newtonian fluid, \(N_{1}\) can be comparable to or even larger than the shear stress, \(\sigma\), at high shear rates. Consequently, \(N_{1}\) can typically be measured for many complex fluid systems using the very sensitive force re-balance transducer technology, which is now available in many commercial rheometers. Using a cone-and-plate (CP) geometry with a radius \(R\) and cone angle \(\theta_{0}\), the first normal stress difference \(N_{1}(\dot{\gamma})\) can be measured directly from the axial force \(F_{CP}\) acting on either the cone or the plate using [2]:
\[N_{1}(\dot{\gamma})=2F_{CP}(\dot{\gamma})/\pi R^{2}, \tag{1}\]
where \(\dot{\gamma}=\Omega/\theta_{0}\) and \(\Omega\) is the rotational speed of the conical fixture. At low shear rates, in a simple fluid with fading memory [1], \(N_{1}\) is expected to vary quadratically through the analytical relationship
\[\lim_{\dot{\gamma}\to 0}\frac{N_{1}}{\dot{\gamma}^{2}}=\Psi_{1}|_{\dot{ \gamma}\to 0}=\Psi_{1,0}=\lim_{\omega\to 0}\frac{2G^{\prime}(\omega)}{ \omega^{2}}. \tag{2}\]
Here the storage modulus \(G^{\prime}(\omega)\) can be measured with relatively high accuracy in small amplitude oscillatory shear (SAOS) flow at an oscillatory frequency \(\omega\). The subscript \({}_{0}\) (e.g., \(\Psi_{1,0}\)) on a rheological property such as \(\Psi_{1}\) denotes its value in the limit of a zero shear rate.
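For illustration, the limit in Eq. 2 can be evaluated numerically by a one-parameter fit to the terminal quadratic regime of \(G^{\prime}(\omega)\). The Python sketch below uses synthetic single-mode Maxwell data in place of a measured SAOS sweep; the modulus and relaxation time are illustrative assumptions, not values from this study.

```python
import numpy as np

# Synthetic SAOS data standing in for a measured frequency sweep:
# a single-mode Maxwell fluid with modulus G = 10 Pa and relaxation time
# tau = 0.1 s (illustrative values only).
G, tau = 10.0, 0.1
omega = np.logspace(-2, 1, 50)                            # angular frequency [rad/s]
G_prime = G * (omega * tau)**2 / (1 + (omega * tau)**2)   # storage modulus [Pa]

# Keep only the terminal regime omega*tau << 1, where Eq. 2 gives
# G' ~ (Psi_10/2) * omega^2
terminal = omega * tau < 0.1
# One-parameter least-squares fit of G' = c*omega^2 through the origin: c = Psi_10/2
c = np.sum(G_prime[terminal] * omega[terminal]**2) / np.sum(omega[terminal]**4)
Psi_10 = 2.0 * c

print(f"Psi_1,0 = {Psi_10:.4f} Pa.s^2 (exact Maxwell value 2*G*tau^2 = {2*G*tau**2:.4f})")
```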
On the other hand, the measurement of \(N_{2}\) poses many difficulties compared to \(\eta\) or \(N_{1}\), resulting in far less attention to its accurate determination. Over the years, many techniques have been proposed to determine \(N_{2}\) experimentally [5]. The most widespread approach is to use a cone-and-plate geometry for the direct measurement of \(N_{1}(\dot{\gamma})\) using Eq. 1 in conjunction with another technique that measures a combination of \(N_{1}\) and \(N_{2}\). The complementary techniques are discussed in great detail in the review by Maklad
and Poole [5] and include parallel-plate (PP) thrust measurement, offset cone-and-plate fixtures with distance adjustment, a cone-and-plate geometry with pressure distribution measurement, cone-and-partitioned plate, plate-and-ring geometry, and cone-and-ring geometry. Then an estimate of \(N_{2}\) can be obtained by an appropriate subtraction of the two independent experimental measurements. However, because \(N_{2}\) is often a small percentage of the value of \(N_{1}\), such approaches are fraught with experimental difficulties.
Although the determination of \(N_{2}\) using the combination of CP thrust and any of the supplementary measurements seems straightforward, many practical challenges arise, e.g., reduced values of the measured thrust due to inertia and/or secondary flows [6], amplified uncertainty due to subtracting two nearly equal values of the measured normal stress differences, differentiation of experimental data with respect to various other parameters, or building a much more complicated experimental setup to measure pressure gradients (or several thrust measurements) directly [5]. However, the most commonly encountered limitation in measuring \(N_{1}\) or \(N_{2}\) is that the torque or thrust measuring transducers have minimum sensitivity limits below which they cannot detect the forces or torques exerted by a complex fluid under shear. For example, the state-of-the-art ARES-G2 rheometer (TA Instruments) has a practical lower sensitivity limit of 0.001 N (0.1 gm-force) for the axial force \(F_{CP}\) in Eq. 1. So, a 40 mm CP geometry, for instance, will not be sensitive to a first normal stress difference below approximately \(N_{1}\lessapprox 1.6\) Pa. Practically, this means that quantitative estimation of the asymptotic quadratic behavior of \(N_{1}\) and \(N_{2}\) (or equivalently the first and second normal stress coefficients \(\Psi_{1}|_{\dot{\gamma}\to 0}=\Psi_{1,0}\) and \(\Psi_{2}|_{\dot{\gamma}\to 0}=\Psi_{2,0}\)) cannot be achieved using the above-mentioned techniques for many complex fluids of interest such as polymer solutions or concentrated suspensions.
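This sensitivity floor follows directly from Eq. 1; a quick numerical check (a minimal sketch, with the 0.001 N force resolution taken from the text and the fixture radii as stated):

```python
import numpy as np

F_min = 1e-3   # practical axial force resolution [N] quoted above for the ARES-G2

for R in (20e-3, 30e-3):   # radii of 40 mm and 60 mm diameter cone-and-plate fixtures [m]
    N1_min = 2 * F_min / (np.pi * R**2)   # smallest resolvable N1, from Eq. 1
    print(f"{2*R*1e3:.0f} mm fixture: N1_min = {N1_min:.2f} Pa")
# -> 1.59 Pa for the 40 mm fixture; a 60 mm fixture lowers this by (60/40)^2 = 2.25x
```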
It is well known that the normal stress differences can also lead to secondary flows due to the breaking of axisymmetry, for instance, in pipes of non-circular cross-sections. These secondary flows arise because normal stress differences lead to tensile (or compressive) stresses acting along the streamlines and vortex lines in the flow (depending on the signs of \(N_{1}\) and \(N_{2}\)). Similarly, non-zero contributions to the total stress acting on a deformable free surface (for instance, in flow down an open inclined trough or around a rotating rod) also lead to secondary flows. In such flows, the deformable shear-free surface acts as a very sensitive pressure "gauge," and hence, is a popular way to demonstrate visually and unequivocally the presence of normal stress differences in complex fluids. With careful work, the tilted trough (TT) experiment has been used to quantify \(N_{2}\) by measuring the free surface deflection in the steady shear flow generated by gravity when the open trough is inclined at an angle [7; 8]. The tilted trough technique offers great potential but requires a dedicated experimental facility, large volumes of fluid, and a complicated data-processing technique to extract \(N_{2}\). In addition, only a narrow range of shear stresses can be probed, making it difficult to accurately determine \(\Psi_{2,0}\) using the TT [5].
To overcome the above-mentioned limitations in reliably measuring \(\Psi_{1,0}\) and \(\Psi_{2,0}\) in the present study, we revisit the well-known rod-climbing effect. Following the original pioneering work by D.D. Joseph and co-workers on the rod-climbing rheometer [9; 10; 11; 12], measurements of the climbing height and how it varies with rotation rate have been used to estimate \(\Psi_{2,0}\) in polyisobutylene (PIB) Boger fluids with the same components but slightly different PIB concentrations (0.24 wt. % [13] and 0.1 wt. % [14]). However, this flow configuration has not received much attention in the ensuing three decades following these earlier studies, mainly because a reliable reconciliation of rod-climbing experiments with material functions measured in modern-day rheometers is still lacking. To this end, we present a protocol to robustly measure both \(\Psi_{1,0}\) and \(\Psi_{2,0}\) using a combination of rod-climbing, normal force measurements in steady shear, and SAOS measurements. The results presented here show that rod-climbing data can serve as an inexpensive supplement to data from commercial rheometers to enable measurements of both the first and second normal stress difference of a complex fluid in the low shear rate limit, which is typically beyond the sensitivity limits of commercial rheometers.
Figure 1: The rod-climbing or “Weissenberg” effect [15]. The free surface of an elastic fluid climbs a thin rotating rod with a radius \(a\) and an angular velocity \(\Omega\). The interface shape and the climbing height (\(\Delta h(\Omega,a)\)) compared to the static rise near the rod (\(h_{s}(a)\)) is primarily determined by the rate of rotation, the normal stresses in the fluid, along with surface tension and inertia.
## II The climbing height in the "rod-climbing" experiment
The "rod climbing" or Weissenberg effect [15] is illustrated in Fig. 1, where the free surface of a fluid climbs a thin rotating rod and serves as an indisputable experiment demonstrating the presence of non-linear elasticity in polymeric fluids. The presence of normal stresses in a fluid under shear leads to the idea of a streamwise or 'hoop' stress in a fluid experiencing a torsional shear flow around a thin rotating rod. These hoop stresses pull fluid elements radially inward toward the rotating rod. As a result of this secondary flow, the deformable free surface near the rod ascends to a height at which the additional hydrostatic pressure pushing the fluid downwards and outwards exactly balances the hoop stress pulling the fluid inwards. The interface shape \(h(\Omega,r,\alpha)\) and the climbing height \(h(\Omega,a,\alpha)\), where \(\Omega\) is the rod rotation speed, \(a\) is the rod radius, and \(\alpha\) is the contact angle, generally depend on the rod rotation rate and the fluid elasticity, as quantified by the two normal stress differences, surface tension, and inertia. The functional dependence can be determined by solving the governing equations of motion using a domain perturbation technique in the low rotation rate limit for a second-order fluid [9; 10; 11]. A detailed derivation is presented in Appendix A. The final solution (Eq. 21 in Appendix A) for the interface shape \(h(\Omega,r,\alpha)\) gives the following mathematical relationship between the climbing height \(h(\Omega,a,\alpha)\) and the material functions characterizing the fluid in terms of a specific combination of both normal stress differences called the "climbing constant" \(\hat{\beta}=0.5\Psi_{1,0}+2\Psi_{2,0}\)[10]:
\[h(\Omega,a,\alpha)=h_{s}(a,\alpha)+\frac{a}{2\left(\Gamma\rho g \right)^{1/2}}\left[\frac{4\hat{\beta}}{4+\sqrt{Bo}}-\frac{\rho a^{2}}{2 +\sqrt{Bo}}\right]\Omega^{2}+O(\Omega^{2}\alpha+\Omega^{4}). \tag{3}\]
Here \(h_{s}(a,\alpha)\) is the static climbing height arising from capillarity effects (in a fluid with surface tension \(\Gamma\) and contact angle \(\alpha\)). \(\rho\) is the fluid density, \(a\) is the rod radius which rotates with an angular speed \(\Omega\), and \(Bo=\rho ga^{2}/\Gamma\) is the Bond number with \(g\) being the acceleration due to gravity. Eq. 3 establishes the foundation of rod-climbing rheometry [5].
Figure 2: a) The rod-climbing rheometer includes a cylindrical beaker filled with the fluid of interest. A thin rod of radius \(a\) is submerged with its axis aligned with the axis of the beaker. As the rod rotates with an angular velocity \(\Omega\), the fluid interface climbs (or descends) if the climbing condition [see Sec. IV.4, Fig. 7] is (or is not) satisfied. The fluid is illuminated with a strong background light, and still images of the interface are captured using a digital camera (Nikon EOS 7D DSLR). b) In a weakly elastic fluid, inertial effects that dominate over the fluid elasticity result in a local dip in the free surface that we define as "rod-descending". In a more strongly elastic fluid, rod-climbing due to large normal stress differences can be visualized; for example, by c) shining a laser sheet in a plane perpendicular to the camera and d) illuminating the fluid using a strong backlight. The bulk of the measurements in this work are performed using this latter illumination setup. Scale bars in (b)-(d) are all 5 mm.
## III Methods and materials
The schematic of our experimental setup is presented in Fig. 2. A stress-controlled rheometer (TA Instruments AR-G2 Magnetic Bearing Rheometer) was modified to function as a rotating-rod rheometer. A precisely machined hollow steel tube of known diameter \(2a=9.525\) mm was attached concentrically to an 8 mm parallel plate geometry to serve as a thin cylindrical rod that can be rotated in a fluid reservoir. The polymeric fluid (see description below) was contained in a cylindrical glass beaker of diameter 100 mm and depth 50 mm. The original study by Joseph _et al._[10] recommends the beaker-to-rod diameter ratio to be at least ten so that edge effects on the climbing height measured in the low rotation speed regime are minimal [13; 14]. The rods were fully immersed in the beaker to a depth of \(10a\approx 45\) mm. It has been shown that the immersion depth of the rod into the fluid does not affect the rod-climbing height [13]. The position of the beaker was manually adjusted to align it concentrically with the rotation axis of the rotating rod. Using the AR-G2 rheometer motor to impose the rotation rate allowed us to control the rod rotation speed \(\Omega\) accurately. Photographs taken with a Nikon EOS 7D DSLR camera were used to capture the free surface shape and to measure the climbing height of the fluid free surface near the rod, \(h(\Omega,a,\alpha)\), at various rotational speeds. The photographs capture the interface with a spatial resolution of 0.05 mm/pixel, which can be determined from the magnification of the lens and the pixel resolution of the captured images.
Eq. 3 for the rod-climbing height at the rod surface (\(r=a\)) was derived assuming a semi-infinite fluid container and a domain perturbation approach (See Appendix A for a detailed derivation). All the higher-order terms in Eq. 3 involve computing secondary motions from complex boundary value problems. However, it has been shown that computing these higher-order terms is not necessary at low rotation rates and the expression
\[\begin{split}\Delta h(\Omega,a)&=h(\Omega,a,\alpha )-h_{s}(a,\alpha)\\ &\approx\frac{a}{2\left(\Gamma\rho g\right)^{1/2}}\left[\frac{4 \hat{\beta}}{4+\sqrt{Bo}}-\frac{\rho a^{2}}{2+\sqrt{Bo}}\right]\Omega^{2} \end{split} \tag{4}\]
is a good approximation for the changes in the free surface height in the small rotation speed limit [9; 10]. This means the change (or perturbation) in the climbing height \(\Delta h(\Omega,a)=h(\Omega,a,\alpha)-h_{s}(a,\alpha)\) is approximately independent of the contact angle \(\alpha\) in the small \(\Omega\) limit and scales linearly with \(\Omega^{2}\). From the rod-climbing experiment (Fig. 2), we measure the change in the climbing height due to rod rotation \(\Delta h(\Omega,a)\) compared to the static rise height \(h_{s}(a,\alpha)\) and plot it as a function of the imposed rotation speed. From this plot of \(\Delta h(\Omega,a)\) vs. \(\Omega^{2}\), we compute the slope \(\mathrm{d}\Delta h/\mathrm{d}\Omega^{2}\) in the quadratic regime corresponding to low values of \(\Omega^{2}\) and equate it to the theoretical slope
\[\frac{\mathrm{d}\Delta h(\Omega,a)}{\mathrm{d}\Omega^{2}}\approx\frac{a}{2 \left(\Gamma\rho g\right)^{1/2}}\left[\frac{4\hat{\beta}}{4+\sqrt{Bo}}-\frac{ \rho a^{2}}{2+\sqrt{Bo}}\right] \tag{5}\]
obtained from Eq. 4. If the fluid density and surface tension are known, one can compute the climbing constant \(\hat{\beta}=0.5\Psi_{1,0}+2\Psi_{2,0}\) from the slope of the data. This calculated value of \(\hat{\beta}\) can then be used in conjunction with an independent measurement of \(\Psi_{1,0}\) from a complementary method to calculate \(\Psi_{2,0}\) as
\[\Psi_{2,0}=\frac{1}{2}\hat{\beta}-\frac{1}{4}\Psi_{1,0}=-\frac{\Psi_{1,0}}{4} \left(1-\frac{2\hat{\beta}}{\Psi_{1,0}}\right). \tag{6}\]
As discussed in the Introduction, direct measurements of the axial force in a CP geometry used to measure \(N_{1}\) using Eq. 1 cannot probe very small shear rates because the normal force signal exerted by the fluid is often weaker than the lower sensitivity limits of the force transducer. Hence, measuring \(G^{\prime}(\omega)\) (with relatively high accuracy) through SAOS deformations in a concentric cylinders geometry and using the asymptotic limit in Eq. 2, i.e., \(\Psi_{1,0}\simeq\lim_{\omega\to 0}\frac{2G^{\prime}}{\omega^{2}}\) is a practical and superior solution, as will be shown later in Sec. IV.
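As a practical illustration of the intermediate data-processing step, the slope \(\mathrm{d}\Delta h/\mathrm{d}\Omega^{2}\) can be extracted with a straight-line fit through the origin; in the minimal sketch below, the hypothetical deflection data stand in for measurements digitized from the interface photographs.

```python
import numpy as np

# Hypothetical low-rotation-rate measurements digitized from interface photographs
Omega   = np.array([0.25, 0.50, 0.75, 1.00])          # rod rotation speed [rad/s]
delta_h = np.array([0.05, 0.21, 0.46, 0.83]) * 1e-3   # interface deflection [m]

# Eq. 4 predicts delta_h = slope * Omega^2 with zero intercept, so fit a
# straight line through the origin in the (Omega^2, delta_h) plane
x = Omega**2
slope = np.sum(delta_h * x) / np.sum(x**2)            # d(Delta h)/d(Omega^2) [m.s^2]
print(f"d(Delta h)/d(Omega^2) = {slope:.3e} m.s^2")
```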
Further insights into the competition between the elastic effects (the first term in the square brackets in Eq. 5, denoted Term I), which encourage rod-climbing, and the inertial effects, which encourage a decrease in the height (the second term in the square brackets in Eq. 5, denoted Term II), can be obtained from Eq. 5. These insights aid in extending the usefulness of the rod-climbing rheometer beyond the moderately viscous and elastic fluids tested decades ago [13; 14; 16]. To achieve this expansion in utility, we do not make any simplification to Eq. 5 and, unlike previous studies [13; 14; 16], retain the contribution due to inertial effects (Term II). From Eq. 5, one can conclude that weakly elastic fluids with relatively smaller values of \(\hat{\beta}\) can undergo rod-descending (i.e., Term I \(<\) Term II in Eq. 5), that is, the climbing height decreases with increasing \(\Omega\) from the initial static rise \(h_{s}(a,\alpha)\) due to the dominance of inertial effects. However, the linear relationship in \(\Delta h(\Omega,a)\) vs. \(\Omega^{2}\) in the low rotation rate limit is still valid and can be used to measure \(\hat{\beta}\) (and consequently \(\Psi_{2,0}\)) even in weakly viscoelastic fluids, e.g., dilute or semi-dilute polymer solutions. We test this hypothesis in Sec. IV. In the extreme case of a Newtonian fluid, when \(\hat{\beta}=0\), we expect
\[\frac{\mathrm{d}\Delta h(\Omega,a)}{\mathrm{d}\Omega^{2}}\approx\frac{a}{2 \left(\Gamma\rho g\right)^{1/2}}\left[-\frac{\rho a^{2}}{2+\sqrt{Bo}}\right], \tag{7}\]
which sets the Newtonian inertial rod-dipping limit on the slope of the \(\Delta h(\Omega,a)\) vs. \(\Omega^{2}\) plot (in the low \(\Omega\) limit). Any deviation of \(\mathrm{d}\Delta h/\mathrm{d}\Omega^{2}\) from Eq. 7 denotes the presence of a non-zero climbing constant \(\hat{\beta}\) and, consequently, the presence of finite normal stress differences. Eq. 5 can also be utilized to derive a "climbing condition" in terms of two dimensionless quantities: the dimensionless normal stress difference ratio \(\psi_{0}=-\Psi_{2,0}/\Psi_{1,0}\) and the inertoelastic quantity \(\rho a^{2}/\Psi_{1,0}\), both of which are independent of the flow kinematics in the problem. We defer a more detailed discussion on this climbing condition to Sec. IV.4.
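To make the competition between Term I and Term II concrete, the two contributions to the slope in Eq. 5 can be evaluated directly. The sketch below assumes the rod radius and fluid properties reported in this study; the \(\hat{\beta}\) values passed in are illustrative.

```python
import numpy as np

def slope_eq5(beta_hat, rho=873.1, Gamma=29.7e-3, a=9.525e-3 / 2, g=9.81):
    """Theoretical slope d(Delta h)/d(Omega^2) from Eq. 5 [m.s^2]."""
    Bo = rho * g * a**2 / Gamma                   # Bond number
    term_I = 4 * beta_hat / (4 + np.sqrt(Bo))     # elastic contribution (climbing)
    term_II = rho * a**2 / (2 + np.sqrt(Bo))      # inertial contribution (descending)
    return a / (2 * np.sqrt(Gamma * rho * g)) * (term_I - term_II)

# Newtonian limit (Eq. 7): beta_hat = 0 gives a negative slope (rod-dipping)
print(f"Newtonian limit: {slope_eq5(0.0):.2e} m.s^2")
# A strongly elastic fluid (beta_hat ~ 10 Pa.s^2) produces a positive slope
print(f"beta_hat = 10 Pa.s^2: {slope_eq5(10.0):.2e} m.s^2")
```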
For this study, we use polymeric solutions of polyisobutylene (PIB) (molecular weight \(\approx 10^{6}\) g/mol) dissolved in a paraffinic oil (Lubrizol Inc.). We perform all our measurements at a constant temperature \(T=20\) \({}^{\circ}\)C. The Newtonian solvent oil has a steady state viscosity \(\eta_{s}=18.07\) mPa.s at 20 \({}^{\circ}\)C. The polymer intrinsic viscosity is measured to be \([\eta]=3.69\) dL/g, and this can be used to estimate the critical overlap concentration of the polymer solute C\({}^{*}\simeq 0.77/[\eta]=0.23\) wt. % [17]. The solutions were all measured to have a constant density \(\rho=873.1\) kg/m\({}^{3}\) and surface tension \(\Gamma=29.7\) mN/m. The material properties of the test fluids used are summarized in Table 1. We vary the dissolved concentration of polymer in the solution to change the viscoelastic properties. We work with three semi-dilute solutions (C \(>\) C\({}^{*}\)): 3 wt. %, 2 wt. %, and 1 wt. %, and one close to C\({}^{*}\): 0.3 wt. %. These polymeric solutions are all shear-thinning, with their viscoelasticity decreasing at lower concentrations, as shown in the next section.
## IV Results and Discussion
The rotating rod rheometry protocol involves
1. determining the value of \(\Psi_{1,0}\) from SAOS data obtained over a range of temperatures to construct a time-temperature superposition (tTS) master curve and then using the asymptotic result obtained from simple fluid theory \(\Psi_{1,0}=\lim_{\omega\to 0}2G^{\prime}/\omega^{2}\) (Sec. IV.1),
2. calculating the climbing constant \(\hat{\beta}=0.5\Psi_{1,0}+2\Psi_{2,0}\) by measuring the surface deflection and determining the slope \(\mathrm{d}\Delta h/\mathrm{d}\Omega^{2}\) of perturbations to the static interface \(\Delta h(\Omega,a)\) vs. \(\Omega^{2}\) for small \(\Omega\). This value is then equated to the theoretical result (Eq. 5) (Sec. IV.2), and
3. calculating the second normal stress coefficient using the relationship \(\Psi_{2,0}=(\hat{\beta}-0.5\Psi_{1,0})/2\) (Sec. IV.3).
We first present the results of applying this protocol to the four PIB-based fluids described above. In addition, we derive a modified "climbing condition" (Sec. IV.4) and present observations which support the use of the rotating rod experiment to probe \(\Psi_{2,0}\) even in fluids that exhibit a local dip in the free surface that we define as "rod-descending" (Sec. IV.5).
### \(\Psi_{1,0}\) measurements using small amplitude oscillatory shear
Fig. 3 shows the results obtained from small amplitude oscillatory shear (SAOS) flow experiments using a concentric cylinder geometry for the 3 wt. % and 1 wt. % solutions. We perform time-temperature superposition (tTS) to construct a master curve of the storage and loss moduli as functions of the reduced oscillation frequency \(\omega_{r}=a_{T}\omega\), denoted \(G^{\prime}(\omega_{r})\) and \(G^{\prime\prime}(\omega_{r})\), respectively. This allows us to extend the range of measurements to sufficiently low frequencies to observe the expected terminal scaling. Here \(\omega\) is the oscillatory frequency, and \(a_{T}\) is the temperature-dependent horizontal shift factor. For the limited range of temperatures 10 \({}^{\circ}\)C \(\leq T\leq 80\) \({}^{\circ}\)C studied here, we find the vertical shift factor \(b_{T}\approx 1\) for these PIB solutions, and there is no need to shift the \(G^{\prime}(\omega_{r})\) and \(G^{\prime\prime}(\omega_{r})\) data vertically. In addition, we observe that even though the solutions are in the semi-dilute regime, the generalized Rouse-Zimm model expressed in the form [18]
\[G^{\prime}(\omega_{r})=G_{c}\frac{\omega_{r}\tau_{Z}\sin\left[\chi\,\mathrm{atan}(\omega_{r}\tau_{Z})\right]}{\left[1+(\omega_{r}\tau_{Z})^{2}\right]^{\chi/2}}, \tag{8a}\] \[G^{\prime\prime}(\omega_{r})=G_{c}\frac{\omega_{r}\tau_{Z}\cos\left[\chi\,\mathrm{atan}(\omega_{r}\tau_{Z})\right]}{\left[1+(\omega_{r}\tau_{Z})^{2}\right]^{\chi/2}} \tag{8b}\]
does a good job of fitting the SAOS data, as shown in Fig. 3. Here the fitting parameters are the characteristic modulus \(G_{c}\), the Zimm relaxation time \(\tau_{Z}\), and \(\chi=(1-1/3\nu)\) with \(\nu\) being the solvent quality exponent. The best-fit parameter values for the generalized Rouse-Zimm model are tabulated in Table 2.
| C (wt. %) | \(\rho\) (kg/m\({}^{3}\)) | \(\Gamma\) (mN/m) | \(\eta_{s}\) (mPa.s) | C\({}^{*}\) (wt. %) |
| --- | --- | --- | --- | --- |
| 0.30 – 3.00 | 873.1 | 29.7 | 18.07 | 0.23 |

Table 1: PIB polymer solution properties.
We can now use Eq. 2 to evaluate the expected value of the first normal stress difference coefficient \(\Psi_{1,0}\) in the zero shear limit. Using the generalized Rouse-Zimm model (Eq. 8a) and the expectation from simple fluid theory (Eq. 2) we obtain,
\[\Psi_{1,0}=2G_{c}\,\tau_{Z}^{2}\,\chi, \tag{9}\]
which allows us to calculate \(\Psi_{1,0}\) from the linear viscoelastic master curve. The calculated values for all the solutions used in this study are provided in Table 2. However, it should be noted that the asymptotic value of \(\Psi_{1,0}\), as defined in Eq. 2, can also be calculated solely from a master curve of the SAOS data by a careful regression analysis of the empirical data. This is especially useful if a statistically good fit with available models, such as the Zimm model, is not possible, provided there is a discernible quadratic regime in which \(G^{\prime}(\omega_{r})\sim\omega_{r}^{2}\) at low frequencies. Observing a clear quadratic scaling from the SAOS data at a fixed temperature can often be
| C (wt. %) | \(G_{c}\) (Pa) | \(\tau_{Z}\) (s) | \(\chi\) | \(\nu\) | \(\Psi_{1,0}\) (Pa.s\({}^{2}\)) |
| --- | --- | --- | --- | --- | --- |
| 3.00 | 2.88 | 11.62 | 0.579 | 0.791 | 534.0 |
| 2.00 | 1.03 | 7.01 | 0.514 | 0.686 | 51.9 |
| 1.00 | 0.39 | 1.43 | 0.337 | 0.503 | 0.527 |
| 0.30 | 0.20 | 0.52 | 0.189 | 0.411 | 0.021 |

Table 2: Generalized Rouse-Zimm model fit parameters for the linear viscoelastic properties obtained from master curves of the small amplitude oscillatory shear (SAOS) flow data for various PIB solutions, as well as the first normal stress coefficient obtained from Eq. 9.
Figure 3: Small amplitude oscillatory shear (SAOS) measurements at a strain amplitude \(\gamma_{0}=1\) % of a) 3 wt. % and b) 1 wt. % PIB solutions used in this study. Time-temperature superposition using a reference temperature \(T_{0}=20\)\({}^{\circ}\)C is employed to construct a master curve with a lateral shift factor \(a_{T}\) giving the reduced frequency \(\omega_{r}=a_{T}\omega\). The generalized Rouse-Zimm model (Eq. 8) does an excellent job of fitting the SAOS data, and the corresponding fits are shown using black lines. The fitting parameter values for the generalized Rouse-Zimm model are tabulated in Table 2.
difficult for certain weakly elastic fluids, as sufficiently low frequencies might not be accessible due to rheometer sensitivity limits. Hence, performing time-temperature superposition can be useful, as well as using a constitutive model, so that SAOS measurements can be robustly extrapolated into the region where \(\tau_{Z}\omega_{r}\ll 1\) to calculate \(\Psi_{1,0}\) accurately. Results in Table 2 show that we can expect \(\Psi_{1,0}\) to vary over four orders of magnitude by diluting the solutions by a factor of 10. Thus, varying the PIB concentration is expected to have an equivalently dramatic impact on the rod-climbing; indeed, we observe the same level of sensitivity, as discussed in the following subsection.
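The Rouse-Zimm fits of Fig. 3 and the subsequent evaluation of Eq. 9 can be reproduced with a standard nonlinear least-squares routine. A minimal sketch follows, in which synthetic data generated from the 1 wt. % parameters of Table 2 (plus noise) stand in for a measured master curve:

```python
import numpy as np
from scipy.optimize import curve_fit

def G_prime(w, Gc, tauZ, chi):
    """Storage modulus of the generalized Rouse-Zimm model, Eq. 8a."""
    x = w * tauZ
    return Gc * x * np.sin(chi * np.arctan(x)) / (1 + x**2)**(chi / 2)

# Synthetic "master curve" generated from the 1 wt.% parameters of Table 2,
# with 2% multiplicative noise, standing in for measured tTS-shifted data
rng = np.random.default_rng(0)
w = np.logspace(-2, 2, 40)
data = G_prime(w, 0.39, 1.43, 0.337) * (1 + 0.02 * rng.standard_normal(w.size))

(Gc, tauZ, chi), _ = curve_fit(G_prime, w, data, p0=(1.0, 1.0, 0.5))
Psi_10 = 2 * Gc * tauZ**2 * chi   # Eq. 9
print(f"Gc = {Gc:.2f} Pa, tauZ = {tauZ:.2f} s, chi = {chi:.3f} "
      f"-> Psi_1,0 = {Psi_10:.3f} Pa.s^2")
```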
### Measurements of the climbing constant \(\hat{\beta}\) using rod-climbing observations
We turn our attention to the rod-climbing experiments in this subsection. The experimental data and representative photographs of the 3 wt. % solution undergoing rod-climbing are presented in Fig. 4. When \(\Omega=0\), we observe a finite climbing height due to meniscus wetting as shown in Fig. 4a-i. This is the static climbing height \(h_{s}(a,\alpha)\) introduced in Eq. 4. This meniscus height and shape can be adequately described by solving the Young-Laplace equation for the interface with the knowledge of the contact angle determined from the photograph, but this will not be pursued here as our focus is on measuring the _change_ in the climbing height \(\Delta h(\Omega,a)\) due to rotation of the rod. As we rotate the rod with a rotation speed \(\Omega\), \(\Delta h(\Omega,a)\) is expected to initially be proportional to \(\Omega^{2}\) at low rotation rates, as shown in Fig. 4b. However, it is difficult to predict _a priori_ the maximum allowable rotational speed of the rod (denoted \(\Omega_{max}\)) above which the experimental observations deviate from the quasi-linear relationship given by Eq. 5. In practice, one can estimate \(\Omega_{max}\) from the experimental data _a posteriori_ as shown in Fig. 4c with the condition that the modified Froude number \(Fr=\Omega^{2}L/g<1\). Here \(L\) is a characteristic length, which can be taken to be \(L\approx\sqrt{g\,\mathrm{d}\Delta h/\mathrm{d}\Omega^{2}}\) [10] \(\approx\frac{1}{2}\sqrt{a\hat{\beta}\sqrt{g/(\rho\Gamma)}}\) using Eq. 5. From the rod-climbing observations, using a conservative condition that \(Fr<1\), we find \(\Omega_{max}\approx 2\), 6, 15, and 30 rad/s for the 3 wt. %, 2 wt. %, 1 wt. %, and 0.3 wt. % PIB solutions, respectively.
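A minimal sketch of this Froude-number estimate (the slope value is illustrative, not a measurement from this study):

```python
import numpy as np

g = 9.81               # gravitational acceleration [m/s^2]
slope = 9.0e-4         # illustrative measured d(Delta h)/d(Omega^2) [m.s^2]

L = np.sqrt(g * slope)            # characteristic length [m]
Omega_max = np.sqrt(g / L)        # Fr = Omega^2 * L / g = 1
print(f"L = {L*1e3:.0f} mm, Omega_max = {Omega_max:.1f} rad/s")
```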
The observed values of \(\Delta h(\Omega,a)\) lie on a straight line when plotted against \(\Omega^{2}\) at values \(\Omega<\Omega_{max}\) as shown in Fig. 4b, c. The photographs of the interface shape presented in Fig. 4a-i, ii reveal that the interface shape \(h(\Omega,r,\alpha)\) is concave for \(\Omega\lesssim\Omega_{max}\). This shows that in the low \(\Omega\) regime, increasing the rotation rate results in small perturbations to the static interface shape, which increases the interface height linearly with \(\Omega^{2}\) as predicted by Eq. 5. These perturbations to the interface shape \(\Delta h(\Omega,a)\) are positive if elastic
Figure 4: a) Still images illustrating rod-climbing in the 3 wt. % PIB solution with increasing rod rotation speeds i) \(\Omega=0\) rad/s, ii) \(\Omega=1\) rad/s, iii) \(\Omega=5\) rad/s, and iv) \(\Omega=10\) rad/s. The change in the interface height at the rotating rod (\(\Delta h(\Omega,a)\)) compared to the static rise \(h_{s}(a)\) for the 3 wt. % PIB solution: b) In the low \(\Omega\) regime (\(\Omega<1\) rad/s), \(\Delta h\) varies linearly with \(\Omega^{2}\) as predicted by theory (Eq. 4). The slope of the curve in this regime is used to calculate the climbing constant \(\hat{\beta}\). c) At higher \(\Omega\), the higher order terms in the perturbation expansion cannot be neglected and lead to secondary flows in the fluid bulge that result in the deviation from linearity in a plot of \(\Delta h\) vs. \(\Omega^{2}\).
effects dominate over inertial effects, i.e., if Term I \(>\) Term II in Eq. 5. In other words, we should observe rod-_climbing_, which is true for concentrated solutions with C = 3 wt. %, 2 wt. %, and 1 wt. % as shown in Fig. 5. On the other hand, the perturbations to the static interface shape \(\Delta h(\Omega,a)\) are negative if elastic effects are weaker than inertial effects, i.e., Term I \(<\) Term II in Eq. 5. In other words, we should observe rod-_descending_, which is true for the semi-dilute solution with C = 0.3 wt. % as shown in Fig. 5. Thus, from Eq. 5, it is evident that we can predict whether a given complex fluid will undergo rod-climbing or descending if its material functions are known. We will return to this discussion in Sec. IV.4, where we analyze Eq. 5 in further detail and derive a "climbing condition" to predict whether the interface of a given viscoelastic fluid will climb or descend a thin rotating rod immersed in it.
For larger values of rotation speeds \(\Omega^{2}>\Omega^{2}_{max}\) when rod-climbing is observed (Term I \(>\) Term II in Eq. 5), the height increment \(\Delta h(\Omega,a)\) increases non-linearly with \(\Omega^{2}\) and the asymptotic relationship presented in Eq. 5 is no longer valid. The interface shape changes from concave to convex in the large \(\Omega\) regime as depicted in Fig. 4a-iii. Consequently, this transition from a concave to a convex interface can also be used as an _in situ_ condition to determine \(\Omega_{max}\) more accurately (in conjunction with the criterion involving the modified Froude number), and this experimentally motivated approach is utilized in this study. With further increase in the rod rotation rate beyond \(\Omega_{max}\), the interface shape assumes a rotating blob-like shape, which emerges distinctively from the larger pool of stationary or slowly rotating fluid at a point with a slope discontinuity as shown in Fig. 4a-iv. Eventually, at very high rotation speeds, this bolus of fluid becomes unstable with unsteady secondary motions resulting in a band of fluid rising up and down the rod in a wave-like manner. Further increases in \(\Omega\) completely disrupt the climbing fluid blob into smaller pendant drops that are thrown radially outwards from the rod.
From our rod-climbing experiments as well as previous studies [10; 12; 13], we can conclude that there is an accessible range of small rotation rates \(\Omega<\Omega_{max}\approx\sqrt{g/L}\), with \(L\approx\sqrt{g\,\mathrm{d}\Delta h/\mathrm{d}\Omega^{2}}\), such that the second-order fluid approximation is valid and the climbing fluid interface height scales linearly with \(\Omega^{2}\). Hence, we can equate the slope \((\mathrm{d}\Delta h/\mathrm{d}\Omega^{2})_{exp}\) determined experimentally in the \(\Omega<\Omega_{max}\) regime to the predictions of the second-order fluid theory in Eq. 5 to evaluate the climbing constant
\[\hat{\beta}_{exp}=\frac{4+\sqrt{Bo}}{4}\left[\left(\frac{\mathrm{d}\Delta h} {\mathrm{d}\Omega^{2}}\right)_{exp}\frac{2\left(\Gamma\rho g\right)^{1/2}}{a} +\frac{\rho a^{2}}{2+\sqrt{Bo}}\right]. \tag{10}\]
The values of \(\hat{\beta}_{exp}\) thus obtained from measurements for the various PIB solutions are presented in Table 3. We observe a significant reduction in the climbing constant \(\hat{\beta}\), which varies over four orders of magnitude as the PIB concentration is diluted from 3 wt. % to 0.3 wt. %, as anticipated in Sec. IV.1 from the similarly dramatic four-orders-of-magnitude reduction in the \(\Psi_{1,0}\) values.
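Eq. 10 is straightforward to evaluate once the slope and the fluid properties of Table 1 are known; a minimal sketch (the input slope is illustrative):

```python
import numpy as np

def beta_hat_exp(slope, rho=873.1, Gamma=29.7e-3, a=9.525e-3 / 2, g=9.81):
    """Climbing constant from the measured slope d(Delta h)/d(Omega^2), Eq. 10 [Pa.s^2]."""
    Bo = rho * g * a**2 / Gamma
    return (4 + np.sqrt(Bo)) / 4 * (
        slope * 2 * np.sqrt(Gamma * rho * g) / a + rho * a**2 / (2 + np.sqrt(Bo))
    )

# A positive slope of ~9e-4 m.s^2 maps back to beta_hat ~ 10 Pa.s^2; note that
# Eq. 10 also returns a (small) positive beta_hat for mildly negative slopes,
# which is what permits measurements on rod-descending fluids
print(f"beta_hat = {beta_hat_exp(9.0e-4):.1f} Pa.s^2")
```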
### Reconciling rod-climbing measurements with SAOS to determine \(\Psi_{2,0}\)
Reptation theory predicts that the normal stress difference ratio \(\psi_{0}=-\Psi_{2,0}/\Psi_{1,0}\) for semi-dilute and concentrated entangled solutions has the value \(\psi_{0}=2/7\) or 1/7 depending on whether the independent alignment assumption is made or not [19]. The precise
Figure 5: Change in the interface height at the rotating rod (\(\Delta h(\Omega,a)\)) compared to the static rise \(h_{s}(a)\) for various PIB solutions utilized in this study. The 0.3 wt. % solution does not satisfy the condition for climbing (Sec. IV.4, Fig. 7); hence, the interface height decreases with the rotation rate. However, the interface deflection in this rod-descending regime still varies linearly with \(\Omega^{2}\) as predicted by the domain perturbation solution for a second-order fluid in the low rotational speed limit. The dashed line depicts the lower bound on rod-descending expected for a purely Newtonian fluid without any elasticity, which arises solely due to inertial effects. The \(\Delta h\) vs. \(\Omega^{2}\) curves of any fluid with finite non-zero normal stresses will lie above this lower bound.
value of \(\psi_{0}\) critically affects the level of rod climbing expected. Specifically, if \(\psi_{0}=1/4\), then from Eq. 6, it is clear that the climbing constant is \(\hat{\beta}=0\).
However, in dilute solutions without entanglement effects, one can expect \(\psi_{0}=0\), as has been confirmed for some Boger fluids experimentally [13; 20], and thus conclude that rod-climbing is driven primarily by the first normal stress difference. This becomes clear if we rearrange Eq. 6 as
\[\Psi_{1,0}=\frac{2\hat{\beta}}{1+4\Psi_{2,0}/\Psi_{1,0}}=\frac{2\hat{\beta}}{1- 4\psi_{0}} \tag{11}\]
Then by substituting \(\psi_{0}=0\) (or equivalently \(\Psi_{2,0}=0\)) in Eq. 11, one obtains \(\Psi_{1,0}=2\hat{\beta}\). In this limit, from the rod-climbing measurements of \(\hat{\beta}_{exp}\), one could calculate the expected value of the first normal stress difference coefficient to be \(\Psi_{1,0}=2\hat{\beta}_{exp}\). This _a priori_ estimate of \(\Psi_{1,0}=2\hat{\beta}_{exp}\) can be interpreted as the _lower bound_ on \(\Psi_{1,0}\) obtained from rod-climbing measurements alone because any finite positive \(\psi_{0}\) value would result in a larger computed value of \(\Psi_{1,0}\). In other words, in the absence of second normal stress effects in the fluid, the lower bound value of \(\Psi_{1,0}=2\hat{\beta}_{exp}\) obtained above should be enough to achieve the rod-climbing height measured in the experiments. However, the presence of a non-zero second normal stress difference diminishes the rod-climbing abilities of the fluid, as a result of which a higher value of \(\Psi_{1,0}\) is required to achieve the climbing height given by the measured climbing constant \(\hat{\beta}_{exp}\).
This naive _a priori_ estimate of \(\Psi_{1,0}=2\hat{\beta}_{exp}\) for the various PIB solutions utilized in this study is represented in Fig. 6 by the dashed lines. If \(\Psi_{2,0}\) is indeed zero (as has been observed for some dilute Boger fluids experimentally [13; 20]), we would expect these dashed lines (i.e., the lower bound prediction of \(\Psi_{1,0}\) extracted from rod-climbing experiments) to coincide exactly with the dashed-dotted lines in Fig. 6, which depict the actual values of \(\Psi_{1,0}\) obtained in Sec. IV.1 from the asymptotic quadratic scaling of the SAOS data. However, there is a significant offset in these two _independent_ experimental estimates; hence, the initial assumption that \(\Psi_{2,0}=0\) is incorrect. A finite non-zero \(\Psi_{2,0}\) affects rod-climbing, and in particular, negative values of \(\Psi_{2,0}\) increase the value of \(\Psi_{1,0}\) that is consistent with a given experimental observation \(\hat{\beta}_{exp}\) (see Eq. 11). This additional contribution can be obtained by solving Eq. 6 with the independent knowledge of (i) \(\Psi_{1,0}\) from the SAOS measurements in Sec. IV.1, and (ii) \(\hat{\beta}_{exp}\) from the rod-climbing measurements in Sec. IV.2. The values of \(\Psi_{2,0}\) thus obtained for various PIB solutions are summarized in Table 3.
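The final step of the protocol, Eq. 6, then reduces to simple arithmetic; the sketch below reproduces the \(\Psi_{2,0}\) and \(\psi_{0}\) entries of Table 3 from the tabulated \(\hat{\beta}_{exp}\) and \(\Psi_{1,0}\) values:

```python
# Inputs from this study: (beta_hat_exp, Psi_10) in Pa.s^2 for each PIB solution (Table 3)
fluids = {
    "3 wt.%":   (10.03, 534.0),
    "2 wt.%":   (0.710, 51.9),
    "1 wt.%":   (0.018, 0.527),
    "0.3 wt.%": (0.002, 0.021),
}

for name, (beta_hat, Psi_10) in fluids.items():
    Psi_20 = 0.5 * beta_hat - 0.25 * Psi_10   # Eq. 6
    psi_0 = -Psi_20 / Psi_10                  # normal stress difference ratio
    print(f"{name}: Psi_2,0 = {Psi_20:.3g} Pa.s^2, psi_0 = {psi_0:.3f}")
```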
As an additional independent check, we also directly measure the material function \(N_{1}(\dot{\gamma})\) using a 40 mm \(2^{\circ}\) cone-and-plate (CP) geometry, and this data is also presented in Fig. 6. As discussed in the Introduction, due to the lower sensitivity limit of \(F_{CP,min}\approx 0.001\) N (0.1 gm-force) on the axial thrust measurement, \(N_{1}(\dot{\gamma})\) values \(\lesssim 1.6\) Pa cannot be measured using a 40 mm \(2^{\circ}\) CP geometry. This limit is shown by a horizontal dotted line in Fig. 6. Thus, Fig. 6 pictorially illustrates that the asymptotic second-order regime in which \(N_{1}\left(\dot{\gamma}\right)\simeq\Psi_{1,0}\dot{\gamma}^{2}\) cannot be directly accessed using axial force measurements with this CP geometry due to the lower sensitivity limits of the normal force transducer. Larger plates, of course, lower this bound, as is evident by considering Eq. 1, but the largest available plates (60 mm diameter) only result in lowering the dotted line by a factor of \((60/40)^{2}=2.25\).
From Table 3, we observe that the values of \(\psi_{0}\) obtained for all the PIB solutions investigated here lie between the two limiting values 2/7 (\(\approx 0.285\)) and 1/7 (\(\approx 0.143\)) predicted by reptation theory [19] (with and without the independent alignment approximation, respectively). Very careful measurements with distributed pressure measurements across a cone and plate have shown that the typical \(\psi_{0}\) values of semi-dilute and concentrated entangled polystyrene solutions are similar and close to the reptation prediction of 2/7 [21]. A value \(1/7<\psi_{0}<2/7\) is typical for semi-dilute polystyrene solutions [21].
The exact limiting condition required at low shear rates to observe rod-climbing can be derived analytically by neglecting the inertia term in Eq. 5 and rearranging to show that for rod climbing to be observed, we require [14; 22; 23]
\[\hat{\beta}>0\implies\psi_{0}<0.25. \tag{12}\]
Hence, if the independent alignment approximation is exactly obeyed so that \(\psi_{0}\approx 0.285\), rod-climbing will not be observed and vice-versa. Prima facie, this seems like a serious limitation on the utility of rod-climbing as a technique for measuring \(\Psi_{2,0}\). Furthermore, for the least viscoelastic 0.3 wt. % PIB solution, our independent measurements of rod-climbing, the SAOS tTS master curve, and \(N_{1}(\dot{\gamma})\) all suggest \(\psi_{0}=0.205\), a value which satisfies the asymptotic rod-climbing condition in Eq. 12; however, Fig. 5 reveals that
| C (wt. %) | \(\hat{\beta}_{exp}\) (Pa.s\({}^{2}\)) | \(\psi_{0}=-\Psi_{2,0}/\Psi_{1,0}\) | \(\Psi_{1,0}\) (Pa.s\({}^{2}\)) | \(\Psi_{2,0}\) (Pa.s\({}^{2}\)) |
| --- | --- | --- | --- | --- |
| 3.00 | 10.03 | 0.241 | 534.0 | \(-129.0\) |
| 2.00 | 0.710 | 0.243 | 51.9 | \(-12.6\) |
| 1.00 | 0.018 | 0.233 | 0.527 | \(-0.123\) |
| 0.30 | 0.002 | 0.205 | 0.021 | \(-0.004\) |

Table 3: Measurements of the experimental climbing constant \(\hat{\beta}_{exp}\) from rod-climbing rheometry, and of \(\Psi_{2,0}\) obtained by combining rod-climbing rheometry with the SAOS measurements from a conventional rheometer (TA Instruments ARES-G2), for the various PIB polymer solutions utilized in this study.
the 0.3 wt. % PIB solution undergoes rod-descending instead of rod-climbing. These experimental observations motivate a more complete understanding of Eq. 5, and this is considered in the following subsection.
### The climbing condition
Here we modify the rod-climbing condition in Eq. 12 by accounting for the combined effects of inertia and elasticity. In doing so, we extend the utility of rod-climbing experiments for measuring \(\Psi_{2,0}\) to cases where rod-descending is observed. The first step is to return our attention to Eq. 5, which predicts the small perturbations in the interface height with increasing \(\Omega^{2}\) in the small rotation speed limit. The first term in Eq. 5, \(4\hat{\beta}/(4+\sqrt{Bo})\) (Term I), suggests that positive perturbations to \(h_{s}(a,\alpha)\) will be observed if the fluid has significant elasticity, i.e., \(\hat{\beta}>0\). On the other hand, the second term in Eq. 5, \(\rho a^{2}/(2+\sqrt{Bo})\) (Term II), suggests that negative perturbations to \(h_{s}(a,\alpha)\) arising from fluid inertial effects will be observed. As a result, the fluid interface may climb or descend a rotating rod depending on whether Term I \(>\) Term II or vice versa. From the climbing condition, \(\mathrm{d}\Delta h(\Omega,a)/\mathrm{d}\Omega^{2}>0\), after
Figure 6: Reconciling rod-climbing measurements of normal stress differences with conventional rheometry for: a) the 3 wt. % PIB solution, and b) the 1 wt. % PIB solution. \(\Psi_{1,0}\) can be estimated from the SAOS data using the simple fluid asymptotic theory, which gives \(\Psi_{1,0}=\lim_{\omega\to 0}2G^{\prime}/\omega^{2}\) (shown by dashed-dotted lines). Another estimate for the same property can be obtained from the rod-climbing measurements by first assuming \(\Psi_{2,0}=0\) Pa.s\({}^{2}\), which gives \(\Psi_{1,0}=2\hat{\beta}\) (shown by dashed lines). These two estimates must match exactly in the absence of a second normal stress difference in the fluid. However, a finite non-zero \(\Psi_{2,0}\) is present, as indicated by the significant separation between the two estimated curves. Thus, the rod-climbing rheometer measurements for \(\hat{\beta}_{exp}\) in conjunction with the SAOS master curve data can be used to estimate \(\Psi_{2,0}=\frac{1}{2}\hat{\beta}_{exp}-\frac{1}{4}\Psi_{1,0}\) as tabulated in Table 3. Normal force measurements of \(N_{1}(\dot{\gamma})\) from a 40 mm 2\({}^{\circ}\) CP geometry are also shown with filled squares. The normal force measurements cannot access the anticipated second-order scaling of \(N_{1}\) at low shear rates due to limits on the sensitivity of the axial force transducer.
rearranging the various terms in Eq. 5 we can obtain the following condition for rod-climbing to be observed:
\[\psi_{0}<\frac{1}{4}\left(1-\frac{4+\sqrt{Bo}}{2(2+\sqrt{Bo})}\frac{\rho a^{2}}{ \Psi_{1,0}}\right). \tag{13}\]
This rod climbing constraint incorporates the competition between inertial and elastic effects, as depicted in Fig. 7. The ratio \(\rho a^{2}/\Psi_{1,0}\) represents the relative contributions of fluid inertia and elasticity. Recognizing that for a second order fluid, we can write the relaxation time as \(\tau=\Psi_{1,0}/(2\eta_{0})\), we can rewrite this ratio in terms of a Deborah number \(De=\tau\Omega\) and a Reynolds number \(Re=\rho\Omega a^{2}/\eta_{0}\) or alternatively in terms of the elasticity number \(El=De/Re=(\tau\Omega)/(\rho\Omega a^{2}/\eta_{0})=\Psi_{1,0}/(2\rho a^{2})\). The curves in Fig. 7 show the condition of Eq. 13 for three different values of the Bond number \(Bo=\rho ga^{2}/\Gamma\), including the value \(Bo=4.7\) appropriate for our PIB solutions. Because of the functional form of the fractional term involving \(Bo\) in Eq. 13, the boundary between rod-climbing and rod-descending is only weakly sensitive to gravitational effects and is predominantly controlled by inertial effects. We also show the actual values of \(\psi_{0}\) and \(\rho a^{2}/\Psi_{1,0}\) determined experimentally for the various PIB solutions studied here. The original rod-climbing condition in Eq. 12 is recovered when inertia effects are negligible compared to elasticity effects, i.e., \(\rho a^{2}/\Psi_{1,0}\ll 1\).
Fig. 7 shows that the values of the material functions determined experimentally for the 3 wt. %, 2 wt. %, and 1 wt. % solutions satisfy the rod-climbing condition and should exhibit positive perturbations to the static interface shape under low finite rod-rotation speeds. This is indeed true as depicted in Fig. 5 by the increase in \(\Delta h\) with \(\Omega^{2}\) for these three solutions. On the other hand, the low elasticity of the 0.3 wt. % solution satisfies the rod-descending condition as shown in Fig. 7 and should exhibit negative perturbations to the static interface shape. This is again found to be true in Fig. 5, which shows a decrease in \(\Delta h\) with increasing \(\Omega^{2}\) for the 0.3 wt. % solution. Also, for the 0.3 wt. % solution, the inertoelastic quantity \(\rho a^{2}/\Psi_{1,0}>1\), indicating that inertial effects cannot be ignored. As a result, using the original simplified climbing condition given in Eq. 12 fails to predict the observed rod-descending as it was derived by ignoring inertial effects [23].
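For illustration, the climbing condition of Eq. 13 can be applied programmatically to the fluids of Table 3; the sketch below assumes the rod and fluid parameters of Sec. III and reproduces the classification shown in Fig. 7:

```python
import numpy as np

def climbing_threshold(Psi_10, rho=873.1, Gamma=29.7e-3, a=9.525e-3 / 2, g=9.81):
    """Right-hand side of Eq. 13: the fluid climbs if psi_0 is below this value."""
    Bo = rho * g * a**2 / Gamma
    return 0.25 * (1 - (4 + np.sqrt(Bo)) / (2 * (2 + np.sqrt(Bo)))
                   * rho * a**2 / Psi_10)

# (psi_0, Psi_10) pairs taken from Table 3
for name, psi_0, Psi_10 in [("3 wt.%", 0.241, 534.0), ("2 wt.%", 0.243, 51.9),
                            ("1 wt.%", 0.233, 0.527), ("0.3 wt.%", 0.205, 0.021)]:
    verdict = "climbs" if psi_0 < climbing_threshold(Psi_10) else "descends"
    print(f"{name}: threshold = {climbing_threshold(Psi_10):.3f} -> {verdict}")
```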
As indicated in Fig. 7, Eq. 13 predicts whether the small perturbations \(\Delta h(\Omega,a)\) will be positive or negative in the small \(\Omega\) limit. However, it should be noted that even if the criterion of Eq. 13 predicts rod-descending, the rotating rod experiments can still be useful in measuring \(\Psi_{2,0}\) as long as the fluid wets the rotating shaft, i.e., the contact angle \(\alpha<90^{\circ}\). The negative free surface perturbations \(\Delta h(\Omega,a)\) can be readily calculated from a sequence of rod-climbing photographs of the interface shape for a wetting fluid. Once \(\Delta h(\Omega,a)\) vs. \(\Omega^{2}\) data is available, its slope in the low \(\Omega^{2}\) regime can be equated to Eq. 5 to calculate \(\hat{\beta}_{exp}\) irrespective of its sign. This is especially useful in weakly elastic fluids when inertial effects compete with elastic effects, e.g., for the 0.3 wt. % PIB solution, or if the normal stress ratio exceeds \(\psi_{0}>0.25\), e.g., for fluids following the predictions of reptation theory with the independent alignment approximation [21]. The analysis presented in this section and our measurements for the 0.3 wt. % solution, which undergoes rod-descending, extend the validity of rod-climbing rheometry in principle to a much wider range of complex fluids, provided special care is taken in selecting a rigid rod constructed from a solid material that the fluid wets [24].
Finally, we note that a special (degenerate) case arises when \(\hat{\beta}=0\), i.e., we can either have \(\psi_{0}=0.25\) or an inelastic fluid with \(\Psi_{2,0}=\Psi_{1,0}=0\). In this case, if the fluid is viscoelastic, a finite value of \(\Psi_{1,0}\) can first be measured independently as discussed in
Figure 7: The condition for observing rod-climbing vs. rod-descending is plotted for different Bond numbers, \(Bo\), as a competition between the normal stress ratio \(\psi_{0}\) and the dimensionless inertoelastic parameter \(\rho a^{2}/\Psi_{1,0}=1/(2El)\) with \(El\) being the elasticity number. The curves show the condition of Eq. 13 for three different values of the Bond number \(Bo=\rho ga^{2}/\Gamma\) including the value \(Bo=4.7\) expected for the PIB solutions used in this study. The boundary line separates rod-climbing from rod-descending. The 3 wt. %, 2 wt. %, and 1 wt. % solutions satisfy the rod-climbing condition and climb the rotating rod, while the 0.3 wt. % solution does not satisfy the rod-climbing condition; hence, its interface descends close to the rotating rod (See Fig. 5).
Sec. IV.1 using normal force measurements, or from the asymptotic scaling of a viscoelastic master curve for \(G^{\prime}(\omega_{r})\). If no rod-climbing is observed, we can conclude that \(\Psi_{2,0}\approx-0.25\Psi_{1,0}\) in such a fluid. The interface shape will be unperturbed at low rod rotation speeds provided \(\rho a^{2}/\Psi_{1,0}\ll 1\) (i.e., weak inertial effects). If the inertial effects are strong, i.e., \(\rho a^{2}/\Psi_{1,0}>1\), then this fluid will exhibit rod-descending at all rod rotation speeds.
If the independent measurement of the first normal stress difference reveals that the fluid is essentially inelastic, i.e., \(\Psi_{1,0}\approx 0\), then a fluid with \(\hat{\beta}=0\) will simply exhibit Newtonian rod-descending at sufficiently high rotation rates as shown by the dashed pink line in Fig. 5. In this case, we can conclude that \(\Psi_{2,0}\simeq\Psi_{1,0}=0\).
The observations discussed in this section imply that calling this technique "rod-climbing rheometry" might be misleading, as empirical measurements are still useful even when the fluid experiences rod-descending. Hence, a more general description, "rotating rod rheometry," is more suitable than rod-climbing rheometry, as the protocol presented in this study is readily applied irrespective of whether a fluid undergoes rod-climbing or rod-descending.
### Varying fluid elasticity and viscosity
To further corroborate the accuracy of the modified climbing condition presented in Eq. 13 and support the ideas discussed in the previous subsection, we artificially modify the elasticity of the 0.3 wt. % PIB solution to change the relative balance of elastic and inertial effects in the rotating rod experiment. From bead-spring theory for dilute solutions, we know that the elasticity in a polymeric solution scales as \(\Psi_{1,0}\simeq 2\eta_{P}\tau_{\star}\), where \(\eta_{P}\) is the polymer contribution to the viscosity, and \(\tau_{\star}\) is the shear relaxation time. In a dilute solution, we expect \(\Psi_{2,0}=0\). The magnitude of viscoelastic effects can be varied by either 1) increasing the polymer concentration or 2) increasing the relaxation time of the fluid. In the former case, we anticipate that \(\eta_{P}\sim(\mathrm{C}/\mathrm{C}^{*})^{2.4/(3\nu-1)}\) for a semi-dilute entangled solution in a good solvent with \(\nu=0.588\) [25], but the magnitude of the second normal stress difference coefficient will also increase, and the expected functional form of \(\psi_{0}\) is not known. However, increasing the shear relaxation time \(\tau_{\star}\) while remaining in the dilute regime can also be achieved by increasing the solvent viscosity \(\eta_{s}\). According to Rouse-Zimm bead spring theories, the relaxation time will increase linearly with \(\eta_{s}\) [3]. We have utilized technique 1 to vary the elasticity in the PIB solutions so far in this study. It also explains the weak elasticity in the semi-dilute 0.3 wt. % PIB solution compared to more concentrated ones. Technique 2 is the standard recipe for preparing Boger fluids [26] and has been widely used to prepare highly elastic fluids at \(\mathrm{C}\lesssim\mathrm{C}^{*}\) with a constant viscosity [13; 27]. Hence, to augment the elasticity of the 0.3 wt. % PIB solution, we increase the solvent viscosity \(\eta_{s}\) by mixing a viscous polyalphaolefin oil (PAO) with the paraffin solvent, which increases \(\eta_{s}\) and, consequently, \(\tau_{Z}\) and \(\Psi_{1,0}\), by around two orders of magnitude. We identify this formulation by the label 0.3 wt. % PIB Boger fluid, as it has the same concentration of PIB as the weakly elastic shear-thinning 0.3 wt. % PIB solution studied in Secs. IV.1-IV.3 but a higher viscosity.
Fig. 8 shows the dramatic effect of increasing the fluid elasticity on its rod-climbing ability. For comparison, the results for the
Figure 8: Change in the interface height at the rotating rod (\(\Delta h(\Omega,a)\)) compared to the static rise \(h_{s}(a)\) for the 3.0 and 0.3 wt. % PIB solutions utilized in this study compared with a 0.3 wt. % Boger fluid. Increasing the fluid elasticity dramatically enhances rod climbing in the 0.3 wt. % Boger fluid. The weakly elastic 0.3 wt. % PIB solution does not satisfy the condition for climbing (see Fig. 7), and hence, undergoes rod-descending. The dashed line depicts the rod-descending exhibited by a purely Newtonian fluid without any elasticity, which arises solely due to the inertial effects and hence is the lower bound for the \(\Delta h\) vs. \(\Omega^{2}\) curves. Measurements of \(\Delta h\) vs. \(\Omega^{2}\) curves for a fluid with finite non-zero normal stresses will lie above this lower bound, although the difference is small on the scale shown here (cf. Fig. 5). Insets show the interface shapes for 3 wt. % PIB and 0.3 wt. % Boger fluid at \(\Omega=9\) rad/s.
highly elastic 3 wt. % PIB and weakly elastic 0.3 wt. % PIB solutions are also presented here. The weakly elastic 0.3 wt. % PIB solution, which undergoes rod-descending due to the dominance of inertial effects, now becomes strongly rod-climbing when it is "bogerized" (i.e., converted to a Boger fluid) solely by increasing the solvent viscosity while retaining the same PIB concentration of 0.3 wt. %. In fact, at a rotation rate of 10 rad/s, the 0.3 wt. % Boger fluid now climbs the rod to a greater height than the 3 wt. % semi-dilute entangled fluid. This is because of the large value of \(\Psi_{1,0}\) but the very small value of \(\Psi_{2,0}\) in this dilute solution. We can follow the rotating rod rheometry protocol established in Sec. IV to calculate \(\Psi_{1,0}\) and \(\Psi_{2,0}\) in the 0.3 wt. % Boger fluid. The results are tabulated in Table 4, and \(\hat{\beta}\) increases from a value of \(\hat{\beta}_{exp}\simeq 2\times 10^{-3}\) Pa.s\({}^{2}\) for the 0.3 wt. % PIB solution to a value \(\hat{\beta}_{exp}\simeq 0.27\) Pa.s\({}^{2}\). Also shown in the table just for comparison are results from two previous rod-climbing measurements [13; 14] of similar PIB-based (PIB molecular weight \(\approx 10^{6}\) g/mol) Boger fluids at slightly lower concentrations.
Applying the rod-climbing condition Eq. 13 to this 0.3 wt. % Boger fluid reveals that it lies deep in the rod-climbing region in Fig. 7 with \(\psi_{0}=0.13\) and \(\rho a^{2}/\Psi_{1,0}=1.55\times 10^{-2}\), which rationalizes the dramatic transition from rod-descending to rod-climbing that can be engineered into this 0.3 wt. % polymer solution simply by the addition of a viscous solvent.
## V Conclusions
We have revisited the rod-climbing rheometer [12], originally proposed around four decades ago [10], for measuring normal stress differences in complex fluids. In doing so, we integrate its performance with modern-day torsional rheometers to facilitate self-consistent predictions of the zero shear rate values of both the first and the second normal stress coefficients \(\Psi_{1,0}\) and \(\Psi_{2,0}\), which are often very challenging to determine accurately. The protocol for rotating rod rheometry presented here involves:
1. Evaluating the first normal stress difference coefficient in the limit of zero shear rate (\(\Psi_{1,0}\)) from SAOS master curve data by using the asymptotic result from simple fluid theory \(\Psi_{1,0}=\lim_{\omega\to 0}2G^{\prime}/\omega^{2}\).
2. Determining the climbing constant \(\hat{\beta}_{exp}\) of the fluid by measuring the rate of change of perturbations to the static interface \(\Delta h(\Omega,a)\) vs. \(\Omega^{2}\), i.e., \((\mathrm{d}\Delta h/\mathrm{d}\Omega^{2})_{exp}\) for small \(\Omega\).
3. By equating the value of \(\hat{\beta}_{exp}\) determined experimentally to the theoretical result (Eq. 5) of the second order fluid theory, the second normal stress coefficient can then be determined from the two independent measurements using the relationship \(\Psi_{2,0}=\frac{1}{2}\hat{\beta}_{exp}-\frac{1}{4}\Psi_{1,0}\).
We have used this protocol to determine \(\Psi_{1,0}\) and \(\Psi_{2,0}\) of several PIB solutions in the concentrated and semi-dilute regimes. We observe \(\psi_{0}=-\Psi_{2,0}/\Psi_{1,0}<0.25\) for all the PIB solutions, so all of them should exhibit rod-climbing according to the original rod-climbing criterion obtained by neglecting inertial effects [23] (Eq. 12). We indeed observe rod-climbing for the more concentrated solutions; however, the least concentrated 0.3 wt. % weakly elastic PIB solution exhibited rod-descending, hinting at substantial inertial effects. Hence, we have modified the rod-climbing condition by considering the relative strength of inertial effects compared to the elastic effects, as quantified by the ratio \(\rho a^{2}/\Psi_{1,0}\) (Eq. 13). The modified rod-climbing condition successfully rationalizes the observed dipping of the free surface for the 0.3 wt. % weakly viscoelastic fluid.
To elucidate the competition between the elastic and inertial effects in determining whether a given fluid will exhibit rod-climbing or rod-descending, we deliberately enhanced the elasticity of the 0.3 wt. % PIB solution by "bogerizing" it through the addition of a more viscous solvent, i.e., we prepared a 0.3 wt. % PIB Boger fluid. The resulting highly elastic fluid has a higher value of \(\Psi_{1,0}\) and a lower value of \(\psi_{0}\). It, therefore, undergoes pronounced rod-climbing due to the dominant effect of elasticity overwhelming inertial effects, in marked contrast to the weakly elastic 0.3 wt. % PIB solution.
Thus, we conclude that a fluid undergoing rod-descending instead of rod-climbing does not conclusively indicate an absence of elastic effects. Weak elasticity may still be present but largely masked by the dominance of inertial effects, resulting in the quadratic, but negative, variation of the free-surface height that we observed for the 0.3 wt.% PIB fluid. Even in such a case, the weak contributions of \(\Psi_{1,0}\) and \(\Psi_{2,0}\) can still be extracted from the negative slope \(\mathrm{d}\Delta h/\mathrm{d}\Omega^{2}\) using the protocol presented in this study.
\begin{table}
\begin{tabular}{l c c c c} \hline C (wt. \%) & \(\hat{\beta}\) (Pa.s\({}^{2}\)) & \(\psi_{0}=-\frac{\Psi_{2,0}}{\Psi_{1,0}}\) & \(\Psi_{1,0}\) (Pa.s\({}^{2}\)) & \(\Psi_{2,0}\) (Pa.s\({}^{2}\)) \\ \hline
0.30 (This study) & 0.27 & 0.13 & 1.16 & \(-0.16\) \\
0.24[14] & 1.68 & 0.11 & 6.00 & \(-0.66\) \\
0.10[13] & 1.28 & 0.01 & 2.65 & \(-0.03\) \\ \hline \end{tabular}
\end{table}
Table 4: Measurements of the climbing constant \(\hat{\beta}\) from the rod-climbing rheometry and \(\Psi_{2,0}\) by combining rod-climbing rheometry with the SAOS measurements from a conventional rheometer (TA Instruments ARES-G2) for the 0.3 wt. % PIB Boger fluid (PIB in polyalphaolefin and paraffinic oil) used in this study. Results from two previous studies[13; 14] for a similar PIB Boger fluid (PIB in polybutene and 2-chloropropane) but with different concentrations are also tabulated for comparison.
In conjunction with time-Temperature Superposition (tTS) measurements of a linear viscoelastic master curve of \(G^{\prime}(\omega_{r})\), our analysis and results show that the rotating rod experiment can be very useful in extending measurements of the normal stress differences in complex fluids to lower shear rates, irrespective of whether the fluid exhibits rod-climbing or rod-descending, by allowing for the inertial contributions to the interface shape and by ensuring the rod material is selected such that the fluid wets it. Hence, the more general description "rotating rod rheometry" is a more apt name for this technique.
## Appendix A: Climbing height of a second-order fluid on a slowly rotating thin rod
Using modern notation and retaining inertial contributions, this appendix reproduces a formal derivation of Eq. 3 using a domain perturbation analysis method from the original works of D. D. Joseph and co-workers [9; 10]. The problem setup is depicted in Fig. 9. A thin rod of radius \(a\) is submerged in a semi-infinite pool of a second-order incompressible fluid with density \(\rho\), surface tension \(\Gamma\), and contact angle \(\alpha\) with the rod. The rod is infinitely long and rotates with a constant angular velocity \(\Omega\). The fluid surface is exposed to atmospheric pressure \(p_{a}\) and deviates from its initial static shape \(z=h_{s}(r,\alpha)\) to a steady profile \(z=h(\Omega,r,\alpha)\) due to the shear flow generated by the rod rotation. The profile \(z=h(\Omega,r,\alpha)\) is determined by the combined action of normal stresses, inertia, surface tension, and gravity. For an axially symmetric velocity \(\mathbf{u}=v(r,z)\mathbf{e}_{\theta}+\mathbf{\bar{u}}\) with \(\mathbf{\bar{u}}=u(r,z)\mathbf{e}_{r}+w(r,z)\mathbf{e}_{z}\) in the cylindrical coordinates \([r,\theta,z]\), the continuity and Navier-Stokes equations with \(\sigma_{ij}\) as the stress tensor for a second-order fluid can be written as:
\[\partial_{r}(ru)+r\partial_{z}(w)=0, \tag{14a}\] \[\rho\left[u\partial_{r}u+w\partial_{z}u-\frac{u^{2}}{r}\right]=-\partial_{r} \Phi+\partial_{r}\sigma_{rr}+\partial_{z}\sigma_{rz}+\frac{1}{r}(\sigma_{rr}- \sigma_{\theta\theta}),\] (14b) \[\rho\left[u\partial_{r}v+w\partial_{z}v+\frac{uv}{r}\right]=\frac{1}{r^{2}} \partial_{r}(r^{2}\sigma_{r\theta})+\partial_{z}\sigma_{z\theta},\] (14c) \[\rho[u\partial_{r}w+w\partial_{z}w]=-\partial_{z}\Phi+\partial_{r}\sigma_{rz}+ \partial_{z}\sigma_{zz}+\frac{1}{r}\sigma_{rz}, \tag{14d}\]
where the stress tensor is given by
\[\mathbf{\sigma}=\eta_{0}\mathbf{A}_{1}-0.5\Psi_{1,0}\mathbf{A}_{2}+(\Psi_{1,0}+ \Psi_{2,0})\mathbf{A}_{1}^{2}. \tag{14e}\]
Figure 9: Schematic of the rotating rod problem in a cylindrical coordinate system \((r,\theta,z)\) with \((\mathbf{e}_{r},\mathbf{e}_{\theta},\mathbf{e}_{z})\) as the unit vectors in the respective directions. A thin rod of radius \(a\) with its axis along the z-axis rotates around its axis with a constant rotational velocity \(\Omega\). \(\mathbf{n}\) and \(\mathbf{t}\) denote the unit vectors normal and tangent to the interface \(z=h(\Omega,r,\alpha)\).
Here \(\Phi=p+\rho gz\) is the pressure head, \(\partial_{i}=\partial/\partial x_{i}\), \(\mathbf{A}_{1}(\mathbf{u})=(\mathbf{\nabla}\mathbf{u}+\mathbf{\nabla}\mathbf{u}^{T})\), and \(\mathbf{A}_{2}(\mathbf{u})=(\mathbf{u}\cdot\mathbf{\nabla})\mathbf{A}_{1}+\mathbf{A}_{1}\cdot\mathbf{\nabla}\mathbf{u}+\mathbf{\nabla}\mathbf{u}^{T}\cdot\mathbf{A}_{1}\) are the first two Rivlin-Ericksen tensors. The coefficients \(\eta_{0}\), \(\Psi_{1,0}\), and \(\Psi_{2,0}\) are the viscosity, first and second normal stress difference coefficients of the fluid in the limit of zero shear. The unit normal to the free interface \(z=h(\Omega,r,\alpha)\) is given by \(\mathbf{n}=\frac{-h^{\prime}}{\sqrt{1+h^{\prime 2}}}\mathbf{e}_{r}+\frac{1}{\sqrt{1+h^{\prime 2}}}\mathbf{e}_{z}\) where \(h^{\prime}=\mathrm{d}h/\mathrm{d}r\). The two orthogonal tangential vectors to the free interface are \(\mathbf{e}_{\theta}\) and \(\mathbf{t}=-\frac{1}{\sqrt{1+h^{\prime 2}}}\mathbf{e}_{r}-\frac{h^{\prime}}{\sqrt{1+h^{\prime 2}}}\mathbf{e}_{z}\). The solution to Eq. 14 must satisfy the following boundary conditions:
No slip at the rod:
\[\mathbf{u}=a\Omega\mathbf{e}_{\theta}\text{ at }r=a. \tag{15a}\]
No flux normal to the interface:
\[w-uh^{\prime}=0\text{ at }z=h(\Omega,r,\alpha). \tag{15b}\]
No tangential stress at the fluid interface:
\[\sigma_{n\theta}=\sigma_{z\theta}-h^{\prime}\sigma_{r\theta}=0\text{ at }z=h(\Omega,r,\alpha),\text{ and} \tag{15c}\]
\[\sigma_{nt}=h^{\prime}(\sigma_{zz}-\sigma_{rr})+(1-h^{\prime 2})\sigma_{rz}=0\text{ at }z=h(\Omega,r,\alpha). \tag{15d}\]
The normal stress jump at the interface is balanced by the surface tension (\(\Gamma\)) force:
\[p_{a}-\Phi+\sigma_{zz}-h^{\prime}\sigma_{rz}+\rho gh=\frac{\Gamma}{r}\left[\frac{rh^{\prime}}{\sqrt{1+h^{\prime 2}}}\right]^{\prime}\text{ at }z=h(\Omega,r,\alpha). \tag{15e}\]
Contact angle condition:
\[h^{\prime}(\Omega,r,\alpha)=\cot(\alpha)\text{ at }r=a. \tag{15f}\]
Finally, the solution approaches the hydrostatic solution with a flat free interface as \(r\rightarrow\infty\):
\[h(\Omega,r,\alpha)\to 0\text{ as }\mathbf{u}\to 0\text{, and }\Phi\to 0. \tag{15g}\]

The free surface problem described by Eqs. 14 and 15 can be solved using the domain perturbation method under the condition that the total fluid domain volume is conserved, i.e., \(\int_{r=a}^{r\rightarrow\infty}rh(\Omega,r,\alpha)\,\mathrm{d}r=0\). The solution can then be expanded as a power series:
\[\begin{bmatrix}\mathbf{u}\\ \mathbf{\sigma}\\ \Phi\\ h\end{bmatrix}=\sum_{i}\begin{bmatrix}\mathbf{u}^{(i)}\\ \mathbf{\sigma}^{(i)}\\ \Phi^{(i)}\\ h^{(i)}\end{bmatrix}\Omega^{i}. \tag{16}\]
Substituting Eq. 16 in the governing equations Eq. 14, we get the following zeroth-order governing equations:
\[(\mathbf{u}^{(0)}\cdot\mathbf{\nabla})\mathbf{u}^{(0)}=-\nabla\Phi^{(0)}+\mathbf{\nabla}\cdot\mathbf{\sigma}^{(0)}, \tag{17a}\]
\[\mathbf{\nabla}\cdot\mathbf{u}^{(0)}=0. \tag{17b}\]
The solution to Eqs. 17a and 17b, together with the zeroth-order boundary conditions Eq. 15, is \(\mathbf{u}^{(0)}=\mathbf{0}\), \(\mathbf{\sigma}^{(0)}=\mathbf{0}\), \(\Phi^{(0)}=p_{a}\), and recovers the static interface rise \(h^{(0)}=h_{s}(r,\alpha)\), which can be computed by numerically solving
\[\frac{\Gamma}{r}\left[\frac{rh^{\prime}_{s}}{\sqrt{1+h^{\prime 2}_{s}}}\right]^{\prime}=\rho gh_{s} \tag{17c}\]
subject to \(h^{\prime}_{s}(r=a)=\cot(\alpha)\) and \(h_{s}\to 0\) as \(r\rightarrow\infty\). The zeroth-order solution obtained by solving Eq. 17 can be used to solve the following first-order governing equations:
\[\mathbf{u}^{(0)}\cdot\mathbf{\nabla}\mathbf{u}^{(1)}+\mathbf{u}^{(1)}\cdot\mathbf{\nabla}\mathbf{u}^{(0)}=-\nabla\Phi^{(1)}+\mathbf{\nabla}\cdot\mathbf{\sigma}^{(1)}, \tag{18a}\]
\[\mathbf{\nabla}\cdot\mathbf{u}^{(1)}=0. \tag{18b}\]
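As an aside, the zeroth-order (static) problem Eq. 17c is straightforward to integrate numerically. The sketch below (not the solver used in this work) casts it as a two-point boundary value problem with the auxiliary variable \(u=rh_{s}^{\prime}/\sqrt{1+h_{s}^{\prime 2}}\), using placeholder fluid properties and a finite outer radius standing in for \(r\rightarrow\infty\).

```python
import numpy as np
from scipy.integrate import solve_bvp

# Placeholder parameters: rod radius, surface tension, density, contact angle.
a, Gamma, rho, g = 5e-3, 0.030, 850.0, 9.81
alpha = np.deg2rad(60.0)
R_out = 0.1                                   # finite stand-in for r -> infinity
lc = np.sqrt(Gamma / (rho * g))               # capillary length, ~1.9 mm here

# Rewrite Eq. 17c as a first-order system with u = r h'/sqrt(1 + h'^2):
#   u'(r) = (rho g / Gamma) r h,   h'(r) = s/sqrt(1 - s^2),  s = u/r.
def rhs(r, y):
    h, u = y
    s = np.clip(u / r, -0.999, 0.999)
    return np.vstack([s / np.sqrt(1.0 - s**2), (rho * g / Gamma) * r * h])

# Eq. 15f: h'(a) = cot(alpha)  <=>  u(a) = a cos(alpha);  h(R_out) ~ 0.
def bc(ya, yb):
    return np.array([ya[1] - a*np.cos(alpha), yb[0]])

r = np.linspace(a, R_out, 400)
y_guess = np.vstack([-1e-3*np.exp(-(r - a)/lc), a*np.cos(alpha)*a/r])
sol = solve_bvp(rhs, bc, r, y_guess)
print(f"converged: {sol.success}, h_s(a) = {sol.y[0, 0]*1e3:.2f} mm")
```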
The solution to Eqs. 18a and 18b, together with the first-order boundary conditions Eqs. 15a–g, is given by \(\textbf{u}^{(1)}=\frac{a^{2}}{r}\textbf{e}_{\theta}\), \(\mathbf{\sigma}^{(1)}=\eta_{0}(\mathbf{\nabla}\textbf{u}^{(1)}+\mathbf{\nabla}\textbf{u}^{(1)^{T}})=-\eta_{0}a^{2}/r^{2}(\textbf{e}_{r}\textbf{e}_{\theta}+\textbf{e}_{\theta}\textbf{e}_{r})\), and \(\Phi^{(1)}=0\). \(h^{(1)}=0\) since \(h\) is an even function of \(\Omega\). Solving the second-order problem, which is forced by the centripetal inertia and the normal stresses generated by this first-order Couette flow, yields the interface correction
\[h^{(2)}(r)=g_{1}+g_{2}+g_{3}+g_{4}+g_{5}, \tag{19}\]
where
\[g_{1}=\frac{4c_{1}}{4-Bo}\left[\frac{1}{r^{2}}-\frac{2a^{\sqrt{Bo}-2}}{\sqrt{Bo}\,r^{\sqrt{Bo}}}\right], \tag{19h}\]
\[g_{2}=\frac{16c_{2}-4a^{2}c_{1}}{16-Bo}\left[\frac{1}{r^{4}}-\frac{4a^{\sqrt{Bo}-4}}{\sqrt{Bo}\,r^{\sqrt{Bo}}}\right], \tag{19i}\]
\[g_{3}=-\frac{16a^{2}c_{2}}{36-Bo}\left[\frac{1}{r^{6}}-\frac{a^{\sqrt{Bo}-6}}{\sqrt{Bo}\,r^{\sqrt{Bo}}}\right], \tag{19j}\]
\[g_{4}=-\frac{\sqrt{Bo}\,c_{3}}{2}\,r^{\sqrt{Bo}}\left[\ln\left(r/a\right)+1/\sqrt{Bo}\right], \tag{19k}\]
\[g_{5}=-\frac{Bo\,a^{2}c_{3}}{4(\sqrt{Bo}+1)r^{\sqrt{Bo}}}\left[\frac{1}{r^{2}}-\frac{\sqrt{Bo}+2}{\sqrt{Bo}\,a^{2}}\right], \tag{19l}\]
\[c_{1}=\frac{\rho a^{2}}{(4-Bo)\Gamma}, \tag{19m}\]
\[c_{2}=-\frac{4a^{2}\hat{\beta}}{(16-Bo)\Gamma}, \tag{19n}\]
\[\text{and }c_{3}=-4a^{\sqrt{Bo}-4}c_{2}/\sqrt{Bo}-2a^{\sqrt{Bo}-2}c_{1}/\sqrt{Bo}. \tag{19o}\]
Thus, we can find \(h^{(2)}\simeq a^{4}(H_{0}+H_{1}+...)\).
Solutions to the third and fourth-order problems can also be obtained but will not be pursued here. At third order, a correction to the azimuthal velocity component arises, which does not contribute to the free surface shape since \(h\) is an even function of \(\Omega\). At fourth order, the fluid motion departs from the simple Couette type \(\textbf{u}=a\Omega\textbf{e}_{\theta}\), and velocity corrections in both the axial and radial directions come into the picture; thus, a discernible secondary flow in the (\(r\),\(z\)) plane is observed, and the free surface profile is also altered. The solutions at third and fourth order depend on additional material constants beyond the three already involved up to second order, which introduces additional unknowns to be determined. Hence, we stop our analysis at second order and derive Eq. 3 from the results obtained so far.
In summary, in the small \(\Omega\) limit, we can approximate the interface shape as:
\[h(\Omega,r,\alpha) =h^{(0)}+h^{(1)}\Omega+h^{(2)}\Omega^{2}+O(\Omega^{3})\] \[=h_{s}(r,\alpha)+h^{(2)}(r)\Omega^{2}+O(\Omega^{2}\alpha+\Omega^{4})+...\] \[=h_{s}(r,\alpha)+\left[\frac{4a^{2}\hat{\beta}}{(16-Bo)\Gamma}\left(\frac{4a^{\sqrt{Bo}}}{\sqrt{Bo}\,r^{\sqrt{Bo}}}-\frac{a^{4}}{r^{4}}\right)\right. \tag{20}\] \[\left.+\frac{\rho a^{4}}{2(4-Bo)\Gamma}\left(\frac{a^{2}}{r^{2}}-\frac{2a^{\sqrt{Bo}}}{\sqrt{Bo}\,r^{\sqrt{Bo}}}\right)\right]\Omega^{2}+O(\Omega^{2}\alpha+\Omega^{4})+...,\]
which at \(r=a\) gives Eq. 3:
\[h(\Omega,a,\alpha)=h_{s}(a,\alpha)+\frac{a}{2\left(\Gamma\rho g\right)^{1/2}}\left[\frac{4\hat{\beta}}{4+\sqrt{Bo}}-\frac{\rho a^{2}}{2+\sqrt{Bo}}\right]\Omega^{2} \tag{21}\] \[+O(\Omega^{2}\alpha+\Omega^{4})+...\]
We use this functional form in the main manuscript for our analysis.
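The sign of the bracket in Eq. 21 determines whether the fluid climbs or descends at small rotation rates. A short numerical check, with illustrative (assumed) values of \(\rho\), \(a\) and \(\Gamma\) and two representative climbing constants, might look as follows:

```python
import numpy as np

def dh_dOmega2(beta_hat, rho, a, Gamma, g=9.81):
    """Small-Omega slope d(Delta h)/d(Omega^2) from Eq. 21."""
    Bo = rho * g * a**2 / Gamma
    bracket = 4.0*beta_hat/(4.0 + np.sqrt(Bo)) - rho*a**2/(2.0 + np.sqrt(Bo))
    return a / (2.0*np.sqrt(Gamma*rho*g)) * bracket

# beta_hat values representative of a strongly elastic (Boger-like) and a
# weakly elastic fluid; rho, a and Gamma are assumed placeholder values.
for label, bh in [("strongly elastic", 0.27), ("weakly elastic", 2e-3)]:
    s = dh_dOmega2(bh, rho=850.0, a=4.6e-3, Gamma=0.030)
    print(f"{label}: d(Dh)/dW^2 = {s:+.2e} m.s^2 ->",
          "rod-climbing" if s > 0 else "rod-descending")
```

Under these assumptions the strongly elastic fluid gives a positive slope (climbing) while the weakly elastic one gives a negative slope (descending), mirroring the behavior reported in Fig. 8.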
## Author contributions
R.V.M. contributed to Conceptualization, Writing - original draft, Writing - review & editing, Validation, Visualization, Data curation, Investigation, Methodology, Software, and Formal Analysis. G.H.M. contributed to Conceptualization, Writing - review & editing, Methodology, Funding acquisition, Project administration, Resources, and Supervision. R.P. and E.P. contributed to Writing - review & editing, and Resources.
## Conflicts of interest
There are no conflicts to declare.
## Acknowledgements
The authors would like to thank Lubrizol Inc. for funding and providing the polymer fluids used in this study.
|
2303.11645 | On the possibility of mixed axion/neutralino dark matter in specific
SUSY DFSZ axion models | We introduce four supersymmetric (SUSY) axion models in which the strong CP
problem and the $\mu$ problem are solved with the help of the Peccei-Quinn
mechanism and the Kim-Nilles mechanism, respectively. The axion physics
enriches the SUSY model by introducing axion as a dark matter candidate and,
therefore, the lightest supersymmetric particle (LSP) could just be a part of
the total dark matter. For this reason, axion relieves the tensions between
SUSY models and numerous experimental measurements, such as the dark matter
direct detection experiments and the precise measurements of anomalous magnetic
moment of the muon $a_\mu$. In the present paper, we investigate the
constraints imposed by the latest $a_\mu$ measurements and LUX-ZEPLIN (LZ)
experiment on the relic density of the Higgsino-like LSP. Additionally, we
consider the constraints arising from the cosmology of saxions and axinos, and
their impacts on the parameter space of our models are carefully examined. For
the axion constituting the remaining portion of dark matter, we find that the
conventional misalignment mechanism can successfully account for the correct
dark matter relic density observed by the Planck satellite. | Zhong-Jun Yang, Tai-Fu Feng, Xing-Gang Wu | 2023-03-21T07:39:16Z | http://arxiv.org/abs/2303.11645v4 | # On the possibility of mixed axion/neutralino dark matter in specific SUSY DFSZ axion models
###### Abstract
We introduce four supersymmetric (SUSY) axion models in which the strong CP problem and the \(\mu\) problem are solved with the help of the Peccei-Quinn mechanism and the Kim-Nilles mechanism, respectively. The axion physics enriches the SUSY model by introducing axion as a dark matter candidate and, therefore, the lightest supersymmetric particle (LSP) could just be a part of the total dark matter. For this reason, axion relieves the tensions between SUSY models and numerous experimental measurements, such as the dark matter direct detection experiments and the precise measurements of anomalous magnetic moment of the muon \(a_{\mu}\). In the present paper, we consider the constraints from the latest \(a_{\mu}\) data and the XENON-1T bound on the relic density of higgsino-like LSP, and discuss the possibility that axion is the rest of dark matter. We study the production mechanism for axion, and it turns out that the conventional misalignment mechanism can give the correct dark matter relic density observed by the Planck satellite.
Keywords: Supersymmetry, Axion, Dark matter. PACS: 12.60.Jv, 14.80.Mz, 95.35.+d
###### Contents
* I Introduction
* II Axion models
  * II.A Axion properties in general
  * II.B Framework of our models
    * II.B.1 Superpotential
    * II.B.2 Axion properties in our models
    * II.B.3 Discrete \(R\) symmetries \(Z_{n}^{R}\) to protect \(U(1)_{PQ}\)
  * II.C Physical spectrum of our models
  * II.D Effective theory of our models
* III Dark matter candidates in our models
  * III.A Constraints from the \(a_{\mu}\) measurements and XENON-1T experiment
  * III.B Conventional misalignment mechanism
  * III.C Kinetic misalignment mechanism
  * III.D The velocity of axion from PQ violation
* IV Numerical analysis
  * IV.A Numerical analysis for KMM
  * IV.B Numerical analysis for CMM
    * IV.B.1 Numerical analysis for the model \(\rm E_{I}\)
    * IV.B.2 Numerical analysis for the models \(\rm B_{I}\), \(\rm B_{II}\) and \(\rm B_{III}\)
  * IV.C Constraints from \(g_{A\gamma\gamma}\) on \(f_{A}\)
* V Conclusion
## I Introduction
The discovery of the 125 GeV Higgs boson at the Large Hadron Collider (LHC) in 2012 [1; 2] is an important moment in the history of human exploration, and the Standard Model (SM) of particle physics has become the most successful theory to date. However, this elegant theory is still plagued by several problems, and for this reason many theoretical physicists call for new physics (NP) beyond the SM. The Minimal Supersymmetric Standard Model (MSSM), a famous SUSY extension of the SM, can solve the so-called hierarchy problem that besets the SM and simultaneously unify the gauge coupling constants. Unfortunately, even in this model, the strong CP problem in the QCD sector has not been well understood. This problem puzzled physicists for a long time, until Peccei and Quinn put forward their ingenious idea, now called the Peccei-Quinn (PQ) mechanism [3]. Reviews on this mechanism can be found in Refs.[4; 5; 6; 7; 8; 9; 10; 11; 12; 13]. Under the global \(U(1)_{PQ}\) symmetry, a pseudo-Nambu-Goldstone boson, the axion, denoted by \(A\) in our work, appears [14; 15], and this particle provides a large number of valuable research topics in phenomenology and cosmology. In particular, this particle can serve as the dark matter we have been looking for for many years. Since this mechanism was proposed, various models based on the SM and involving the PQ mechanism, such as PQWW [14; 15], DFSZ [16; 17], and KSVZ [18; 19], have sprung up. Many of them, however, were quickly ruled out by experiments, while the invisible axion models are still alive. In a SUSY model, such an invisible axion can also be introduced by establishing DFSZ- or KSVZ-type interactions. The DFSZ type with the Kim-Nilles mechanism [20], for example, is particularly attractive, since it simultaneously solves the \(\mu\) problem. In the MSSM, the Higgsino mass parameter \(\mu\) is introduced through the superpotential term \(\mu H_{u}H_{d}\), where \(H_{u}\) and \(H_{d}\) are the two Higgs doublet superfields. Theoretically, the natural value of this SUSY-conserving \(\mu\) parameter would seem to be of the order of the Planck scale. For consistency with phenomenology, however, \(\mu\) is required to be of the order of the weak scale. This is known as the SUSY \(\mu\) problem. To solve this problem, the Kim-Nilles mechanism first forbids the \(\mu\) term via the PQ symmetry, and then regenerates it once the PQ symmetry is spontaneously broken.
The axion also brings new considerations and constraints to the model; for instance, the domain wall (DW) number of the model and the quality of the axion need to be taken into account. The DW problem [13] can be understood by analyzing the periodic potential of the axion. As the temperature of the Universe falls to a value \(T\sim\Lambda_{\rm QCD}\), the axion potential is lifted by non-perturbative QCD effects and the axion mass switches on. Since the axion \(A\), being an angular variable, can take values in the interval \([0,2\pi v_{A})\) and the period of the axion potential is \(2\pi v_{A}/N_{DW}\), there are \(N_{DW}\) degenerate vacua. Different patches of the Universe then settle into different vacua, and DWs form at the boundaries between regions of different vacua. These DWs lead only to unwelcome cosmological events, such as another stage of inflation. In the pre-inflationary scenario [13], however, the model is free from topological defects, so there is no DW problem. Otherwise, the DW number \(N_{DW}\) needs to be one to avoid this cosmological problem. The quality problem [21; 22; 23; 24; 25; 26; 13] originates from the fact that the PQ symmetry, which is a global symmetry, is not fundamental in a quantum field theory. As we know, global symmetries are not respected by quantum gravitational effects [27; 28]. Therefore, the PQ symmetry has to be preserved to a great degree of accuracy if it is to remain the key to solving the strong CP problem.
In the framework of the SM, there is also no candidate for dark matter. To date, the measured dark matter relic density is \(\Omega_{DM}h^{2}\approx 0.12\), as observed by the Planck satellite [29]. To explain this observation, a dark matter candidate must appear in NP models. In SUSY models, for example, the LSP, which is a weakly interacting massive particle (WIMP), can be the dark matter candidate. This WIMP (typically the lightest neutralino) can give a very appropriate annihilation cross section for the cold dark matter freeze-out production mechanism, which is known as the "WIMP miracle". However, dark matter direct detection experiments have ruled out a large amount of parameter space that was once considered promising. Besides, the latest measurements of the anomalous magnetic moment of the muon \(a_{\mu}\) given by BNL and FNAL [30] show that the discrepancy between the measured and SM-predicted \(a_{\mu}\) is \(4.2\sigma\), which is another problem that needs to be addressed in NP models.
Considering all the above, we introduce four SUSY axion models in which all these problems can be overcome or alleviated. The rest of this paper is organized as follows. In Sec.II, we give an introduction to our models and display the general properties of the axion. In Sec.III, we analyze the constraints from \(a_{\mu}\) and direct detection experiments on the abundance of the LSP, which, in this paper, is only a part of the total dark matter. In addition, we review the conventional misalignment mechanism (CMM) [31; 32; 33] and the kinetic misalignment mechanism (KMM) [34; 35] for exploring the relic density of the axion, which constitutes the rest of the total dark matter. In Sec.IV, we present our numerical results for the KMM and CMM cases, and discuss the cosmology of the saxions and the axinos, as well as the constraints from the effective coupling constant \(g_{A\gamma\gamma}\) on the axion decay constant \(f_{A}\). Sec.V is reserved for a summary.
## II Axion Models
### Axion properties in general
Following Refs.[13; 36], in the general QCD axion models, the PQ symmetry can be spontaneously broken by the non-zero vacuum expectation values (VEVs) of some complex scalars \(\phi_{s}\) with PQ charge \(Q_{s}\). Before the breaking of PQ symmetry, the Lagrangian is invariant under the PQ transformation \(\phi_{s}\to e^{iQ_{s}\alpha}\phi_{s}\). These complex scalars can be written as
\[\phi_{s}=\frac{1}{\sqrt{2}}(v_{s}+\rho_{s})e^{ia_{s}/v_{s}}, \tag{1}\]
where \(\rho_{s}\) and \(a_{s}\) are radial and angular fields, respectively. The scalar part of PQ current can then be written as
\[j_{\rm PQ}^{\mu}=-i{\sum_{s}}Q_{s}(\phi_{s}^{\dagger}\partial^{\mu}\phi_{s}- \phi_{s}\partial^{\mu}\phi_{s}^{\dagger})={\sum_{s}}Q_{s}v_{s} \partial^{\mu}a_{s}=v_{A}\partial^{\mu}A, \tag{2}\]
where
\[v_{A}^{2}={\sum_{s}}Q_{s}^{2}v_{s}^{2} \tag{3}\]
and \(A\) is the pseudo-Nambu-Goldstone boson axion field:
\[A=\frac{1}{v_{A}}\sum_{s}Q_{s}v_{s}a_{s}. \tag{4}\]
Note that the "physical" PQ charges of scalars are "unique" and need to be deduced by imposing an orthogonality condition
\[{\sum_{s}}Y_{s}Q_{s}v_{s}^{2}=0, \tag{5}\]
where \(Y_{s}\) is the \(U(1)_{Y}\) charge of the scalar. This condition ensures that there is no kinetic mixing between the physical axion and the \(Z\) boson. In SUSY models, each left-handed
Weyl fermion \(\Psi_{f}\) possesses a PQ charge \(Q_{f}\) determined by the interaction terms with the PQ charged scalars, and the PQ transformations of fermions are \(\Psi_{f}\to e^{iQ_{f}\alpha}\Psi_{f}\).
The anomalous divergence of the PQ current is given by
\[\partial_{\mu}j^{\mu}_{PQ} = \frac{g_{3}^{2}}{16\pi^{2}}NG^{a}_{\mu\nu}\widetilde{G}^{a\mu\nu} +\frac{e^{2}}{16\pi^{2}}EF_{\mu\nu}\widetilde{F}^{\mu\nu}, \tag{6}\]
where \(N\) is the \(U(1)_{\rm PQ}\)-\(SU(3)_{c}\)-\(SU(3)_{c}\) anomaly coefficient and \(E\) is \(U(1)_{\rm PQ}\)-\(U(1)_{\rm EM}\)-\(U(1)_{\rm EM}\) anomaly coefficient. Then, the effective Lagrangian of axion can be written as
\[{\cal L}_{A} \supset \frac{1}{2}\partial_{\mu}A\partial^{\mu}A+\frac{g_{3}^{2}}{16\pi^ {2}}\frac{A}{v_{A}}NG^{a}_{\mu\nu}\tilde{G}^{a\mu\nu}+\frac{e^{2}}{16\pi^{2}} \frac{A}{v_{A}}EF_{\mu\nu}\tilde{F}^{\mu\nu}+\cdots \tag{7}\] \[=\frac{1}{2}\partial_{\mu}A\partial^{\mu}A+\frac{g_{3}^{2}}{32 \pi^{2}}\frac{A}{f_{A}}G^{a}_{\mu\nu}\tilde{G}^{a\mu\nu}+\frac{E}{N}\frac{e^{2 }}{32\pi^{2}}\frac{A}{f_{A}}F_{\mu\nu}\tilde{F}^{\mu\nu}+\cdots, \tag{8}\]
where \(f_{A}\equiv\frac{v_{A}}{2N}\). Under the \(U(1)_{\rm PQ}\) transformation:
\[a_{s} \rightarrow a_{s}+\alpha Q_{s}v_{s}, \tag{9}\] \[A \rightarrow A+\alpha v_{A}, \tag{10}\]
where \(\alpha\) is an infinitesimal parameter. This shift symmetry makes the effective \(\theta\) parameter dynamical, allowing it to relax to zero, so the strong CP problem can be solved.
The low-energy axion interaction Lagrangian under the \(\Lambda_{\rm QCD}\) scale at which the axion mass switches on can be read as
\[{\cal L}_{A} = \frac{1}{2}\partial^{\mu}A\partial_{\mu}A-\frac{1}{2}m_{A}^{2}A^{ 2}+\frac{1}{4}g_{A\gamma\gamma}AF_{\mu\nu}\tilde{F}^{\mu\nu}+\cdots, \tag{11}\]
where \(g_{A\gamma\gamma}=\frac{\alpha_{em}}{2\pi f_{A}}(\frac{E}{N}-1.92)\) and \(\alpha_{em}\) is the electromagnetic coupling constant. The axion mass is given in terms of the axion decay constant \(f_{A}\) by
\[m_{A}\simeq 5.7\frac{10^{12}\,{\rm GeV}}{f_{A}}\mu{\rm eV}. \tag{12}\]
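For orientation, these mass-coupling relations are easy to tabulate numerically. The sketch below evaluates Eqs. 11 and 12 for a few representative decay constants, with the anomaly ratio \(E/N\) supplied as an input; the values \(E/N=2\) and \(E/N=-4/3\) correspond to the base models and model \(\rm E_{I}\) derived later in this section.

```python
import numpy as np

ALPHA_EM = 1.0 / 137.036                       # fine-structure constant

def axion_mass_ueV(f_A):
    """Eq. 12: m_A ~ 5.7 ueV x (1e12 GeV / f_A), with f_A in GeV."""
    return 5.7 * 1e12 / f_A

def g_A_gamma_gamma(f_A, E_over_N):
    """Eq. 11 coupling in GeV^-1: alpha_em/(2 pi f_A) (E/N - 1.92)."""
    return ALPHA_EM / (2.0*np.pi*f_A) * (E_over_N - 1.92)

for f_A in (1e9, 1e11, 1e12):                  # GeV
    for EoN in (2.0, -4.0/3.0):                # base models vs. model E_I
        print(f"f_A = {f_A:.0e} GeV, E/N = {EoN:+.2f}: "
              f"m_A = {axion_mass_ueV(f_A):.2e} ueV, "
              f"|g_Agg| = {abs(g_A_gamma_gamma(f_A, EoN)):.2e} GeV^-1")
```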
After the PQ symmetry breaking, a discrete subgroup \(Z_{N_{\rm DW}}(=e^{2k\pi i/N_{\rm DW}}\), \(k=0,1,\ldots,N_{\rm DW}-1)\) is left unbroken. \(N_{\rm DW}\) is the DW number that corresponds to the number of inequivalent degenerate minima of the axion potential. With the above definition of anomaly coefficient \(N\), the DW number can be computed [37] as
\[N_{DW}\equiv{\rm minimum\ integer}\left(2N\sum_{s}\frac{n_{s}Q_{s}v_{s}^{2}} {v_{A}^{2}}\right), \tag{13}\]
where \(n_{s}\in Z\).
### Framework of our models
#### ii.2.1 Superpotential
In our SUSY model, the \(\mu\) term is absent, since the Kim-Nilles mechanism have been used to solve the \(\mu\) problem. The superpotential of the model is given by
\[W_{PQ_{1}} = \frac{\lambda_{\mu}}{M_{P}}X^{2}H_{u}H_{d}+y_{u}H_{u}q\bar{u}-y_{d }H_{d}q\bar{d}-y_{e}H_{d}\ell\bar{e}+y_{\bar{\nu}}H_{u}\ell\bar{\nu}+M_{\bar{ \nu}}\bar{\nu}\bar{\nu}, \tag{14}\]
where \(M_{P}=2.4\times 10^{18}\) GeV is the Planck scale. \(X\) is a gauge-singlet chiral superfield, and \(H_{u}\) as well as \(H_{d}\) are the MSSM Higgs doublet chiral superfields. The \(q,\ \overline{u},\ \overline{d},\ \ell,\ \overline{e}\) are the same as the MSSM quark and lepton chiral superfields. The last two terms are the seesaw terms of the gauge-singlet neutrino superfield \(\bar{\nu}\). In order to stabilize the potential at large field values, we need to add \(\lambda\) terms involving only the gauge-singlet chiral superfields to the superpotential. A cubic term \(\lambda X^{3}\) would explicitly violate the PQ symmetry, so we need at least two gauge-singlet chiral superfields to stabilize the potential. Cubic \(\lambda\) terms such as \(\lambda X^{2}Y\), where \(Y\) is also a gauge-singlet chiral superfield like \(X\), would lead to a very small dimensionful parameter \(a_{\lambda}\) (corresponding to the soft breaking term of the \(\lambda\) term); typically, such a parameter should be of the order of \(m_{\rm soft}\). In this sense, a quartic \(\lambda\) term is more appropriate, since in that case there is no such concern. In the literature, e.g., Refs.[38; 39; 40], the \(\lambda\) term can take the form \(\frac{\lambda}{M_{P}}X^{3}Y,\frac{\lambda}{M_{P}}X^{2}Y^{2}\) or \(\frac{\lambda}{M_{P}}XY^{3}\), and much research has been done based on them. In this work, we consider \(\lambda\) terms involving three gauge-singlet chiral superfields \(X,Y\) and \(Z\), of the form
\[W_{PQ_{2}} = \frac{\lambda_{1}}{M_{P}}X^{\alpha_{\rm w}}Y^{4-\alpha_{\rm w}}+ \frac{\lambda_{2}}{M_{P}}X^{\beta_{\rm w}}Z^{4-\beta_{\rm w}}, \tag{15}\]
where the indexes \(\alpha_{\rm w}\) and \(\beta_{\rm w}\) can be \(1,2\) or \(3\), and the subscript "w" denotes the superpotential \(W\). If \(\alpha_{\rm w}\) or \(\beta_{\rm w}\) were \(0\) or \(4\), the PQ symmetry would be explicitly violated, as in the \(\lambda X^{3}\) case. To avoid repetitive discussions, we study three distinct choices of \((\alpha_{\rm w},\beta_{\rm w})\): \((3,2),(3,1)\) and \((2,1)\). Here we have assumed that the PQ charges of \(X,Y\) and \(Z\) are different from each other, since otherwise more terms would appear in \(W_{PQ_{2}}\). One further exception is \(\alpha_{\rm w}=\beta_{\rm w}=3\); however, such a model predicts a massless axino, which is not what we expect. In the following, we refer to the models with \((3,2),(3,1)\) and \((2,1)\) as base models \({\rm B_{I}},{\rm B_{II}}\) and \({\rm B_{III}}\), respectively.
Up until now, we can write down the soft supersymmetry-breaking terms
\[{\cal L}_{\rm soft}= \left(\frac{a_{\mu}}{M_{P}}X^{2}H_{u}H_{d}+\frac{a_{1}}{M_{P}}X^{ \alpha_{\rm w}}Y^{4-\alpha_{\rm w}}+\frac{a_{2}}{M_{P}}X^{\beta_{\rm w}}Z^{4- \beta_{\rm w}}+\ {\rm h.c.}\ \right) \tag{16}\] \[-m_{X}^{2}|X|^{2}-m_{Y}^{2}|Y|^{2}-m_{Z}^{2}|Z|^{2}\]
Since the VEV of \(X\) is of the order of \(10^{9-12}\) GeV, we can choose reasonable values of \(\lambda_{\mu}\) and \(a_{\mu}\) to solve the \(\mu\) problem:
\[\mu=\frac{\lambda_{\mu}}{M_{P}}\langle X^{2}\rangle,b=\frac{a_{\mu}}{M_{P}} \langle X^{2}\rangle. \tag{17}\]
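As a quick numerical illustration of Eq. 17 (using \(\langle X^{2}\rangle=v_{X}^{2}/2\) from the parameterization introduced below in Eq. 20), a PQ-scale VEV indeed regenerates a weak-scale \(\mu\) term; the couplings \(\lambda_{\mu}\) used here are assumed illustrative values.

```python
M_P = 2.4e18                                   # GeV, Planck scale used in the text
for v_X in (1e10, 1e11, 1e12):                 # GeV, PQ-breaking VEV of X
    for lam_mu in (0.1, 1.0):                  # assumed illustrative couplings
        mu = lam_mu * (v_X**2 / 2.0) / M_P     # Eq. 17 with <X^2> = v_X^2/2
        print(f"v_X = {v_X:.0e} GeV, lambda_mu = {lam_mu}: mu = {mu:.3g} GeV")
```

For \(v_{X}\sim 10^{11}\) GeV and \(\lambda_{\mu}\sim O(0.1\text{--}1)\), the script returns \(\mu\) of a few hundred GeV to a few TeV, i.e., of the order of the weak scale as required.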
We can also add pairs of vectorlike quark and lepton superfields, denoted by capital letters (such as \(Q+\overline{Q}\)), to the model; these can be used to solve the DW problem. In this study, we discuss models both without and with vectorlike superfields. In the latter case, we simply add the superpotential \(W_{PQ_{3}}\) to model \({\rm B_{I}}\) and denote this extension as model \({\rm E_{I}}\) (we do not add any vectorlike superfields to the model \({\rm B_{II}}\) or \({\rm B_{III}}\) in this work):
\[W_{PQ_{3}}\,=\,\lambda_{Q}XQ\bar{Q}+\lambda_{U}XU\bar{U}+\lambda_{E}XE\bar{E}+ \frac{\lambda_{D}}{M_{P}}X^{2}D\bar{D}+\frac{\lambda_{L}}{M_{P}}X^{2}L\bar{L}. \tag{18}\]
The information about color and the other quantum number of them is shown in Table 1. Besides, these additional chiral superfields in \({\bf 5}+\overline{\bf 5}=D+\overline{D}+L+\overline{L}\) and \({\bf 10}+\overline{\bf 10}=Q+\overline{Q}+U+\overline{U}+E+\overline{E}\) representations of the \(SU(5)\) grand unified theory [41] can preserve the unification of gauge couplings as in the MSSM [36]. Nevertheless, we do not assume that \(SU(5)\) is the unbroken gauge group in the ultraviolet.
\begin{table}
\begin{tabular}{|c c|} \hline Superfields & \(SU(3)_{c}\times SU(2)_{L}\times U(1)_{Y}\) \\ \hline \hline \(Q+\overline{Q}\) & \(({\bf 3},{\bf 2},1/6)\) + \((\overline{\bf 3},{\bf 2},-1/6)\) \\ \(U+\overline{U}\) & \(({\bf 3},{\bf 1},2/3)\) + \((\overline{\bf 3},{\bf 1},-2/3)\) \\ \(E+\overline{E}\) & \(({\bf 1},{\bf 1},-1)\) + \(({\bf 1},{\bf 1},1)\) \\ \(D+\overline{D}\) & \(({\bf 3},{\bf 1},-1/3)\) + \((\overline{\bf 3},{\bf 1},1/3)\) \\ \(L+\overline{L}\) & \(({\bf 1},{\bf 2},-1/2)\) + \(({\bf 1},{\bf 2},1/2)\) \\ \hline \end{tabular}
\end{table}
Table 1: Vectorlike pairs of chiral superfields \(\Phi+\overline{\Phi}\) added to the model and their Standard Model gauge transformation properties. These pairs will carry non-zero net PQ charges.
#### ii.2.2 Axion properties in our models
Now we deduce the axion properties of our models. Choosing the PQ charge of \(X\) to be \(Q_{X}=1\), we then get \(Q_{Y}=\frac{\alpha_{w}}{\alpha_{w}-4}\) and \(Q_{Z}=\frac{\beta_{w}}{\beta_{w}-4}\). Considering the first term of \(W_{PQ_{1}}\), the PQ charges of the MSSM Higgses should satisfy
\[Q_{H_{u}}+Q_{H_{d}}\,=\,-2. \tag{19}\]
Using the scalar notation given above, the scalars that acquire VEVs are parameterized as:
\[X=\frac{1}{\sqrt{2}}(v_{X}+\rho_{X})e^{ia_{X}/v_{X}},Y=\frac{1}{ \sqrt{2}}(v_{Y}+\rho_{Y})e^{ia_{Y}/v_{Y}},Z=\frac{1}{\sqrt{2}}(v_{Z}+\rho_{Z})e ^{ia_{Z}/v_{Z}},\] \[H_{u}^{0}=\frac{1}{\sqrt{2}}(v_{u}+\rho_{u})e^{ia_{u}/v_{u}},H_{ d}^{0}=\frac{1}{\sqrt{2}}(v_{d}+\rho_{d})e^{ia_{d}/v_{d}}, \tag{20}\]
where \(H_{u}^{0}\) and \(H_{d}^{0}\) are neutral MSSM Higgs scalars, and \(a_{X},a_{Y},a_{Z},a_{u}\) and \(a_{d}\) are pseudo-scalar bosons that contribute to the axion. In these models, the condition \(v_{X},v_{Y},v_{Z}\gg v_{u},v_{d}\) leads to an invisible axion. Imposing the orthogonality condition of Eq.(5), this model yields:
\[\frac{Q_{H_{d}}}{Q_{H_{u}}}=\frac{v_{u}^{2}}{v_{d}^{2}}\equiv\tan^{2}\beta, \tag{21}\]
where \(s_{\beta}\equiv\sin\beta=v_{u}/v\), \(c_{\beta}\equiv\cos\beta=v_{d}/v\), and \(v=\sqrt{v_{u}^{2}+v_{d}^{2}}\). Combining Eq.(19), we get
\[Q_{H_{u}}=-2c_{\beta}^{2},\qquad Q_{H_{d}}=-2s_{\beta}^{2}. \tag{22}\]
The PQ charge of the neutrino superfield \(\bar{\nu}\) is zero because of the last term of \(W_{PQ_{1}}\), so that \(Q_{\ell}=-Q_{H_{u}}=2c_{\beta}^{2}\). So far, we have obtained all the PQ charges of the MSSM superfields; see Table 2.
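These charge assignments follow mechanically from the superpotential. The helper below (an illustrative sketch, with \(Q_{q}\) left as the free input of Table 2) reproduces them and verifies both the sum rule Eq. 19 and the orthogonality condition Eq. 5, where only the Higgs doublets contribute since the singlets carry \(Y=0\).

```python
import numpy as np

def pq_charges(alpha_w, beta_w, tan_beta, Q_q=0.0):
    """PQ charges of Table 2 for given (alpha_w, beta_w), tan(beta) and Q_q."""
    c2 = 1.0 / (1.0 + tan_beta**2)             # cos^2(beta)
    s2 = 1.0 - c2                              # sin^2(beta)
    return {"X": 1.0,
            "Y": alpha_w / (alpha_w - 4.0),
            "Z": beta_w / (beta_w - 4.0),
            "Hu": -2.0*c2, "Hd": -2.0*s2,
            "q": Q_q, "l": 2.0*c2,
            "ubar": 2.0*c2 - Q_q, "dbar": 2.0*s2 - Q_q,
            "ebar": 2.0*s2 - 2.0*c2, "nubar": 0.0}

tb, v = 10.0, 246.0
Q = pq_charges(3, 2, tan_beta=tb)
v_u, v_d = v*tb/np.sqrt(1.0 + tb**2), v/np.sqrt(1.0 + tb**2)
# Eq. 19 sum rule and Eq. 5 orthogonality (Y_Hu = +1/2, Y_Hd = -1/2):
print(f"Q_Hu + Q_Hd = {Q['Hu'] + Q['Hd']:.3f}   (should be -2)")
print(f"sum_s Y_s Q_s v_s^2 = {0.5*Q['Hu']*v_u**2 - 0.5*Q['Hd']*v_d**2:.2e}")
```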
The PQ charges \(Q_{\Phi\bar{\Phi}}\) in model \(\rm E_{I}\) can also be determined without difficulty, and they are listed in Table 3.
In addition to the information about PQ charges of superfields, the anomaly coefficients \(N\) and \(E\) of our models can be deduced here
\[N\,=\,n_{g}(\frac{1}{2}2Q_{q}+\frac{1}{2}Q_{\overline{u}}+\frac{1}{2}Q_{ \overline{d}})+\frac{1}{2}(2\Sigma Q_{Q\overline{Q}})+\frac{1}{2}\Sigma Q_{U \overline{U}}+\frac{1}{2}\Sigma Q_{D\overline{D}}, \tag{23}\]
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|} \hline & \(X\) & \(Y\) & \(Z\) & \(H_{u}\) & \(H_{d}\) & \(q\) & \(\ell\) & \(\bar{u}\) & \(\bar{d}\) & \(\bar{e}\) & \(\bar{\nu}\) \\ \hline \hline \(PQ\) & \(1\) & \(\frac{\alpha_{w}}{\alpha_{w}-4}\) & \(\frac{\beta_{w}}{\beta_{w}-4}\) & \(-2c_{\beta}^{2}\) & \(-2s_{\beta}^{2}\) & \(Q_{q}\) & \(2c_{\beta}^{2}\) & \(2c_{\beta}^{2}-Q_{q}\) & \(2s_{\beta}^{2}-Q_{q}\) & \(2s_{\beta}^{2}-2c_{\beta}^{2}\) & \(0\) \\ \hline \end{tabular}
\end{table}
Table 2: The charges of the Peccei-Quinn symmetry of the superfields.
\[E = n_{g}[3(\frac{2}{3})^{2}(Q_{q}+Q_{\overline{u}})+3(-\frac{1}{3})^{2} (Q_{q}+Q_{\overline{d}})+(-1)^{2}(Q_{l}+Q_{\overline{e}})]+(\pm 1)^{2}(Q_{ \widetilde{H_{u}}}+Q_{\widetilde{H_{d}}}) \tag{24}\] \[+3[(\pm\frac{2}{3})^{2}+(\pm\frac{1}{3})^{2}]\Sigma Q_{Q\overline {Q}}+3(\pm\frac{2}{3})^{2}\Sigma Q_{U\overline{U}}+3(\pm\frac{1}{3})^{2} \Sigma Q_{D\overline{D}}\] \[+(\pm 1)^{2}\Sigma Q_{L\overline{L}}+(\pm 1)^{2}\Sigma Q_{E \overline{E}},\]
where \(n_{g}=3\) is the number of chiral quark and lepton generations. Now we have
\[f_{A} = \frac{v_{A}}{2N},v_{A}=\left[Q_{X}^{2}v_{X}^{2}+Q_{Y}^{2}v_{Y}^{2} +Q_{Z}^{2}v_{Z}^{2}+4s_{\beta}^{2}c_{\beta}^{2}v^{2}\right]^{1/2}, \tag{25}\]
with the contribution of \(4s_{\beta}^{2}c_{\beta}^{2}v^{2}\) being numerically negligible. For the models \(\rm B_{I}\), \(\rm B_{II}\) and \(\rm B_{III}\), we have \(N=3\) and \(E=6\), and the domain wall numbers \(N_{DW}\) are \(2N\), \(6N\), and \(6N\), respectively. We also obtain \(N=\frac{1}{2}\) and \(E=-\frac{2}{3}\) for model \(\rm E_{I}\); in this case the DW number is \(N_{DW}=1\), which means there is no DW problem.
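These values can be checked by directly evaluating Eqs. 23 and 24 with the charges of Tables 2 and 3. The minimal sketch below confirms \(N=3\), \(E=6\) for the base models and \(N=\frac{1}{2}\), \(E=-\frac{2}{3}\) for \(\rm E_{I}\), independently of the free charge \(Q_{q}\) and of \(\tan\beta\); the numerical inputs are arbitrary test values.

```python
def anomaly_N_E(tan_beta, Q_q, with_vectorlike=False):
    """Evaluate Eqs. 23 and 24 with the PQ charges of Tables 2 and 3."""
    c2 = 1.0/(1.0 + tan_beta**2); s2 = 1.0 - c2
    Q_l, Q_ub, Q_db, Q_eb = 2*c2, 2*c2 - Q_q, 2*s2 - Q_q, 2*s2 - 2*c2
    n_g = 3
    N = n_g*(Q_q + 0.5*Q_ub + 0.5*Q_db)
    E = (n_g*(3*(2/3)**2*(Q_q + Q_ub) + 3*(1/3)**2*(Q_q + Q_db) + (Q_l + Q_eb))
         - 2*c2 - 2*s2)                        # higgsino term Q_Hu + Q_Hd = -2
    if with_vectorlike:                        # Table 3: model E_I charges
        QQ, QU, QE, QL, QD = -1, -1, -1, -2, -2
        N += 0.5*(2*QQ) + 0.5*QU + 0.5*QD
        E += (3*((2/3)**2 + (1/3)**2)*QQ + 3*(2/3)**2*QU + 3*(1/3)**2*QD
              + QL + QE)
    return N, E

print(anomaly_N_E(10.0, 0.37))                      # -> (3.0, 6.0)
print(anomaly_N_E(3.0, -1.2, with_vectorlike=True)) # -> (0.5, -0.667)
```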
#### ii.2.3 Discrete \(R\) symmetries \(Z_{n}^{R}\) to protect \(U(1)_{PQ}\)
Since the PQ symmetry is an ungauged symmetry, it is not respected by quantum gravitational effects. As a result, there are allowed higher-dimensional terms in the superpotential which may spoil the PQ symmetry:
\[W = \frac{\kappa}{M_{P}^{p-3}}X^{i}Y^{j}Z^{p-i-j}, \tag{26}\]
with dimensionless parameter \(\kappa\). Terms like this may result in the failure of the solution to the strong CP problem, because they can displace the QCD parameter \(\theta\) away from 0. Numerous studies, e.g., Refs.[40; 42; 39], have been performed and many solutions have been put forth to address this quality problem. In these solutions, the scheme resorting to a discrete \(R\) symmetry \(Z_{n}^{R}\), which is widely used to protect the global \(U(1)_{PQ}\) symmetry, is very attractive and, therefore, is also employed in this work.
If the superpotential of a model respects an Abelian discrete \(R\) symmetry \(Z_{n}^{R}\), which can be a subgroup of an anomaly-free continuous \(U(1)\) symmetry that is spontaneously
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline & \(Q_{Q\bar{Q}}\) & \(Q_{U\bar{U}}\) & \(Q_{E\bar{E}}\) & \(Q_{L\bar{L}}\) & \(Q_{D\bar{D}}\) \\ \hline \hline PQ & \(-1\) & \(-1\) & \(-1\) & \(-2\) & \(-2\) \\ \hline \end{tabular}
\end{table}
Table 3: The charges of the Peccei-Quinn symmetry of the vectorlike superfields.
broken by a scalar field VEV of charge \(n\)[43], and such a discrete symmetry can forbid PQ-violating terms up to some mass dimension, then the PQ symmetry can be seen as an accidental consequence of the discrete symmetry and the model may give a high-quality axion. Under such an assumption, the \(Z_{n}\)\(\times\)\(U(1)_{Y}\)\(\times\)\(U(1)_{Y}\), \(Z_{n}\)\(\times\)\(SU(2)_{L}\)\(\times\)\(SU(2)_{L}\) and \(Z_{n}\)\(\times\)\(SU(3)_{C}\)\(\times\)\(SU(3)_{C}\) anomalies, which are denoted by \(A_{1}\), \(A_{2}\) and \(A_{3}\) respectively, should satisfy an anomaly-free condition involving the Green-Schwarz (GS) mechanism [44]:
\[\frac{A_{1}+m_{1}n}{5k_{1}}\,=\,\frac{A_{2}+m_{2}n}{k_{2}}\,=\,\frac{A_{3}+m_{ 3}n}{k_{3}}\,=\,\rho_{\rm GS}. \tag{27}\]
In Eq.(27), \(\rho_{\rm GS}\) is a constant, and \(m_{1}\), \(m_{2}\), \(m_{3}\) are integers. \(k_{1}\) can be arbitrary, while \(k_{2}\), \(k_{3}\) should be positive integer Kac-Moody levels. In this paper, we assume that \(k_{1}=k_{2}=k_{3}=1\), under which gauge coupling unification can be achieved [36]. Then, the anomaly-free condition Eq.(27) can be simplified as
\[\frac{1}{5}A_{1}=A_{2}=A_{3}\pmod{n}. \tag{28}\]
The expressions of \(A_{1}\), \(A_{2}\) and \(A_{3}\) read as
\[A_{1}= n_{g}\,(z_{q}+3z_{\ell}+8z_{\bar{u}}+2z_{\bar{d}}+6z_{\bar{e}}-20R)+3 \,(z_{H_{u}}+z_{H_{d}}-2R)\] \[+\Delta_{Q\bar{Q}}+3\Delta_{L\bar{L}}+8\Delta_{U\bar{U}}+2\Delta _{D\bar{D}}+6\Delta_{E\bar{E}},\] \[A_{2}= 4R+n_{g}\,(3z_{q}+z_{\ell}-4R)+z_{H_{u}}+z_{H_{d}}-2R+3\Delta_{ Q\bar{Q}}+\Delta_{L\bar{L}},\] \[A_{3}= 6R+n_{g}\,(2z_{q}+z_{\bar{u}}+z_{\bar{d}}-4R)+2\Delta_{Q\bar{Q }}+\Delta_{U\bar{U}}+\Delta_{D\bar{D}}, \tag{29}\]
and we have taken the same convention as Ref.[36]. The gauginos and the anticommuting coordinates \(\theta_{\alpha}\) have \(Z_{n}^{R}\) charge \(R\), and the superpotential has a total charge \(2R\) (mod \(n\)). Unlike the continuous \(R\) symmetries, for discrete \(R\) symmetries, \(R\) is not always 1 because we have taken all \(Z_{n}^{R}\) charges for all superfields to be integers (mod \(n\)). In Eq.(29), we have also considered the contributions from the vectorlike pairs of chiral superfields \(\Phi+\bar{\Phi}\) if they are added to the model, the contribution \(\Delta_{\Phi\bar{\Phi}}\) is equal to \(z_{\Phi}+z_{\bar{\Phi}}-2R\). For \(Z_{n}^{R}\) charges of MSSM chiral superfields, we can set the \(Z_{n}^{R}\) charge of superfield \(q\), denoted by \(z_{q}\), to be \(z_{q}^{\prime}=0\), since we can redefine the \(Z_{n}^{R}\) charges by adding a multiple of \(6Y\), where \(Y\) is the weak hypercharge. We can do such a shift because the \(U(1)_{6Y}\) is an anomaly-free symmetry, and the \(U(1)_{6Y}\) charges of MSSM superfields are shown in Table 4.
From the superpotential \(W_{PQ_{1}}\), we can obtain the \(Z_{n}^{R}\) charges of the MSSM superfields, and they are shown in Table 5. Using the charges of Table 5, for the models \(\rm B_{I},B_{II},B_{III}\) as
well as \({\rm E_{I}}\), one can check that the anomaly-free conditions Eq.(27) all can be simplified as
\[8x+3h+5R=0({\rm mod~{}n}),\] \[12x+9h-21R=0({\rm mod~{}n}). \tag{30}\]
In the cases that \(W_{PQ_{3}}\) is absent, we consider two ultraviolet completions of the theory: \(SU(5)\) GUT and Pati-Salam \(SU(4)_{C}\otimes SU(2)_{L}\otimes U(1)_{R}\) theory [45]. In the first case, the discrete gauge charges of the multiplets are required to be consistent with \(SU(5)\), therefore they need satisfy the following conditions:
\[z_{q}=z_{\bar{u}}=z_{\bar{e}}({\rm mod~{}n}),\] \[z_{\ell}=z_{\bar{d}}({\rm mod~{}n}). \tag{31}\]
Equivalently, this implies that there exists an integer \(m\) satisfying the following conditions:
\[z^{\prime}_{q}+m=z^{\prime}_{\bar{u}}+(-4)m=z^{\prime}_{\bar{e} }+6m({\rm mod~{}n}),\] \[z^{\prime}_{\ell}+(-3)m=z^{\prime}_{\bar{d}}+2m({\rm mod~{}n}). \tag{32}\]
Simplify these equations by using the charges of Table 5, then we can obtain:
\[2R-h-5m=0({\rm mod~{}n}),\] \[R-2h-2x-5m=0({\rm mod~{}n}). \tag{33}\]
The \(Z^{R}_{n}\) symmetries that satisfy these conditions have been listed in Table 6. From this
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline & \(X\) & \(H_{u}\) & \(H_{d}\) & \(q\) & \(\ell\) & \(\bar{u}\) & \(\bar{d}\) & \(\bar{e}\) & \(\bar{\nu}\) \\ \hline \hline \(z^{\prime}\) & \(x\) & \(h\) & \(2R-2x-h\) & \(0\) & \(R-h\) & \(2R-h\) & \(h+2x\) & \(2h+2x-R\) & \(R\) \\ \hline \end{tabular}
\end{table}
Table 5: The \(Z^{R}_{n}\) charges, in terms of two integers \(h\) and \(x\), of the MSSM superfields in our models.
table, we can see that the largest \(p\) is 7, and the allowed PQ-violating terms in the scalar potential may include the \(\frac{\varphi^{9}}{M_{P}^{2}}\) term, where \(\varphi\) can be \(X,Y\) or \(Z\). Hence, the axion quality problem may not be fully solved in this case. On the other hand, considering the \(SU(4)_{C}\otimes SU(2)_{L}\otimes U(1)_{R}\) embedding, we obtain the following required conditions:
\[z_{q} = z_{\ell}(\mbox{mod }n),\] \[z_{\bar{u}} = z_{\tilde{\nu}}(\mbox{mod }n),\] \[z_{\tilde{d}} = z_{\tilde{e}}(\mbox{mod }n). \tag{34}\]
Similarly, this implies that there exists an integer \(m\) satisfying the following conditions:
\[z_{q}^{\prime}+m=z_{\ell}^{\prime}+(-3)m(\mbox{mod n}),\] \[z_{\tilde{u}}^{\prime}+(-4)m=z_{\tilde{\nu}}^{\prime}(\mbox{mod n}),\] \[z_{\tilde{d}}^{\prime}+2m=z_{\tilde{e}}^{\prime}+6m(\mbox{mod n}). \tag{35}\]
Using the charges of Table 5, we can obtain:
\[R-h-4m = 0(\mbox{mod n}). \tag{36}\]
The \(Z_{n}^{R}\) symmetries that satisfy these conditions have been listed in Table 7, and the largest \(p\) is 11 or 13 for these models. As stated in Ref.[36], if PQ-violating superpotential terms with \(p=8,9,10,11\) or \(12\) are present, one typically should have \(f_{A}\lesssim 4\times 10^{9},3\times 10^{10},10^{11},4\times 10^{11}\) or \(10^{12}\)GeV, respectively. However, there are many ways for \(f_{A}\) to evade these constraints. The corresponding coupling(s) \(\kappa\), for example, may happen
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline \(\alpha_{\rm w}\) & \(\beta_{\rm w}\) & \(R\) & \(z_{X}\) & \(z_{Y}\) & \(z_{Z}\) & \(h\) & \(n\) & \(p\) \\ \hline
3 & 2 & 1 & 11 & 17 & 14 & 1 & 24 & 7 \\
3 & 2 & 1 & 23 & 5 & 14 & 1 & 24 & 7 \\
3 & 1 & 1 & 11 & 17 & 21 & 1 & 24 & 6 \\
3 & 1 & 1 & 23 & 5 & 9 & 1 & 24 & 6 \\
2 & 1 & 1 & 11 & 14 & 5 & 1 & 24 & 7 \\
2 & 1 & 1 & 23 & 14 & 17 & 1 & 24 & 7 \\ \hline \end{tabular}
\end{table}
Table 6: Some \(Z_{n}^{R}\) symmetries can be made consistent with \(SU(5)\) embedding.
to have a small magnitude so that \(f_{A}\) could be larger than \(4\times 10^{11}\)GeV, even if \(p=11\) is present. Following Ref.[36], we will therefore not commit to a specific requirement for \(f_{A}\), with the understanding that smaller \(f_{A}\) is safer in some sense.
In the extended model \(\rm E_{I}\), the \(Z_{n}^{R}\) symmetries found for the base model \(\rm B_{I}\) also work and give the same \(p\). However, we have no insight into the ultraviolet completion other than the solution to the domain wall problem. We also note that some superpotential terms that violate baryon or lepton number would predict very rapid proton decay, such as \(H_{u}\ell,q\ell\bar{d},\ell\ell\bar{e},\bar{u}\bar{d}\bar{d},\frac{1}{M_{P}}qqq\ell\) and \(\frac{1}{M_{P}}\bar{u}\bar{u}\bar{d}\bar{e}\); all of the \(Z_{n}^{R}\) symmetries we find safely forbid these superpotential terms.
### Physical spectrum of our models
In our models, there are some new particles beyond those of the MSSM, including two pseudoscalar particles \(A_{i}^{\prime}\), three scalar particles \(S_{i}\) and three Majorana spinors
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|} \hline \multicolumn{5}{|c|}{\(\alpha_{\rm w}=3,\beta_{\rm w}=2,R=1,x=-3\)} & \multicolumn{5}{c|}{\(\alpha_{\rm w}=3,\beta_{\rm w}=1,R=1,x=-3\)} & \multicolumn{5}{c|}{\(\alpha_{\rm w}=2,\beta_{\rm w}=1,R=1,x=-3\)} \\ \hline \(z_{Y}\) & \(z_{Z}\) & \(h\) & \(n\) & \(p\) & \(z_{Y}\) & \(z_{Z}\) & \(h\) & \(n\) & \(p\) & \(z_{Y}\) & \(z_{Z}\) & \(h\) & \(n\) & \(p\) \\ \hline
11 & 4 & 31 & 37 & 8 & 11 & 31 & 21 & 44 & 8 & 4 & 31 & 21 & 44 & 9 \\
11 & 4 & 19 & 38 & 9 & 11 & 21 & 45 & 58 & 8 & 4 & 18 & 39 & 49 & 11 \\
11 & 24 & 33 & 40 & 9 & 11 & 41 & 26 & 59 & 8 & 4 & 37 & 24 & 53 & 9 \\
11 & 4 & 20 & 41 & 9 & 11 & 43 & 27 & 62 & 8 & 4 or 32 & 39 & 25 & 56 & 11 \\
11 & 4 or 26 & 21 & 44 & 8 & 11 & 24 & 51 & 67 & 9 & 4 & 21 & 45 & 58 & 10 \\
11 & 27 & 37 & 46 & 8 & 11 & 51 & 31 & 74 & 10 & 4 & 22 & 47 & 61 & 10 \\
11 & 4 & 39 & 49 & 11 & 11 & 53 & 32 & 77 & 11 & 4 & 43 & 27 & 62 & 8 \\
11 & 4 & 41 & 52 & 10 & 11 & 28 & 59 & 79 & 13 & 36 & 23 & 49 & 64 & 8 \\
11 & 4 & 43 & 55 & 9 & 11 & 59 & 35 & 86 & 8 & 4 & 45 & 28 & 65 & 10 \\
11 & 4 or 32 & 25 & 56 & 11 & 11 & 31 & 65 & 88 & 12 & 4 & 24 & 51 & 67 & 9 \\ \hline \end{tabular}
\end{table}
Table 7: Some \(Z_{n}^{R}\) symmetries can be made consistent with a Pati-Salam \(SU(4)_{C}\otimes SU(2)_{L}\otimes U(1)_{R}\) embedding. Under these symmetries, the allowed higher dimensional terms all can satisfy \(p\geq 8\).
\(\tilde{a}_{i}\) with nonzero masses.
The tadpole equations involving \(X,Y\) and \(Z\) are
\[m_{X}^{2} + (\frac{1}{4M_{P}^{2}v_{X}^{4}v_{Y}^{2\alpha_{\rm w}}v_{Z}^{2\beta_{ \rm w}}})\left\{\lambda_{1}^{2}v_{X}^{2\alpha_{\rm w}}v_{Y}^{6}v_{Z}^{2\beta_{ \rm w}}\alpha_{\rm w}\left[v_{X}^{2}(-4+\alpha_{\rm w})^{2}+v_{Y}^{2}(-1+\alpha _{\rm w})\alpha_{\rm w}\right]\right.\] \[+ \lambda_{1}\lambda_{2}v_{X}^{\alpha_{\rm w}+\beta_{\rm w}}v_{Y}^{ 4+\alpha_{\rm w}}v_{Z}^{4+\beta_{\rm w}}\alpha_{\rm w}\beta_{\rm w}(-2+\alpha _{\rm w}+\beta_{\rm w})-2M_{P}v_{X}^{2}v_{Y}^{\alpha_{\rm w}}v_{Z}^{\beta_{\rm w }}(a_{1}v_{X}^{\alpha_{\rm w}}v_{Y}^{4}v_{Z}^{\beta_{\rm w}}\alpha_{\rm w}\] \[+ \left.a_{2}v_{X}^{\beta_{\rm w}}v_{Y}^{\alpha_{\rm w}}v_{Z}^{4} \beta_{\rm w})+\lambda_{2}^{2}v_{X}^{2\beta_{\rm w}}v_{Y}^{2\alpha_{\rm w}}v_{ Z}^{6}\beta_{\rm w}\left[v_{X}^{2}(-4+\beta_{\rm w})^{2}+v_{Z}^{2}(-1+\beta_{\rm w })\beta_{\rm w}\right]\right\}=0,\] \[m_{Y}^{2} - \frac{1}{4M_{P}^{2}}\left\{v_{X}^{-2+\alpha_{\rm w}}v_{Y}^{2-2 \alpha_{\rm w}}v_{Z}^{-\beta_{\rm w}}(-4+\alpha_{\rm w})\left[-2a_{1}M_{P}v_{ X}^{2}v_{Y}^{\alpha_{\rm w}}v_{Z}^{\beta_{\rm w}}\right.\right. \tag{37}\] \[+ \left.\left.\lambda_{1}^{2}v_{X}^{\alpha_{\rm w}}v_{Y}^{2}v_{Z}^{ \beta_{\rm w}}\left(v_{Y}^{2}{\alpha_{\rm w}}^{2}+v_{X}^{2}\left(12-7\alpha_{ \rm w}+{\alpha_{\rm w}}^{2}\right)\right)+\lambda_{1}\lambda_{2}v_{X}^{\beta_ {\rm w}}v_{Y}^{\alpha_{\rm w}}v_{Z}^{4}\alpha_{\rm w}\beta_{\rm w}\right]\right\}=0,\] \[m_{Z}^{2} - \frac{1}{4M_{P}^{2}}\left\{v_{X}^{-2+\beta_{\rm w}}v_{Y}^{-\alpha _{\rm w}}v_{Z}^{2-2\beta_{\rm w}}(-4+\beta_{\rm w})\left[-2a_{2}M_{P}v_{X}^{2} v_{Y}^{\alpha_{\rm w}}v_{Z}^{\beta_{\rm w}}\right.\right.\] \[+ \left.\left.\left.\lambda_{1}\lambda_{2}v_{X}^{\alpha_{\rm w}}v_{Y }^{4}v_{Z}^{\beta_{\rm w}}\alpha_{\rm w}\beta_{\rm w}+\lambda_{2}^{2}v_{X}^{ \beta_{\rm w}}v_{Y}^{\alpha_{\rm w}}v_{Z}^{2}\left(v_{Z}^{2}{\beta_{\rm w}}^{ 2}+v_{X}^{2}\left(12-7\beta_{\rm w}+{\beta_{\rm w}}^{2}\right)\right)\right] \right\}=0.\]
The pseudoscalars \(A_{i}^{\prime}\) as well as the axion \(A\) are made up of \(a_{X}\), \(a_{Y}\) and \(a_{Z}\). The mass squared mixing matrix is diagonalized by the matrix \(Z^{A}\):
\[Z^{A}\left(\begin{array}{ccc}M_{a_{X}a_{X}}^{2}&M_{a_{X}a_{Y}}^{2}&M_{a_{X} a_{Z}}^{2}\\ M_{a_{Y}a_{X}}^{2}&M_{a_{Y}a_{Y}}^{2}&M_{a_{Y}a_{Z}}^{2}\\ M_{a_{Z}a_{X}}^{2}&M_{a_{Z}a_{Y}}^{2}&M_{a_{Z}a_{Z}}^{2}\end{array}\right)Z^{A ^{T}}=diag\left(0,m_{A_{1}^{\prime}}^{2},m_{A_{2}^{\prime}}^{2}\right), \tag{38}\]
where \(M_{a_{X}a_{X}}^{2},\ M_{a_{Y}a_{Y}}^{2},\ M_{a_{Z}a_{Z}}^{2},\ M_{a_{X}a_{Y}}^{2}, \ M_{a_{Y}a_{X}}^{2},\ M_{a_{X}a_{Z}}^{2},\ M_{a_{Z}a_{X}}^{2},\ M_{a_{Y}a_{Z}}^ {2}\) and \(M_{a_{Z}a_{Y}}^{2}\) are
\[M_{a_{X}a_{X}}^{2} = v_{Y}^{-\alpha_{\rm w}}v_{Z}^{-\beta_{\rm w}}[2a_{1}M_{P}v_{X}^{ 2+\alpha_{\rm w}}v_{Y}^{4}v_{Z}^{\beta_{\rm w}}{\alpha_{\rm w}}^{2}-v_{X}^{ \beta_{\rm w}}v_{Z}^{4}\beta_{\rm w}(\lambda_{1}\lambda_{2}v_{X}^{\alpha_{\rm w }}v_{Y}^{4}\alpha_{\rm w}(\alpha_{\rm w}-\beta_{\rm w})^{2}\] \[-2a_{2}M_{P}v_{X}^{2}v_{Y}^{\alpha_{\rm w}}\beta_{\rm w})]/(4M_{P}^ {2}v_{X}^{4}),\] \[M_{a_{Y}a_{Y}}^{2} = \frac{v_{X}^{-2+\alpha_{\rm w}}v_{Y}^{2-\alpha_{\rm w}}v_{Z}^{- \beta_{\rm w}}(-4+\alpha_{\rm w})^{2}\left(2a_{1}M_{P}v_{X}^{2}v_{Z}^{\beta_{ \rm w}}-\lambda_{1}\lambda_{2}v_{X}^{\beta_{\rm w}}v_{Z}^{4}\alpha_{\rm w} \beta_{\rm w}\right)}{4M_{P}^{2}},\] \[M_{a_{Z}a_{Z}}^{2} = \frac{v_{X}^{-2+\beta_{\rm w}}v_{Y}^{-\alpha_{\rm w}}v_{Z}^{2- \beta_{\rm w}}(-4+\beta_{\rm w})^{2}\left(2a_{2}M_{P}v_{X}^{2}v_{Y}^{\alpha_{ \rm w}}-\lambda_{1}\lambda_{2}v_{X}^{\alpha_{\rm w}}v_{Y}^{4}\alpha_{\rm w} \beta_{\rm w}\right)},\] \[M_{a_{X}a_{Y}}^{2} = M_{a_{Y}a_{X}}^{2}=v_{X}^{-3+\alpha_{\rm w}}v_{Y}^{3-\alpha_{\rm w }}v_{Z}^{-\beta_{\rm w}}(-4+\alpha_{\rm w})\alpha_{\rm w}[-2a_{1}M_{P}v_{X}^{2}v_{Z}^{ \beta_{\rm w}}\] \[-\lambda_{1}\lambda_{2}v_{X}^{\beta_{\rm w}}v_{Z}^{4}\beta_{\rm w }(-\alpha_{\rm w}+\beta_{\rm w})]/(4M_{P}^{2}),\] \[M_{a_{X}a_{Z}}^{2} = M_{a_{Z}a_{X}}^{2}=v_{X}^{-3+\beta_{\rm w}}v_{Y}^{-\alpha_{\rm w }}v_{Z}^{3-\beta_{\rm w}}(-4+\beta_{\rm w})\beta_{\rm w}[-2a_{2}M_{P}v_{X}^{2}v_{Y}^{ \alpha_{\rm w}}\] \[-\lambda_{1}\lambda_{2}v_{X}^{\alpha_{\rm w}}v_{Y}^{4}\alpha_{\rm w }(\alpha_{\rm w}-\beta_{\rm w})]/(4M_{P}^{2}),\] \[M_{a_{Y}a_{Z}}^{2} = M_{a_{Z}a_{Y}}^{2}=\frac{\lambda_{1}\lambda_{2}v_{X}^{-2+\alpha_{ \rm w}+\beta_{\rm w}}v_{Y}^{3-\alpha_{\rm w}}v_{Z}^{3-\beta_{\rm w}}(-4+ \alpha_{\rm w})\alpha_{\rm w}(-4+\beta_{\rm w})\beta_{\rm w}}{4M_{P}^{2}}. \tag{39}\]
The axion mass eigenstate \(A\approx\frac{Q_{X}v_{X}}{v_{A}}a_{X}+\frac{Q_{Y}v_{Y}}{v_{A}}a_{Y}+\frac{Q_{Z}v_{Z}}{v_{A}}a_{Z}\), so that we arrive at \(Z_{11}^{A}=\frac{Q_{X}v_{X}}{v_{A}}\), \(Z_{12}^{A}=\frac{Q_{Y}v_{Y}}{v_{A}}\) and \(Z_{13}^{A}=\frac{Q_{Z}v_{Z}}{v_{A}}\). The scalar particles \(S_{i}\), referred to as
saxions in the following, are made up of \(\rho_{X},\rho_{Y}\) and \(\rho_{Z}\). The mass squared mixing matrix is diagonalized by the matrix \(Z^{S}\):
\[Z^{S}\left(\begin{array}{ccc}M^{2}_{\rho_{X}\rho_{X}}&M^{2}_{\rho_{X}\rho_{Y}}& M^{2}_{\rho_{X}\rho_{Z}}\\ M^{2}_{\rho_{Y}\rho_{X}}&M^{2}_{\rho_{Y}\rho_{Y}}&M^{2}_{\rho_{Y}\rho_{Z}}\\ M^{2}_{\rho_{Z}\rho_{X}}&M^{2}_{\rho_{Z}\rho_{Y}}&M^{2}_{\rho_{Z}\rho_{Z}}\\ \end{array}\right)Z^{S^{T}}=diag\left(m^{2}_{S_{i}}\right),i=1-3. \tag{40}\]
where \(M^{2}_{\rho_{X}\rho_{X}},M^{2}_{\rho_{X}\rho_{Y}},M^{2}_{\rho_{X}\rho_{Z}},M^{ 2}_{\rho_{Y}\rho_{X}},M^{2}_{\rho_{Y}\rho_{Y}},M^{2}_{\rho_{Y}\rho_{Z}},M^{2}_ {\rho_{Z}\rho_{X}},M^{2}_{\rho_{Z}\rho_{Y}}\) and \(M^{2}_{\rho_{Z}\rho_{Z}}\) are
\[M^{2}_{\rho_{X}\rho_{X}}= v^{-2\alpha_{\rm w}}_{Y}v^{-2\beta_{\rm w}}_{Z}\{2\lambda_{1}^{2}v^{2\alpha_{\rm w}}_{X}v^{6}_{Y}v^{2\beta_{\rm w}}_{Z}(-1+\alpha_{\rm w})\alpha_{\rm w}(v^{2}_{X}(-4+\alpha_{\rm w})^{2}+v^{2}_{Y}(-2+\alpha_{\rm w})\alpha_{\rm w})\]
\[+\lambda_{1}\lambda_{2}v^{\alpha_{\rm w}+\beta_{\rm w}}_{X}v^{4+\alpha_{\rm w}}_{Y}v^{4+\beta_{\rm w}}_{Z}\alpha_{\rm w}\beta_{\rm w}(8+{\alpha_{\rm w}}^{2}+2\alpha_{\rm w}(-3+\beta_{\rm w})-6\beta_{\rm w}+{\beta_{\rm w}}^{2})\]
\[+2v^{\alpha_{\rm w}}_{Y}[-a_{1}M_{P}v^{2+\alpha_{\rm w}}_{X}v^{4}_{Y}v^{2\beta_{\rm w}}_{Z}(-2+\alpha_{\rm w})\alpha_{\rm w}\]
\[+v^{\beta_{\rm w}}_{X}v^{\alpha_{\rm w}}_{Y}v^{4}_{Z}\beta_{\rm w}(-a_{2}M_{P}v^{2}_{X}v^{2\beta_{\rm w}}_{Z}(-2+\beta_{\rm w})+\lambda^{2}_{2}v^{\beta_{\rm w}}_{X}v^{2}_{Z}(-1+\beta_{\rm w})(v^{2}_{X}(-4+\beta_{\rm w})^{2}\]
\[+v^{2}_{Z}(-2+\beta_{\rm w})\beta_{\rm w}))]\}/(4M^{2}_{P}v^{4}_{X}),\]
\[M^{2}_{\rho_{Y}\rho_{Y}}= \left\{v^{-2+\alpha_{\rm w}}_{X}v^{2-2\alpha_{\rm w}}_{Y}v^{-\beta_{\rm w}}_{Z}(-4+\alpha_{\rm w})\left[-2a_{1}M_{P}v^{2}_{X}v^{\alpha_{\rm w}}_{Y}v^{\beta_{\rm w}}_{Z}(-2+\alpha_{\rm w})\right.\right.\]
\[+2\lambda^{2}_{1}v^{\alpha_{\rm w}}_{X}v^{2}_{Y}v^{\beta_{\rm w}}_{Z}(-3+\alpha_{\rm w})\left(v^{2}_{Y}{\alpha_{\rm w}}^{2}+v^{2}_{X}\left(8-6\alpha_{\rm w}+{\alpha_{\rm w}}^{2}\right)\right)\]
\[\left.\left.+\lambda_{1}\lambda_{2}v^{\beta_{\rm w}}_{X}v^{\alpha_{\rm w}}_{Y}v^{4}_{Z}(-2+\alpha_{\rm w})\alpha_{\rm w}\beta_{\rm w}\right]\right\}/\left(4M^{2}_{P}\right),\]
\[M^{2}_{\rho_{Z}\rho_{Z}}= \left\{v^{-2+\beta_{\rm w}}_{X}v^{-\alpha_{\rm w}}_{Y}v^{2-2\beta_{\rm w}}_{Z}(-4+\beta_{\rm w})\left[-2a_{2}M_{P}v^{2}_{X}v^{\alpha_{\rm w}}_{Y}v^{\beta_{\rm w}}_{Z}(-2+\beta_{\rm w})\right.\right.\]
\[+\lambda_{1}\lambda_{2}v^{\alpha_{\rm w}}_{X}v^{4}_{Y}v^{\beta_{\rm w}}_{Z}\alpha_{\rm w}(-2+\beta_{\rm w})\beta_{\rm w}\]
\[\left.\left.+2\lambda^{2}_{2}v^{\beta_{\rm w}}_{X}v^{\alpha_{\rm w}}_{Y}v^{2}_{Z}(-3+\beta_{\rm w})\left(v^{2}_{Z}{\beta_{\rm w}}^{2}+v^{2}_{X}\left(8-6\beta_{\rm w}+{\beta_{\rm w}}^{2}\right)\right)\right]\right\}/\left(4M^{2}_{P}\right),\]
\[M^{2}_{\rho_{X}\rho_{Y}}= M^{2}_{\rho_{Y}\rho_{X}}=-\left\{v^{-3+\alpha_{\rm w}}_{X}v^{3-2\alpha_{\rm w}}_{Y}v^{-\beta_{\rm w}}_{Z}(-4+\alpha_{\rm w})\alpha_{\rm w}\left[-2a_{1}M_{P}v^{2}_{X}v^{\alpha_{\rm w}}_{Y}v^{\beta_{\rm w}}_{Z}\right.\right.\]
\[+2\lambda^{2}_{1}v^{\alpha_{\rm w}}_{X}v^{2}_{Y}v^{\beta_{\rm w}}_{Z}\left(v^{2}_{Y}(-1+\alpha_{\rm w})\alpha_{\rm w}+v^{2}_{X}\left(12-7\alpha_{\rm w}+{\alpha_{\rm w}}^{2}\right)\right)\]
\[\left.\left.+\lambda_{1}\lambda_{2}v^{\beta_{\rm w}}_{X}v^{\alpha_{\rm w}}_{Y}v^{4}_{Z}\beta_{\rm w}(-2+\alpha_{\rm w}+\beta_{\rm w})\right]\right\}/\left(4M^{2}_{P}\right),\]
\[M^{2}_{\rho_{X}\rho_{Z}}= M^{2}_{\rho_{Z}\rho_{X}}=-\left\{v^{-3+\beta_{\rm w}}_{X}v^{-\alpha_{\rm w}}_{Y}v^{3-2\beta_{\rm w}}_{Z}(-4+\beta_{\rm w})\beta_{\rm w}\left[-2a_{2}M_{P}v^{2}_{X}v^{\alpha_{\rm w}}_{Y}v^{\beta_{\rm w}}_{Z}\right.\right.\]
\[+\lambda_{1}\lambda_{2}v^{\alpha_{\rm w}}_{X}v^{4}_{Y}v^{\beta_{\rm w}}_{Z}\alpha_{\rm w}(-2+\alpha_{\rm w}+\beta_{\rm w})+2\lambda^{2}_{2}v^{\beta_{\rm w}}_{X}v^{\alpha_{\rm w}}_{Y}v^{2}_{Z}\left(v^{2}_{Z}(-1+\beta_{\rm w})\beta_{\rm w}\right.\]
\[\left.\left.\left.+v^{2}_{X}\left(12-7\beta_{\rm w}+{\beta_{\rm w}}^{2}\right)\right)\right]\right\}/\left(4M^{2}_{P}\right),\]
\[M^{2}_{\rho_{Y}\rho_{Z}}= M^{2}_{\rho_{Z}\rho_{Y}}=\frac{\lambda_{1}\lambda_{2}v^{-2+\alpha_{\rm w}+\beta_{\rm w}}_{X}v^{3-\alpha_{\rm w}}_{Y}v^{3-\beta_{\rm w}}_{Z}(-4+\alpha_{\rm w})\alpha_{\rm w}(-4+\beta_{\rm w})\beta_{\rm w}}{4M^{2}_{P}}. \tag{41}\]
Finally, the Majorana spinors \(\tilde{a}_{1}\), \(\tilde{a}_{2}\) and \(\tilde{a}_{3}\) are the so-called axinos. In the basis (\(\tilde{a}_{X}\), \(\tilde{a}_{Y}\) and
\(\tilde{a}_{Z}\)), the axino mass mixing matrix is diagonalized by \(Z^{\tilde{a}}\):
\[Z^{\tilde{a}}\left(\begin{array}{ccc}M_{\tilde{a}_{X}\tilde{a}_{X}}&M_{\tilde{ a}_{X}\tilde{a}_{Y}}&M_{\tilde{a}_{X}\tilde{a}_{Z}}\\ M_{\tilde{a}_{Y}\tilde{a}_{X}}&M_{\tilde{a}_{Y}\tilde{a}_{Y}}&M_{\tilde{a}_{Y} \tilde{a}_{Z}}\\ M_{\tilde{a}_{Z}\tilde{a}_{X}}&M_{\tilde{a}_{Z}\tilde{a}_{Y}}&M_{\tilde{a}_{Z} \tilde{a}_{Z}}\end{array}\right)Z^{\tilde{a}^{T}}=diag\left(m_{\tilde{a}_{i}} \right),i=1-3. \tag{42}\]
where \(M_{\tilde{a}_{X}\tilde{a}_{X}},M_{\tilde{a}_{X}\tilde{a}_{Y}},M_{\tilde{a}_{X} \tilde{a}_{Z}},M_{\tilde{a}_{Y}\tilde{a}_{X}},M_{\tilde{a}_{Y}\tilde{a}_{Y}},M_ {\tilde{a}_{Y}\tilde{a}_{Z}},M_{\tilde{a}_{Z}\tilde{a}_{X}},M_{\tilde{a}_{Z} \tilde{a}_{Y}}\) and \(M_{\tilde{a}_{Z}\tilde{a}_{Z}}\) are
\[M_{\tilde{a}_{X}\tilde{a}_{X}} = \frac{\lambda_{1}v_{X}^{-2+\alpha_{\rm w}}v_{Y}^{4-\alpha_{\rm w} }(-1+\alpha_{\rm w})\alpha_{\rm w}+\lambda_{2}v_{X}^{-2+\beta_{\rm w}}v_{Z}^{4 -\beta_{\rm w}}(-1+\beta_{\rm w})\beta_{\rm w}}{2M_{P}},\] \[M_{\tilde{a}_{Y}\tilde{a}_{Y}} = \frac{\lambda_{1}v_{X}^{\alpha_{\rm w}}v_{Y}^{2-\alpha_{\rm w}}( 3-\alpha_{\rm w})(4-\alpha_{\rm w})}{2M_{P}},\] \[M_{\tilde{a}_{Z}\tilde{a}_{Z}} = \frac{\lambda_{2}v_{X}^{\beta_{\rm w}}v_{Z}^{2-\beta_{\rm w}}(3- \beta_{\rm w})(4-\beta_{\rm w})}{2M_{P}},\] \[M_{\tilde{a}_{X}\tilde{a}_{Y}} = M_{\tilde{a}_{Y}\tilde{a}_{X}}=\frac{\lambda_{1}v_{X}^{-1+\alpha _{\rm w}}v_{Y}^{3-\alpha_{\rm w}}(4-\alpha_{\rm w})\alpha_{\rm w}}{2M_{P}},\] \[M_{\tilde{a}_{X}\tilde{a}_{Z}} = M_{\tilde{a}_{Z}\tilde{a}_{X}}=\frac{\lambda_{2}v_{X}^{-1+\beta _{\rm w}}v_{Z}^{3-\beta_{\rm w}}(4-\beta_{\rm w})\beta_{\rm w}}{2M_{P}},\] \[M_{\tilde{a}_{Y}\tilde{a}_{Z}} = M_{\tilde{a}_{Z}\tilde{a}_{Y}}=0. \tag{43}\]
From this matrix, we can easily check that there would be a massless axino if \(\alpha_{\rm w}=\beta_{\rm w}=3\).
### Effective theory of our models
In order to study the phenomenology, we can write the low-energy superpotential of the model as
\[W = \mu(1+\frac{\zeta_{\hat{\cal S}_{i}}}{v_{A}/\sqrt{2}}\hat{\cal S }_{i})\hat{H}_{u}\hat{H}_{d}. \tag{44}\]
The supermultiplets \(\hat{\cal S}_{i}\), which are usually the ones of interest, are related to the superfields \(\hat{X}\sim\ (\frac{v_{X}+\rho_{X}+ia_{X}}{\sqrt{2}},\tilde{a}_{X})\), \(\hat{Y}\sim\ (\frac{v_{Y}+\rho_{Y}+ia_{Y}}{\sqrt{2}},\tilde{a}_{Y})\) and \(\hat{Z}\sim\ (\frac{v_{Z}+\rho_{Z}+ia_{Z}}{\sqrt{2}},\tilde{a}_{Z})\):
\[\left(\begin{array}{c}\hat{X}\\ \hat{Y}\\ \hat{Z}\end{array}\right) = \left(\begin{array}{c}\frac{v_{A}}{\sqrt{2}}\frac{Z_{11}^{A}}{Q_{X}}\\ \frac{v_{A}}{\sqrt{2}}\frac{Z_{12}^{A}}{Q_{Y}}\\ \frac{v_{A}}{\sqrt{2}}\frac{Z_{13}^{A}}{Q_{Z}}\end{array}\right)+\left(\begin{array}{ccc}Z_{11}^{\cal S}&Z_{12}^{\cal S}&Z_{13}^{\cal S}\\ Z_{12}^{\cal S}&Z_{22}^{\cal S}&Z_{23}^{\cal S}\\ Z_{13}^{\cal S}&Z_{23}^{\cal S}&Z_{33}^{\cal S}\end{array}\right)^{T}\left(\begin{array}{c}\hat{\cal S}_{1}\\ \hat{\cal S}_{2}\\ \hat{\cal S}_{3}\end{array}\right) \tag{45}\] \[= \left(\begin{array}{c}\frac{v_{A}}{\sqrt{2}}\frac{Z_{11}^{A}}{Q_{X}}\exp\{\sum_{i=1}^{3}\frac{Z_{1i}^{\cal S}}{Z_{11}^{A}}\frac{Q_{X}\hat{\cal S}_{i}}{v_{A}/\sqrt{2}}\}\\ \frac{v_{A}}{\sqrt{2}}\frac{Z_{12}^{A}}{Q_{Y}}\exp\{\sum_{i=1}^{3}\frac{Z_{2i}^{\cal S}}{Z_{12}^{A}}\frac{Q_{Y}\hat{\cal S}_{i}}{v_{A}/\sqrt{2}}\}\\ \frac{v_{A}}{\sqrt{2}}\frac{Z_{13}^{A}}{Q_{Z}}\exp\{\sum_{i=1}^{3}\frac{Z_{3i}^{\cal S}}{Z_{13}^{A}}\frac{Q_{Z}\hat{\cal S}_{i}}{v_{A}/\sqrt{2}}\}\end{array}\right).\]
In this way, the effective superpotential coefficients can be expressed as \(\zeta_{\hat{\mathcal{S}}_{i}}=2\frac{Z_{1i}^{\hat{\mathcal{S}}}}{Z_{11}^{A}}Q_{X}\). When \(Z^{\hat{\mathcal{S}}}=Z^{A}\), \(\hat{\mathcal{S}}_{1}\) is just the axion supermultiplet \(\hat{A}\sim\left(\frac{S+iA}{\sqrt{2}},\tilde{a}\right)\), and the corresponding \(\zeta_{\hat{A}}=2Q_{X}\). Here \(S\) and \(\tilde{a}\) are typically not the mass eigenstates. To be concrete, they are
\[S = \sum_{i=1}^{3}Z_{1i}^{A}Z_{1i}^{S}S_{1}+\sum_{i=1}^{3}Z_{1i}^{A}Z_{2i}^{S}S_{2}+\sum_{i=1}^{3}Z_{1i}^{A}Z_{3i}^{S}S_{3}\equiv C_{S_{1}}S_{1}+C_{S_{2}}S_{2}+C_{S_{3}}S_{3}, \tag{46}\] \[\tilde{a} = \sum_{i=1}^{3}Z_{1i}^{A}Z_{1i}^{\tilde{a}}\tilde{a}_{1}+\sum_{i=1}^{3}Z_{1i}^{A}Z_{2i}^{\tilde{a}}\tilde{a}_{2}+\sum_{i=1}^{3}Z_{1i}^{A}Z_{3i}^{\tilde{a}}\tilde{a}_{3}\equiv C_{\tilde{a}_{1}}\tilde{a}_{1}+C_{\tilde{a}_{2}}\tilde{a}_{2}+C_{\tilde{a}_{3}}\tilde{a}_{3}. \tag{47}\]
For \(Z^{\hat{\mathcal{S}}}=Z^{S}\), the \(\hat{\mathcal{S}}_{i}\) are the saxion supermultiplets \(\hat{S}_{i}\sim\left(\frac{S_{i}}{\sqrt{2}}\right)\), while for \(Z^{\hat{\mathcal{S}}}=Z^{\tilde{a}}\) the \(\hat{\mathcal{S}}_{i}\) are the axino supermultiplets \(\hat{\tilde{a}}_{i}\sim(\tilde{a}_{i})\). Based on these symbolic conventions, the superpotential can be rewritten as
\[W = \mu\left\{1+\frac{1}{v_{A}/\sqrt{2}}\left(\zeta_{\hat{A}}\hat{A}+ \zeta_{\hat{S}_{i}}\hat{S}_{i}+\zeta_{\hat{\hat{a}}_{i}}\hat{\tilde{a}}_{i} \right)+\cdots\right\}\hat{H}_{u}\hat{H}_{d}, \tag{48}\]
note that these terms are collected together only for notational convenience; in practice only one or two of them appear at a time, depending on which particles are being studied.
Finally, the Kähler potential gives interactions between the axion and the saxions:
\[\mathcal{L} = (1+\frac{\sqrt{2}\xi}{v_{A}/\sqrt{2}}S)[\frac{1}{2}\partial^{\mu }A\partial_{\mu}A+\frac{1}{2}\partial^{\mu}S\partial_{\mu}S], \tag{49}\]
where \(\xi=\Sigma_{i}Q_{i}^{3}v_{i}^{2}/v_{A}^{2}\) is expected to be \(\mathcal{O}(1)\) but could also be as small as zero [46]. Using Eq.(49), we find the partial width for \(S_{i}\to AA\) to be
\[\Gamma(S_{i}\to AA)\approx\frac{(C_{S_{i}}\xi)^{2}m_{S_{i}}^{3}}{64\pi(v_{A}/ \sqrt{2})^{2}}. \tag{50}\]
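For orientation on the magnitudes involved, the following minimal sketch evaluates Eq.(50) numerically; the inputs (\(C_{S_i}=\xi=1\), \(m_{S_i}=5\) TeV, and \(f_A=3\times 10^{11}\) GeV with \(N=1/2\)) are illustrative assumptions, not values fixed by the text.

```python
import math

def gamma_S_to_AA(C_S, xi, m_S, vA_over_sqrt2):
    """Partial width for S_i -> AA, Eq. (50); all inputs and output in GeV."""
    return (C_S * xi)**2 * m_S**3 / (64.0 * math.pi * vA_over_sqrt2**2)

# illustrative inputs (assumed): C_S = xi = 1, m_S = 5 TeV,
# v_A/sqrt(2) = sqrt(2) N f_A with N = 1/2 and f_A = 3e11 GeV
width = gamma_S_to_AA(1.0, 1.0, 5.0e3, 3.0e11 / math.sqrt(2))
print(f"Gamma(S -> AA) ~ {width:.2e} GeV")   # ~1.4e-14 GeV
```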
## III Dark matter candidates in our models
In this section, we consider the higgsino-like neutralino and the axion as dark matter candidates. Beginning with the assumption that the higgsino-like neutralino is the LSP, we discuss the constraints from the anomalous magnetic moment of the muon \(a_{\mu}\) and from the XENON1T direct detection experiment on the relic density of the LSP. Then, assuming that the axion makes up the rest of the dark matter and saturates the observed relic density, we review the CMM and the KMM, so that we can determine which production mechanism is feasible for our models.
### Constraints from the \(a_{\mu}\) measurements and the XENON1T experiment
In our models, the mixing between fields that belong to the MSSM and those that do not is very tiny, and the interactions between MSSM and non-MSSM particles are suppressed by \(\frac{1}{f_{A}}\). These newly introduced particles affect low-energy phenomenology, but their impact is extremely small and can be safely ignored. Therefore, our models are almost identical to the MSSM at low energy if we do not consider the axion sector.
In the framework of the MSSM, the LSP, which we take to be the lightest neutralino denoted by \(\tilde{\chi}\), can be the dark matter candidate. The masses and mixing of the neutralinos are determined by the parameters \(M_{1},M_{2},\mu\) and \(\tan\beta\), and those of the charginos by \(M_{2},\mu\) and \(\tan\beta\). As a result, the LSP can be classified as bino-like, wino-like, higgsino-like or another well-tempered type, depending on the parameters above. The relic density can be obtained by computing the cross section of the annihilation processes or of the coannihilation processes with the next-to-lightest supersymmetric particle (NLSP). For a bino-like LSP, it turns out that the relic density would be too large to explain the Planck observation. For a wino-like LSP, taking into account the "Sommerfeld enhancement" [47], providing the full amount of dark matter requires \(m_{\tilde{\chi}}\) to be as large as roughly \(2.9\mathrm{TeV}\). In the limit of a pure higgsino, the parameter \(\mu\) should likewise be as large as about \(1.1\mathrm{TeV}\).
In addition, when we consider the new experimental data on the anomalous magnetic moment of the muon \(a_{\mu}:=(g-2)_{\mu}/2\), more constraints should be imposed on the parameter space. The SM prediction of the muon anomaly is \(a_{\mu}^{sm}=116591810(43)\times 10^{-11}(0.37\mathrm{ppm})\)[48; 49; 50], and the latest averaged experimental value is \(a_{\mu}^{exp}=116592061(41)\times 10^{-11}(0.35\mathrm{ppm})\)[30]. Therefore, the deviation between experiment and the SM prediction is \(\Delta a_{\mu}=a_{\mu}^{exp}-a_{\mu}^{sm}=251(59)\times 10^{-11}\), i.e. \(4.2\sigma\). This anomaly is an important hint of new physics (NP). Within the MSSM, neither a wino-like nor a higgsino-like LSP can fulfill the \((g-2)_{\mu}\) constraints, because their masses are too heavy, unless they account for only a small part of the total dark matter relic density. This is consistent with our idea that the dark matter is a mixture of the LSP and the axion. In this study, we consider the higgsino-like LSP as the dark matter candidate. In this case, the parameter \(\mu\) is much smaller than \(M_{1}\) and \(M_{2}\). Such a low \(\mu\) is also favored by naturalness arguments in the literature [38; 40; 51; 52; 53]. For example, \(\mu\sim 100-300\mathrm{GeV}\) (the lower the more natural) is
claimed in Ref.[38], and \(\mu\sim 100-350\)GeV (where \(\mu\gtrsim 100\)GeV is required to accommodate LEP2 limits from chargino pair production searches) is claimed in Ref.[40]. Notably, in these works, the topic of "naturalness" mainly concerns the fine-tuning of parameters in the potential minimization condition
\[\frac{M_{Z}^{2}}{2}=\frac{m_{H_{d}}^{2}-m_{H_{u}}^{2}\tan^{2}\beta}{\tan^{2} \beta-1}-\mu^{2}. \tag{51}\]
Naively, if \(\mu\gg M_{Z}\), the first term on the right side of Eq.(51) must also be large, so that the two terms largely cancel to give \(M_{Z}\). In this sense, one may suspect that models with \(\mu\gg M_{Z}\) are fine-tuned. It is also worth mentioning that larger values of \(\mu\) can be found in many works. In particular, values of \(\mu\) as large as \(10^{5},10^{6},10^{7}\)GeV have been used in the numerical analysis of high-scale SUSY in Ref.[54]. With our assumption that the higgsino-like neutralino is the LSP, we prefer a low value of \(\mu\).
In our numerical analysis, we use GM2Calc-2.1.0[55] to calculate the contribution to \(a_{\mu}\) up to two-loop order, and the relic density of the LSP is evaluated with MicrOMEGAs-5.2.13[56; 57; 58; 59]. Both codes are numerically reliable. To find the parameter space not excluded by experiments such as Xenon1T-2018, we scan the parameter space under the following assumptions:
1. For the scalar leptons, we take common soft SUSY-breaking parameters for all three generations and set the trilinear coupling \(A_{l}\) to zero, so the mixing is significant only for the scalar taus.
2. The mass of the MSSM pseudoscalar particle \(A^{0}\) is above \(1.5\)TeV, and the masses of scalar quarks are larger than \(3\)TeV.
In the MSSM, the one-loop correction to \(a_{\mu}\) mainly comes from the \(\tilde{\chi}^{\pm}-\tilde{\nu}\) and \(\tilde{\chi}-\tilde{\mu}\) loops, hence the scalar muon mass is a sensitive parameter. Considering the MSSM mass relation \(M_{L_{ii}}^{2}=m_{\tilde{\nu}_{i}}^{2}-M_{W}^{2}\cos(2\beta)\)[60; 61], which relates the sneutrino and slepton masses, we scan the following parameter space:
\[\tan\beta\in[30,60],\ \mu\in[100,800]\text{GeV},\] \[M_{1}\in[1.1\mu,20\mu],\ M_{2}\in[1.1\mu,20\mu],\] \[M_{L_{22}}\in[\sqrt{\mu^{2}+M_{W}^{2}},2\text{TeV}],\] \[M_{R_{22}}\in[\sqrt{\mu^{2}+M_{W}^{2}},2\text{TeV}],\]
where \(M_{L}^{2}\) and \(M_{R}^{2}\) are the soft-term mass-squared parameters in the MSSM. The resulting parameter space that satisfies the current experimental limits is shown in Figs.1-2 (a sketch of such a scan is given below). In these figures, all points satisfy the constraint from \(a_{\mu}\) within a \(2\sigma\) deviation.
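In the sketch, the sampling strategy (uniform), the point count, and the absence of the observable evaluation are our assumptions; in practice each point is passed to GM2Calc (for \(a_\mu\)) and MicrOMEGAs (for the relic density and \(\sigma_P^{SI}\)) and kept only if it satisfies the \(a_\mu\) and Xenon1T-2018 bounds.

```python
# Sketch of the random scan over the ranges quoted above.  The observables
# themselves are computed externally with GM2Calc / MicrOMEGAs, so no such
# calls appear here.
import numpy as np

rng = np.random.default_rng(0)
MW = 80.4  # W boson mass in GeV

def sample_point():
    tanb = rng.uniform(30, 60)
    mu   = rng.uniform(100, 800)              # GeV
    M1   = rng.uniform(1.1 * mu, 20 * mu)     # GeV
    M2   = rng.uniform(1.1 * mu, 20 * mu)     # GeV
    mmin = np.sqrt(mu**2 + MW**2)             # lower edge for M_L22, M_R22
    ML22 = rng.uniform(mmin, 2000)            # GeV
    MR22 = rng.uniform(mmin, 2000)            # GeV
    return dict(tanb=tanb, mu=mu, M1=M1, M2=M2, ML22=ML22, MR22=MR22)

# each point would then be fed to GM2Calc (a_mu at two loops) and
# MicrOMEGAs (relic density, sigma_SI) and filtered against the bounds
points = [sample_point() for _ in range(10_000)]
print(points[0])
```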
In the left panel of Fig.1, the constraints on the mass of the higgsino-like LSP \(m_{\tilde{\chi}}\) are shown. Although the Xenon1T-2018 bound is very strict, there is still parameter space that is not excluded. The mass of the higgsino-like LSP, which is approximately equal to the parameter \(\mu\), is restricted to the range \([100,500]\rm GeV\). We also show in this panel the relic density of the LSP for \(\mu=200,300\) and \(400\rm GeV\). As a crude estimate, the relic density of the LSP is limited to no more than \(0.025\).
The restrictions imposed on \(\tan\beta\) when the LSP-proton spin-independent scattering cross section \(\sigma_{P}^{SI}\) lies below the Xenon1T-2018 bound are given in the right panel of Fig.1. As the parameter \(\mu\) increases, the corresponding \(\sigma_{P}^{SI}\) rises concurrently. For smaller \(\mu\), the restriction
Figure 1: The constraints on the mass of the higgsino-like LSP \(m_{\tilde{\chi}}\) (left panel) and the parameter \(\tan\beta\) (right panel) from the Xenon1T-2018 direct detection experiment and the latest \(a_{\mu}\) measurements. The gray points are excluded by the Xenon1T-2018 bound [62], plotted as the black solid line in the left panel; the orange, green, cyan and blue points denote \(\mu\in[100,200]\), \([200,300]\), \([300,400]\) and \([400,500]\rm GeV\) respectively. In addition, the relic density of \(\tilde{\chi}\) for \(\mu=200,300\) and \(400\rm GeV\) is shown with brown dotted, dashed and dot-dashed lines respectively. All of these are small compared with the total dark matter abundance of \(0.12\), consistent with our purpose that the LSP is a small part of the total dark matter.
placed on \(\tan\beta\) is lax. When the parameter \(\mu\) is large, however, e.g., \(\mu=450\)GeV, \(\tan\beta\) must remain within a narrow interval of large values to prevent the parameter space from being excluded. This behavior is easy to understand, since the most important contribution to \(\sigma_{P}^{SI}\) comes from the exchange of the SM-like Higgs in the t-channel, whose coupling can be expressed [63] as
\[C_{h\tilde{\chi}\tilde{\chi}}\propto(1+\sin 2\beta)(\tan^{2}\theta_{W}\frac{M_{W} }{M_{1}-\mu}+\frac{M_{W}}{M_{2}-\mu}). \tag{52}\]
The first term in the second bracket of the above formula can be omitted numerically, so a small \(\mu\) and a large \(\tan\beta\) result in a small \(\sigma_{P}^{SI}\) that is not excluded by the direct detection experiment.
We also plot the viable parameter space of \(M_{1}\) versus \(M_{2}\) and of \(M_{L_{22}}\) versus \(M_{R_{22}}\) in the left and right panels of Fig.2 respectively. The parameters \(M_{1}\) and \(M_{2}\) are decoupled, which means that the LSP is a pure higgsino; in this case we can successfully escape the constraint from the Xenon1T-2018 direct detection experiment (\(C_{h\tilde{\chi}\tilde{\chi}}\) originates from gaugino Yukawa couplings of the form \(h^{\dagger}\tilde{h}\tilde{b}\) and \(h^{\dagger}\tilde{h}\tilde{w}\)). On the other hand, explaining the latest measurements of \(a_{\mu}\) requires relatively light masses for the loop particles that appear in the involved Feynman diagrams. For these reasons, the resulting viable parameter space in Fig.2 is as expected.
Figure 2: The constraints on parameters \(M_{1,2}\) (left panel) and \(M_{L,R_{22}}\) (right panel) from Xenon1T-2018 direct detection experiment and the latest \(a_{\mu}\) measurements. The colors of the points have the same meaning as in Fig.1.
Finally, three sample points are shown in Table 8. Combining the constraints shown in Figs.1-2 and considering reasonable slepton/sneutrino masses, we simply take \(\mu=250\)GeV for safety in the following discussion; the corresponding relic density is \(\Omega_{\tilde{\chi}}h^{2}\approx 0.007\).
### Conventional misalignment mechanism
Following the dilute instanton gas approximation [64], the axion potential can be expressed as
\[V_{A}(T)=m_{A}^{2}(T)f_{A}^{2}\left(1-\cos\frac{A}{f_{A}}\right),m_{A}(T)\mid_ {T>\Lambda_{QCD}}=\zeta m_{A}(\frac{\Lambda_{QCD}}{T})^{4}. \tag{53}\]
Here \(m_{A}(T)\) is the temperature-dependent mass of axion, and \(m_{A}\) is the zero-temperature axion mass given by Eq.(12). For temperature \(T<\Lambda_{QCD}\), the temperature-dependent axion mass \(m_{A}(T)\) is equal to the zero-temperature axion mass \(m_{A}\). In this study, we assume that the QCD scale \(\Lambda_{QCD}\) is approximately 160MeV and the parameter \(\zeta\) is equal to 0.026 [65]. Such a potential can result in a non-thermal relic density for axion as the cold dark matter.
The equation of motion of the axion in an expanding Universe is
\[\ddot{A}+3H(T)\dot{A}+V_{A}^{\prime}(T)=0, \tag{54}\]
where the Hubble rate \(H(T)\) can be expressed as \(H(T)=(\frac{\pi^{2}}{90}g_{*}(T)\frac{T^{4}}{M_{P}^{2}})^{\frac{1}{2}}\) in a radiation-dominated Universe, and \(V_{A}^{\prime}(T)\) is the first derivative of the potential with respect to the axion field \(A\). Simplifying this equation of motion by using \(\theta(x)\equiv\frac{A(x)}{f_{A}}\) and \(V_{A}(T)\approx\frac{1}{2}m_{A}^{2}(T)A^{2}\) (which can be obtained by expanding the potential \(V_{A}(T)\) around \(\frac{A}{f_{A}}=0\)), we get
\[\ddot{\theta}+3H(T)\dot{\theta}+m_{A}^{2}(T)\theta=0. \tag{55}\]
This is a damped-oscillator equation, and the Hubble friction damps the evolution of the axion field as long as the Hubble rate is significantly larger than the axion mass. We can set \(\dot{\theta}_{i}=0\), since the Hubble friction freezes the motion of the axion. However, the initial value \(\theta_{i}\) need not sit at the minimum in the early Universe. As the temperature of the Universe decreases and the axion mass switches on, the axion field starts to oscillate at a temperature \(T_{osc}\) at which \(m_{A}(T_{osc})=3H(T_{osc})\). With \(g_{*}(\Lambda_{QCD})\) about 26, we can determine that \(T_{osc}>\Lambda_{QCD}\), since \(m_{A}(\Lambda_{QCD})>3H(\Lambda_{QCD})\).
When \(T<\Lambda_{QCD}\), the energy density of axion can be computed by \(\rho_{A}(T)=m_{A}n_{A}(T)\), where \(n_{A}(T)\) is the axion number density at temperature \(T\). This axion number density is related to a conserved yield
\[\frac{n_{A}(T)}{s(T)}=\frac{n_{A}(T_{osc})}{s(T_{osc})}, \tag{56}\]
where \(s(T)=\frac{2\pi^{2}}{45}g_{s}(T)T^{3}\) is the entropy density at temperature \(T\), and \(n_{A}(T_{osc})=\frac{V_{A}(T_{osc})}{m_{A}(T_{osc})}=\frac{1}{2}m_{A}(T_{osc}) f_{A}^{2}\langle\theta_{i}^{2}\rangle\) can be realized in the post-inflationary scenario[13]. The parameter \(\langle\theta_{i}^{2}\rangle\) is given by
\[\langle\theta_{i}^{2}\rangle=\frac{1}{2\pi}\int_{-\pi}^{\pi}d \theta_{i}f(\theta_{i})\theta_{i}^{2}, \tag{57}\]
where the angle brackets represent the average of the initial condition over \([-\pi,\pi)\). For \(f(\theta_{i})=1\), \(\langle\theta_{i}^{2}\rangle=\frac{\pi^{2}}{3}\simeq 1.81^{2}\) is easily obtained. In the case \(f(\theta_{i})=(\log[\frac{e}{(1-\frac{\theta_{i}^{2}}{\pi^{2}})}])^{\frac{7}{6}}\)[66], which accounts for the anharmonic corrections, this parameter becomes \(\langle\theta_{i}^{2}\rangle\simeq 2.96^{2}\). The axion mass at the temperature \(T_{osc}\) is \(m_{A}(T_{osc})\simeq\frac{\sqrt{g_{s}(T_{osc})}\pi T_{osc}^{2}}{\sqrt{10}M_{P}}\), and the oscillation temperature \(T_{osc}\) can be computed using the condition \(m_{A}(T_{osc})=3H(T_{osc})\):
\[T_{osc}\simeq(\frac{\sqrt{10}\zeta m_{A}M_{P}\Lambda_{QCD}^{4}}{ \pi g_{s}(T_{osc})^{\frac{1}{2}}})^{\frac{1}{6}}, \tag{58}\]
where \(g_{*}(T_{osc})=g_{s}(T_{osc})\simeq 61.75\) has been used as a reference. Then, the axion relic density in terms of \(f_{A}\) can be expressed as
\[\Omega_{A}h^{2}=\frac{\rho_{A}(T_{0})}{\rho_{tot}(T_{0})}h^{2} \simeq 1.6679\times 10^{-15}(\frac{f_{A}}{\rm GeV})^{\frac{7}{6}}\langle \theta_{i}^{2}\rangle. \tag{59}\]
For the axion to be responsible for the rest of the total dark matter relic density, i.e. \(\Omega_{A}h^{2}=0.113\), we need \(f_{A}\simeq 1.11(2.59)\times 10^{11}\)GeV for \(\langle\theta_{i}^{2}\rangle\simeq 2.96^{2}(1.81^{2})\).
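As a quick numeric cross-check, the following minimal sketch inverts Eq.(59) for \(f_A\) at \(\Omega_A h^2=0.113\), using only constants quoted in the text.

```python
# Invert Eq. (59), Omega_A h^2 ~= 1.6679e-15 (f_A/GeV)^(7/6) <theta_i^2>,
# for f_A at the target relic density 0.113.
def f_A_from_relic(omega_h2, theta2):
    return (omega_h2 / (1.6679e-15 * theta2)) ** (6.0 / 7.0)

for theta2 in (2.96**2, 1.81**2):
    print(f"<theta_i^2> = {theta2:.2f} -> f_A ~ {f_A_from_relic(0.113, theta2):.3g} GeV")
# -> ~1.11e11 GeV and ~2.59e11 GeV, as quoted in the text
```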
On the other hand, in the pre-inflationary scenario[13] the initial misalignment angle \(\theta_{i}\) takes a homogeneous value, so the expression for the axion relic density changes to
\[\Omega_{A}h^{2}=\frac{\rho_{A}(T_{0})}{\rho_{tot}(T_{0})}h^{2} \simeq 1.6679\times 10^{-15}(\frac{f_{A}}{\text{GeV}})^{\frac{7}{6}}f( \theta_{i})\theta_{i}^{2}. \tag{60}\]
In addition, there would be no Domain Wall problem even if the Domain Wall number \(N_{DW}\neq 1\), since topological defects are inflated away and do not contribute to the axion energy density.
### Kinetic misalignment mechanism
In the case \(\dot{\theta}_{i}=0\), we have given a brief review of the CMM, and this mechanism provides a good explanation for axions being the dark matter. On the other hand, nonzero \(\dot{\theta}_{i}\) is also possible and well motivated. In this case, the mechanism for producing axion relic density is called the Kinetic Misalignment Mechanism (KMM), and it is also very appealing in axion studies.
The key point of the KMM is that, when the axion kinetic energy \(\frac{1}{2}\dot{A}^{2}\) is larger than the potential energy barrier \(2m_{A}^{2}(T)f_{A}^{2}\) at the temperature \(T_{osc}\), the axion oscillation is delayed until \(\frac{1}{2}\dot{A}^{2}=2m_{A}^{2}(T)f_{A}^{2}\) at another temperature \(T_{osc}^{K}\), which is smaller than \(T_{osc}\). This may then yield an appropriate relic density of axion dark matter for a small \(f_{A}\), e.g., \(f_{A}=10^{10}\)GeV, without assuming that \(\theta_{i}\) is accidentally close to the hilltop of the potential. The critical condition for the KMM to be at play is \(\frac{1}{2}\dot{A}^{2}|_{T=T_{osc}}=2m_{A}^{2}(T)f_{A}^{2}|_{T=T_{osc}}\).
Considering the Noether charge associated with the PQ symmetry
\[n_{PQ}=v_{A}\dot{A}=2Nf_{A}\dot{A}=2Nf_{A}^{2}\dot{\theta}, \tag{61}\]
since this charge density is a quantity that redshifts as \(R^{-3}\) in the evolution of the Universe, it is convenient to define a redshift-invariant yield \(Y_{PQ}\equiv\frac{n_{PQ}}{s}\). The condition that the KMM is effective then can be rewritten as \(Y_{PQ}>Y_{PQ}^{\text{crit}}\), where \(Y_{PQ}^{\text{crit}}\) is given by
\[Y_{PQ}^{crit} \equiv 2N\frac{2m_{A}\left(T_{osc}\right)f_{A}^{2}}{s\left(T_{osc} \right)}\simeq 0.20\times 2N\left(\frac{f_{A}}{10^{9}\text{GeV}}\right)^{ \frac{13}{6}}\left(\frac{150\text{MeV}}{\Lambda_{QCD}}\right)^{\frac{2}{3}} \left(\frac{26}{g_{s}\left(T_{osc}\right)}\right)^{\frac{5}{12}}. \tag{62}\]
The energy density of axion at present is
\[\rho_{A}\mid_{T=T_{0}}=m_{A}n_{A}(T_{0})=m_{A}\frac{n_{A}(T_{osc}^{K})}{s(T_{osc} ^{K})}s(T_{0}), \tag{63}\]
where \(n_{A}\left(T_{osc}^{K}\right)\simeq\frac{2n_{PQ}}{2N}|_{T=T_{osc}^{K}}\). Thus, the relic density of axion can be obtained:
\[\Omega_{A}h^{2}\simeq 0.12(\frac{10^{9}\text{GeV}}{2Nf_{A}})(\frac{Y_{PQ}}{40}). \tag{64}\]
In a model with \(N=\frac{1}{2}\), this equation means that \(f_{A}\lesssim 1.24\times 10^{11}\text{GeV}\) should be satisfied if one assumes \(\Omega_{A}h^{2}\simeq 0.113\), and this is easy to understand from Fig.3.
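This bound can be cross-checked numerically by setting \(Y_{PQ}=Y_{PQ}^{crit}\) from Eq.(62) in Eq.(64) and solving for \(f_A\). The sketch below does this with \(\Lambda_{QCD}=160\) MeV and \(g_s(T_{osc})\simeq 61.75\) as quoted earlier (our assumption for this check); it reproduces the quoted number up to rounding of the prefactors.

```python
# Set Y_PQ = Y_PQ^crit (Eq. 62) in Eq. (64) and solve for f_A at
# Omega_A h^2 = 0.113; N = 1/2, Lambda_QCD = 160 MeV, g_s(T_osc) = 61.75.
N = 0.5
Ycrit_coeff = 0.20 * 2 * N * (150.0 / 160.0)**(2 / 3) * (26.0 / 61.75)**(5 / 12)
# Omega_A h^2 = 0.12 * (1e9 / (2N f_A)) * (Y_PQ / 40),
# with Y_PQ^crit = Ycrit_coeff * (f_A / 1e9)^(13/6)
f_A_max = 1e9 * (0.113 * 40 * 2 * N / (0.12 * Ycrit_coeff))**(6.0 / 7.0)
print(f"f_A_max ~ {f_A_max:.3g} GeV")   # ~1.26e11 GeV (quoted: 1.24e11 GeV)
```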
### The velocity of axion from PQ violation
In our model, the scalar potential containing only \(X\), \(Y\) and \(Z\) is given by
\[V_{PQ}^{X\&Y\&Z}\simeq m_{X}^{2}|X|^{2}+m_{Y}^{2}|Y|^{2}+m_{Z}^{2}|Z|^{2}-(\frac{a_{1}}{M_ {P}}X^{\alpha_{\text{w}}}Y^{4-\alpha_{\text{w}}}+\frac{a_{2}}{M_{P}}X^{\beta_ {\text{w}}}Z^{4-\beta_{\text{w}}}+\text{ h.c. })\] \[+|\frac{\lambda_{1}}{M_{P}}X^{\alpha_{\text{w}}-1}Y^{4-\alpha_{ \text{w}}}+\frac{\lambda_{2}}{M_{P}}X^{\beta_{\text{w}}-1}Z^{4-\beta_{\text{ w}}}|^{2}+|\frac{\lambda_{1}}{M_{P}}X^{\alpha_{\text{w}}}Y^{3-\alpha_{\text{w}}}|^{2}\] \[+|\frac{\lambda_{2}}{M_{P}}X^{\beta_{\text{w}}}Z^{3-\beta_{\text{ w}}}|^{2}. \tag{65}\]
Figure 3: The relic density of axion dark matter for Kinetic Misalignment Mechanism in the \(Y_{PQ}\)-\(f_{A}\) plane. The region colored light cyan (light orange) represents the overproduction (underproduction) of axion, and the KMM is ineffective in the light grey region.
If this potential is sufficiently flat, the radial components, which we also call saxions (here they do not represent the mass eigenstates \(S_{i}\)), can be initially displaced from their minima and driven to large field values during inflation. In this case, we can simplify \(V_{PQ}^{X\&Y\&Z}\) as
\[V_{PQ}^{X\&Y\&Z}\simeq |\frac{\lambda_{1}}{M_{P}}X^{\alpha_{\rm w}-1}Y^{4-\alpha_{\rm w}} +\frac{\lambda_{2}}{M_{P}}X^{\beta_{\rm w}-1}Z^{4-\beta_{\rm w}}|^{2}+|\frac{ \lambda_{1}}{M_{P}}X^{\alpha_{\rm w}}Y^{3-\alpha_{\rm w}}|^{2} \tag{66}\] \[+|\frac{\lambda_{2}}{M_{P}}X^{\beta_{\rm w}}Z^{3-\beta_{\rm w}}|^ {2}.\]
We further assume that all saxions initially take the same large field value \(\mathcal{V}_{i}\) and that the dimensionless parameters approximately satisfy \(\lambda_{1}=\lambda_{2}=\lambda\); this potential then provides an effective saxion mass \(m_{\mathcal{V}_{i}}\simeq\frac{3\sqrt{10}}{2}\frac{\lambda}{M_{P}}\mathcal{V}_{i}^{2}\). The initial value of the saxion \(\mathcal{V}_{i}\) can be determined via \(\langle\mathcal{V}_{i}^{2}\rangle=\frac{3}{8\pi^{2}}\frac{H_{I}^{4}}{m_{\mathcal{V}_{i}}^{2}}\)[67; 68; 69; 70; 71; 72; 73], where \(H_{I}\lesssim 1.5\times 10^{13}\)GeV [35] is the maximum Hubble scale during inflation. Without difficulty, \(\mathcal{V}_{i}=\left(\frac{H_{I}^{4}M_{P}^{2}}{60\pi^{2}\lambda^{2}}\right)^{\frac{1}{6}}\) can be obtained. As the temperature of the Universe drops, the saxion becomes free to oscillate when \(H\left(T_{i}\right)=\frac{m_{\mathcal{V}_{i}}}{3}\), where the temperature \(T_{i}\), assuming that the saxion oscillation begins after reheating, is given by
\[T_{i}=\left(\frac{90}{g_{*}\left(T_{i}\right)\pi^{2}}\right)^{ \frac{1}{4}}\sqrt{H\left(T_{i}\right)M_{P}}. \tag{67}\]
On the other hand, the PQ-violating potential \(V_{PQviolate}^{X\&Y\&Z}\) becomes significant when the radial components take large field values. As we will see, \(V_{PQviolate}^{X\&Y\&Z}\) can induce the rotation of \(X\), \(Y\) and \(Z\) and, in turn, a nonzero axion velocity. To explain this conveniently, we rewrite \(X\), \(Y\) and \(Z\) as
\[X=\frac{1}{\sqrt{2}}\mathcal{V}_{X}{\rm exp}(i\theta_{X}),Y= \frac{1}{\sqrt{2}}\mathcal{V}_{Y}{\rm exp}(i\theta_{Y}),Z=\frac{1}{\sqrt{2}} \mathcal{V}_{Z}{\rm exp}(i\theta_{Z}). \tag{68}\]
Using these notations, the angular motion and the PQ charge density can be connected by
\[n_{PQ}=2Nf_{E}^{2}\dot{\theta}=Q_{X}\mathcal{V}_{X}^{2}\dot{ \theta}_{X}+Q_{Y}\mathcal{V}_{Y}^{2}\dot{\theta}_{Y}+Q_{Z}\mathcal{V}_{Z}^{2} \dot{\theta}_{Z}, \tag{69}\]
where \(f_{E}\equiv\frac{v_{E}}{2N}\) is the axion effective decay constant and \(v_{E}^{2}\simeq Q_{X}^{2}\mathcal{V}_{X}^{2}+Q_{Y}^{2}\mathcal{V}_{Y}^{2}+Q_{ Z}^{2}\mathcal{V}_{Z}^{2}\). Considering the equations of motion of \(X,Y\) and \(Z\), here we only write down that of \(X\):
\[\ddot{X}+3H\dot{X}+\frac{\partial V_{PQ}^{X\&Y\&Z}}{\partial X^{*}}+\frac{ \partial V_{PQ violate}^{X\&Y\&Z}}{\partial X^{*}}=0. \tag{70}\]
By multiplying this by \(X^{*}\) and subtracting the complex conjugation, we obtain
\[\ddot{\theta}_{X}+2\frac{\dot{\mathcal{V}}_{X}}{\mathcal{V}_{X}} \dot{\theta}_{X}+3H\dot{\theta}_{X}=\frac{i}{\mathcal{V}_{X}^{2}}\left(X^{*} \frac{\partial V_{PQ violate}^{X\&Y\&Z}}{\partial X^{*}}-X\frac{\partial V_{ PQ violate}^{X\&Y\&Z}}{\partial X}\right). \tag{71}\]
In our model, the PQ-violating part of the potential reads
\[V_{PQviolate}^{X\&Y\&Z}=-\frac{a_{\kappa}}{M_{P}^{n}}X^{\alpha_{1}}Y^{\alpha_{2}}Z ^{\alpha_{3}}+\frac{C}{M_{P}^{n^{\prime}}}X^{\beta_{1}}X^{*\beta_{2}}Y^{\beta_ {3}}Y^{*\beta_{4}}Z^{\beta_{5}}Z^{*\beta_{6}}+\mbox{ h.c.}, \tag{72}\]
with \(\sum\alpha_{i}-n=3\) and \(\sum\beta_{i}-n^{\prime}=4\). Using Eqs.(71) and (72), we obtain
\[{\cal V}_{X}^{2}\dot{\theta}_{X} = \frac{i}{m_{{\cal V}_{i}}}(X^{*}\frac{\partial V_{PQviolate}^{X \&Y\&Z}}{\partial X^{*}}-X\frac{\partial V_{PQviolate}^{X\&Y\&Z}}{\partial X}) \tag{73}\] \[= \frac{i}{m_{{\cal V}_{i}}}[(\alpha_{1})(\frac{a_{\kappa}}{M_{P}^ {n}}X^{\alpha_{1}}Y^{\alpha_{2}}Z^{\alpha_{3}}-\mbox{ h.c.})\] \[+(\beta_{2}-\beta_{1})(\frac{C}{M_{P}^{n^{\prime}}}X^{\beta_{1}} X^{*\beta_{2}}Y^{\beta_{3}}Y^{*\beta_{4}}Z^{\beta_{5}}Z^{*\beta_{6}}-\mbox{ h.c.})],\]
where we have used the initial condition and omitted the subscript "\(i\)". According to Eq.(69), we can express the PQ charge density right after the beginning of oscillations \(n_{PQ_{i}}\) as
\[n_{PQ_{i}}= \frac{i}{m_{{\cal V}_{i}}}\{(\alpha_{1}Q_{X}+\alpha_{2}Q_{Y}+ \alpha_{3}Q_{Z})(\frac{a_{\kappa}}{M_{P}^{n}}X^{\alpha_{1}}Y^{\alpha_{2}}Z^{ \alpha_{3}}-\mbox{ h.c.}) \tag{74}\] \[+[(\beta_{2}-\beta_{1})Q_{X}+(\beta_{4}-\beta_{3})Q_{Y}+(\beta_{ 6}-\beta_{5})Q_{Z}](\frac{C}{M_{P}^{n^{\prime}}}X^{\beta_{1}}X^{*\beta_{2}}Y^{ \beta_{3}}Y^{*\beta_{4}}Z^{\beta_{5}}Z^{*\beta_{6}}\] \[-\mbox{ h.c.})\}\] \[= \frac{-2}{m_{{\cal V}_{i}}}\{(\alpha_{1}Q_{X}+\alpha_{2}Q_{Y}+ \alpha_{3}Q_{Z})|\frac{a_{\kappa}}{M_{P}^{n}}X^{\alpha_{1}}Y^{\alpha_{2}}Z^{ \alpha_{3}}|\sin(\delta_{a_{\kappa}}+\alpha_{1}\theta_{X}+\alpha_{2}\theta_{Y} +\alpha_{3}\theta_{Z})\] \[+[(\beta_{2}-\beta_{1})Q_{X}+(\beta_{4}-\beta_{3})Q_{Y}+(\beta_{ 6}-\beta_{5})Q_{Z}]|\frac{C}{M_{P}^{n^{\prime}}}X^{\beta_{1}+\beta_{2}}Y^{ \beta_{3}+\beta_{4}}Z^{\beta_{5}+\beta_{6}}|\] \[\times\sin(\delta_{C}+(\beta_{1}-\beta_{2})\theta_{X}+(\beta_{3}- \beta_{4})\theta_{Y}+(\beta_{5}-\beta_{6})\theta_{Z})\},\]
where \(a_{\kappa}=|a_{\kappa}|e^{i\delta_{a_{\kappa}}}\) and \(C=|C|e^{i\delta_{C}}\).
Following Refs.[34; 35], we introduce a quantity \(\epsilon\equiv\frac{n_{PQ_{i}}}{n_{{\cal V}_{i}}}\), where \(n_{{\cal V}_{i}}=\frac{V_{PQ_{i}}^{X\&Y\&Z}}{m_{{\cal V}_{i}}}\). The PQ charge density \(n_{PQ_{i}}\), therefore, can be expressed as \(n_{PQ_{i}}=\epsilon\frac{V_{PQ_{i}}^{X\&Y\&Z}}{m_{{\cal V}_{i}}}=\epsilon\frac {1}{2\sqrt{10}}\frac{\lambda}{M_{P}}{\cal V}_{i}^{4}\), and the yield \(Y_{PQ}\), for saxion oscillation begin after reheating, can also be deduced here
\[Y_{PQ}=\frac{n_{PQ_{i}}}{s\left(T_{i}\right)}\simeq\epsilon\frac{3^{\frac{1}{ 5}}}{20\times 5^{\frac{1}{6}}(2\pi)^{\frac{5}{6}}}\left(\frac{H_{I}^{8}}{\left(g_ {*}\left(T_{i}\right)\right)^{3}\lambda^{10}M_{P}^{8}}\right)^{\frac{1}{12}}. \tag{75}\]
The axion relic density is then given by
\[\Omega_{A}h^{2}\simeq 3.6\times 10^{-5}\epsilon\frac{1}{2N}\left(\frac{10^{9} \mbox{GeV}}{f_{A}}\right)\left(\frac{H_{I}^{8}}{\left(g_{*}\left(T_{i}\right) \right)^{3}\lambda^{10}M_{P}^{8}}\right)^{\frac{1}{12}}. \tag{76}\]
When \(T_{i}\gtrsim T_{R}\), the saxion oscillation begins before reheating. In this case, the yield \(Y_{PQ}\) is
\[Y_{PQ}=Y_{PQ}\left(T_{R}\right)=\frac{n_{PQ}\left(T_{R}\right)}{s\left(T_{R} \right)}=\frac{n_{PQ_{i}}}{s\left(T_{R}\right)}\left(\frac{R_{i}}{R_{R}} \right)^{3}=\frac{n_{PQ_{i}}}{s\left(T_{R}\right)}\left(\frac{2\Gamma_{\phi}}{ m_{{\cal V}_{i}}}\right)^{2}\approx\epsilon\frac{1}{20\sqrt{10}}\frac{T_{R}}{ \lambda M_{P}}, \tag{77}\]
where \(\Gamma_{\phi}\approx\frac{\sqrt{g_{*}\left(T_{R}\right)}\pi T_{R}^{2}}{2\sqrt{10}M_{P}}\)[35] and \(\phi\) is the inflaton. The corresponding axion relic density is given by
\[\Omega_{A}h^{2}\simeq 4.7\times 10^{-5}\epsilon\frac{1}{2N}\left(\frac{10^{9} \text{GeV}}{f_{A}}\right)\frac{T_{R}}{\lambda M_{P}}. \tag{78}\]
Note that, when \(T_{i}=T_{R}\), the expressions in Eqs.(76) and (78) are identical. In this study we just set \(\epsilon=1\) in the following for simplicity.
There are also some constraints on the parameter \(\lambda\) in the KMM case. The first comes from \(\mathcal{V}_{i}\), which determines the energy density stored in the radial component. The initial energy density \(\frac{3}{4}\frac{\lambda^{2}}{M_{P}^{2}}\mathcal{V}_{i}^{6}\) should be smaller than the total energy density \(3H^{2}\left(T_{i}\right)M_{P}^{2}\), because a second stage of inflation driven by the saxion is not expected. This condition leads to the constraint \(\mathcal{V}_{i}<\sqrt{10}M_{P}\) or, equivalently, \(\lambda>\frac{H_{I}^{2}}{100\sqrt{6}\pi M_{P}^{2}}\).
Following Ref.[35], the axion field fluctuations can be related to the power spectrum of the cold dark matter isocurvature perturbation
\[\mathcal{P}_{S}\left(k_{*}\right) = \left(\frac{1}{Y_{PQ}}\frac{\partial Y_{PQ}}{\partial\mathcal{V }_{i}}\right)^{2}\frac{H_{I}^{2}}{4\pi^{2}} \tag{79}\] \[= \left(\frac{1}{\mathcal{V}_{i}}\right)^{2}\frac{H_{I}^{2}}{4\pi^ {2}},\]
where we have used Eq.(75) for the oscillation begin after reheating case. From the cosmic microwave background radiation (CMB) constraints we have
\[\beta_{iso}\equiv\frac{\mathcal{P}_{S}\left(k_{*}\right)}{\mathcal{P}_{\zeta} \left(k_{*}\right)+\mathcal{P}_{S}\left(k_{*}\right)}<0.038(95\%\text{CL}), \tag{80}\]
where \(\mathcal{P}_{\zeta}\left(k_{*}\right)\simeq 2.2\times 10^{-9}\). By combining Eq.(79) with Eq.(80), we find the following bound for \(\lambda\)
\[\lambda\lesssim 8.27\times 10^{-15}\frac{M_{P}}{H_{I}}. \tag{81}\]
If we fix the Hubble parameter at \(H_{I}=1.5\times 10^{13}\text{GeV}\), then \(\lambda\lesssim 1.35\times 10^{-9}\) is needed.
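The evaluation is immediate; the minimal sketch below assumes the reduced Planck mass \(M_P\simeq 2.435\times 10^{18}\) GeV, since the text does not state the convention explicitly.

```python
# Evaluate the isocurvature bound, Eq. (81): lambda <~ 8.27e-15 M_P / H_I.
M_P = 2.435e18  # reduced Planck mass in GeV (assumed convention)
H_I = 1.5e13    # maximum inflationary Hubble scale in GeV
print(f"lambda_max ~ {8.27e-15 * M_P / H_I:.3g}")   # ~1.34e-9
```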
## IV Numerical analysis
In this section, we numerically examine the possibility that the axion constitutes the rest of the dark matter in the KMM and CMM cases, and discuss the cosmology of saxions and axinos as
well as the constraints from the effective coupling constant \(g_{A\gamma\gamma}\) on the axion decay constant \(f_{A}\).
Considering the current limits on \(f_{A}\), we find the red-giant bound [74; 75] on the axion-electron coupling \(g_{Aee}<1.3\times 10^{-13}\) sets the most stringent astrophysical constraint which gives [36]
\[f_{A}>\frac{\sin^{2}\beta}{|N|}(3.9\times 10^{9}\text{GeV}). \tag{82}\]
Therefore, the lower bound on the axion decay constant \(f_{A}\) for large \(\tan\beta\) is \(f_{A}>\frac{7.8\times 10^{9}}{2N}\)GeV. The upper bounds on \(f_{A}\) in the literature usually take the value at which the initial misalignment angle is 1. In this work, however, we restrict our attention to \(f_{A}\leq 10^{12}\)GeV, since a larger \(f_{A}\) requires a larger \(p\) for a high-quality axion model.
### Numerical analysis for KMM
In our model, the scalar potential is given in Eq.(65). For the KMM, we assume for simplicity that the parameters in the potential fulfill \(m_{X}=m_{Y}=m_{Z}=m_{\text{xyz}}\), \(\lambda_{1}=\lambda_{2}=\lambda\) and \(a_{1}=a_{2}=a_{\lambda}\). These three parameters then give the VEVs of \(X,Y\) and \(Z\), and hence the axion decay constant \(f_{A}=\frac{v_{A}}{2N}\). We discuss the possibility that the axion contributes a relic density \(\Omega_{A}h^{2}\simeq 0.113\).
In the case where the saxion oscillation begins after reheating, the axion relic density \(\Omega_{A}h^{2}\) is independent of \(T_{R}\) and given in Eq.(76). In model \(\text{E}_{\text{I}}\), the axion decay constant \(f_{A}\) can be expressed in terms of the parameter \(\lambda\) as
\[f_{A}\approx 27.6\times\lambda^{-\frac{5}{6}}\text{ GeV}, \tag{83}\]
where we have used the parameter \(H_{I}=1.5\times 10^{13}\) GeV and the axion relic density \(\Omega_{A}h^{2}\simeq 0.113\). With \(f_{A}\) unchanged, a smaller \(\lambda\) is required for a smaller \(H_{I}\). According to Eq.(83), for \(f_{A}\) in the interval we consider, the parameter \(\lambda\) needs to be of order \(10^{-13}\sim 10^{-10}\), and the corresponding \(T_{i}\), which can be estimated using Eq.(67), is of order \(10^{12}\) GeV. Although such a small \(\lambda\) is compatible with the CMB constraint given above, both of the parameters \(m_{\text{xyz}}\) and \(a_{\lambda}\) are then required to be unacceptably small. To clarify this, we temporarily assume that the VEVs satisfy \(\langle X\rangle=\langle Y\rangle=\langle Z\rangle=S\), so the scalar potential can be rewritten
as
\[V = 3m_{\rm xyz}^{2}S^{2}-\frac{4a_{\lambda}}{M_{P}}S^{4}+30\left(\frac{ \lambda}{M_{P}}\right)^{2}S^{6}. \tag{84}\]
There would be a local minimum when \(\left|\frac{a_{\lambda}}{\lambda}\right|^{2}>\frac{135}{8}m_{\rm xyz}^{2}\), and \(S\) can be solved as
\[S^{2} = \frac{8a_{\lambda}+8\sqrt{a_{\lambda}^{2}-\frac{135}{8}\lambda^{2}m _{\rm xyz}^{2}}}{180\lambda^{2}}M_{P}. \tag{85}\]
For \(\lambda\) of order \(10^{-11}\) and \(f_{A}\sim S\sim 10^{11}\) GeV, the parameter \(a_{\lambda}\) is required to be of order \(10^{-17}\) GeV, and \(m_{\rm xyz}\) is also significantly small (about \(10^{-7}\) GeV). Although these magnitude estimates are not precise, they are adequate for a qualitative analysis. It then turns out that the masses of the saxions and axinos are also too small for the higgsino-like neutralino to remain the dark matter candidate. In the base models \(\rm B_{I}\), \(\rm B_{II}\) and \(\rm B_{III}\), the DW problem occurs if \(f_{A}<\frac{H_{I}}{2\pi}\). Besides, when the pre-inflationary scenario conditions are satisfied, one can easily check that an unacceptably small \(\lambda\) is again required.
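A minimal sketch of these order-of-magnitude estimates is given below, assuming the reduced Planck mass \(M_P\simeq 2.435\times 10^{18}\) GeV and saturating the local-minimum condition \(|a_{\lambda}/\lambda|^{2}>\frac{135}{8}m_{\rm xyz}^{2}\) stated below Eq.(84).

```python
import math
# Order-of-magnitude check of the estimates above, using Eq. (85) in the
# regime where a_lambda dominates the square root, with lambda = 1e-11 and
# S ~ f_A ~ 1e11 GeV as in the text.
M_P = 2.435e18                      # reduced Planck mass, GeV (assumed convention)
lam, S = 1.0e-11, 1.0e11            # dimensionless coupling; VEV in GeV

a_lam = 180 * lam**2 * S**2 / (16 * M_P)          # invert Eq. (85), a_lambda dominant
m_xyz = a_lam / (lam * math.sqrt(135.0 / 8.0))    # largest m_xyz still allowing a minimum

print(f"a_lambda ~ {a_lam:.1e} GeV, m_xyz ~ {m_xyz:.1e} GeV")
# -> a_lambda ~ 4.6e-18 GeV, m_xyz ~ 1.1e-07 GeV, matching the quoted orders
```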
For the onset of oscillation before reheating, with \(T_{R}\) lower than \(T_{i}\), the required \(\lambda\) would be even smaller than in the oscillation-after-reheating case, see Eq.(78). In short, the parameter space in which the KMM is at play is unfortunately not self-consistent with our previous assumption that \(\Omega_{\tilde{\chi}}h^{2}\simeq 0.007\), since this type of model cannot accommodate such a small \(\lambda\), or equivalently such a flat scalar potential.
### Numerical analysis for CMM
Given the failure of the KMM in our models, we investigate in the following the feasibility of the axion as the rest of the dark matter via the CMM. We primarily concentrate on the cosmology of saxions and axinos, and there are at least three concerns to consider: 1. Do the decays of saxions and axinos affect the relic density of the LSP? 2. Is there a cosmological era dominated by saxions or axinos? 3. Do the relativistic axions from saxion decays or other channels satisfy the constraints from dark radiation?
In order to discuss these concerns, we firstly study the decays of saxions and axinos. Thanks to the superpotential shown in Eq.(48), the saxions and axinos with heavy mass can decay to the SM or MSSM particles. The partial widths of saxions decaying to higgses or
gauge bosons are [76, 77]
\[\Gamma\left(S_{i}\to hh\right)\approx \Gamma\left(S_{i}\to ZZ\right)\approx\frac{1}{2}\Gamma\left(S_{i} \to W^{+}W^{-}\right)\] \[\approx \frac{\zeta_{S_{i}}^{2}}{16\pi}\left(\frac{\mu^{2}}{v_{A}/\sqrt{ 2}}\right)^{2}\left(1-\frac{m_{A^{0}}^{2}\cos^{2}\beta}{\mu^{2}}\right)^{2} \frac{1}{m_{S_{i}}},\] \[\Gamma\left(S_{i}\to hH\right)\approx \frac{1}{2}\Gamma\left(S_{i}\to ZA^{0}\right)\approx\frac{1}{2} \Gamma\left(S_{i}\to W^{+}H^{-}\right)\approx\frac{1}{2}\Gamma\left(S_{i} \to W^{-}H^{+}\right)\] \[\approx \frac{\zeta_{S_{i}}^{2}}{32\pi}\left(\frac{m_{A^{0}}^{2}\cos \beta}{v_{A}/\sqrt{2}}\right)^{2}\frac{1}{m_{S_{i}}},\] \[\Gamma\left(S_{i}\to HH\right)\approx \Gamma\left(S_{i}\to A^{0}A^{0}\right)\approx\frac{1}{2} \Gamma\left(S_{i}\to H^{+}H^{-}\right)\] \[\approx \frac{\zeta_{S_{i}}^{2}}{16\pi}\left(\frac{\mu^{2}}{v_{A}/\sqrt{ 2}}\right)^{2}\left(1+\frac{m_{A^{0}}^{2}\cos^{2}\beta}{\mu^{2}}\right)^{2} \frac{1}{m_{S_{i}}},\] \[\Gamma\left(S_{i}\to gg\right)\approx \frac{\left(\alpha_{s}C_{S_{i}}\right)^{2}m_{S_{i}}^{3}}{64\pi^{3} \left(v_{A}/\sqrt{2}\right)^{2}}. \tag{86}\]
Saxions and axinos can also decay to charginos or neutralinos, and corresponding decay widths are
\[\Gamma(S_{i}\to neutralinos)\approx\Gamma(S_{i}\to charginos)\approx \frac{\zeta_{S_{i}}^{2}}{64\pi}(\frac{\mu}{v_{A}/\sqrt{2}})^{2}m_{S_{i}},\] \[\Gamma(\tilde{a}_{i}\to neutralinos+Z)\approx\frac{\zeta_{\tilde{a}_ {i}}^{2}}{32\pi}(\frac{\mu}{v_{A}/\sqrt{2}})^{2}m_{\tilde{a}_{i}},\] \[\Gamma(\tilde{a}_{i}\to neutralinos+higgses)\approx 2\times 3 \times\frac{\zeta_{\tilde{a}_{i}}^{2}}{64\pi}(\frac{\mu}{v_{A}/\sqrt{2}})^{2} m_{\tilde{a}_{i}},\] \[\Gamma(\tilde{a}_{i}\to charginos^{\pm}+W^{\mp})\approx\Gamma( \tilde{a}_{i}\to charginos^{\pm}+H^{\mp})\approx\frac{\zeta_{\tilde{a}_{i}}^{2} }{16\pi}(\frac{\mu}{v_{A}/\sqrt{2}})^{2}m_{\tilde{a}_{i}}. \tag{87}\]
Other decay channels have been omitted because they are not the primary decay channels in our assumed parameter space, and they can be found in the literature [76, 77]. Using Eq.(50), we can calculate the branching ratio of the decay process \(S_{i}\to AA\), which is denoted by \(BR_{S_{i}}\)
\[BR_{S_{i}}=\Gamma\left(S_{i}\to AA\right)/\Gamma_{\rm total}\left(S_{i}\right), \tag{88}\]
and it will be used to study the dark radiation constraints below. To calculate the decay temperatures of the saxions and axinos, we assume that the decays occur in the radiation-dominated period, hence the decay temperature \(T_{D}\) is given by
\[T_{D}\approx(\frac{90}{\pi^{2}g_{*}(T_{D})})^{\frac{1}{4}}\sqrt{ \Gamma M_{P}}. \tag{89}\]
In order to guarantee that the decays of the saxions and axinos do not affect the relic density of \(\tilde{\chi}\) given above, the saxions and axinos should decay before the LSP freeze-out. For \(\mu=250\)GeV, the corresponding LSP freeze-out temperature \(T_{fr}^{\tilde{\chi}}\) is approximately 10GeV, implying that \(T_{D}>10\)GeV is needed under the previous assumption.
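As an illustration, the sketch below evaluates Eq.(89) for a partial sum of saxion channels from Eqs.(86)-(87). All inputs (\(\zeta_{S_i}=C_{S_i}=1\), \(\alpha_s=0.1\), \(m_{S_i}=10\) TeV, \(f_A=3\times 10^{11}\) GeV with \(N=1/2\), \(g_*(T_D)=86.25\), reduced \(M_P\)) are illustrative assumptions, and only three channels are summed, so the result is indicative only.

```python
import math
# Illustrative decay-temperature estimate, Eq. (89), summing three
# representative saxion channels from Eqs. (86)-(87); all inputs assumed.
M_P = 2.435e18                                       # reduced Planck mass, GeV
mu, zeta, C_S, alpha_s, m_S = 250.0, 1.0, 1.0, 0.1, 1.0e4   # GeV where dimensionful
v = 3.0e11 / math.sqrt(2)                            # v_A/sqrt(2), N = 1/2, f_A = 3e11 GeV

g_neut = zeta**2 / (64 * math.pi) * (mu / v)**2 * m_S        # S -> neutralinos (Eq. 87)
g_char = g_neut                                              # S -> charginos  (Eq. 87)
g_gg   = (alpha_s * C_S)**2 * m_S**3 / (64 * math.pi**3 * v**2)  # S -> gg (Eq. 86)
gamma  = g_neut + g_char + g_gg                              # partial sum only

g_star = 86.25
T_D = (90 / (math.pi**2 * g_star))**0.25 * math.sqrt(gamma * M_P)   # Eq. (89)
print(f"Gamma ~ {gamma:.2e} GeV, T_D ~ {T_D:.0f} GeV")  # ~2.5e-16 GeV, ~14 GeV
```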
Secondly, we consider the possibility that there exists a cosmological era dominated by saxions or axinos. In the early Universe, the reheating temperature \(T_{R}\) is a crucial factor affecting the evolution of the Universe. If \(T_{R}\) is larger than the saxion (axino) decoupling temperature \(T_{dep}\), the saxions (axinos) were in thermal equilibrium. The decoupling temperature \(T_{dep}\) can be approximated as [78; 79; 80; 81]
\[T_{dep}^{S} \approx 1.4\times 10^{9}(\frac{f_{A}}{10^{11}\text{GeV}})^{2} \text{GeV},\] \[T_{dep}^{\tilde{a}} \approx 10^{11}(\frac{0.1}{\alpha_{s}})^{3}(\frac{f_{A}}{10^{12} \text{GeV}})^{2}\text{GeV}, \tag{90}\]
and the yield of saxions and axinos in this case are given by[78]
\[Y_{S_{i}}^{eq}\approx 1.2\times 10^{-3},\ Y_{\tilde{a}_{i}}^{eq}\approx 1.8 \times 10^{-3}. \tag{91}\]
If the reheating temperature is smaller than the saxion (axino) decoupling temperature \(T_{dep}\), the saxions (axinos) would never be in thermal equilibrium. In this case, the saxions or axinos could be produced via thermal production or coherent oscillation. In addition, their yield may be \(T_{R}\)-dependent or -independent, depending on the model studied. We can compare \(T_{D}\) with the saxion/axino-radiation equality temperature \(T_{e}=\frac{4}{3}m(Y^{TP}+Y^{CO})\) to determine whether a saxion- or axino-dominated Universe exists, where \(Y^{TP}\) is the yield from thermal production and \(Y^{CO}\) is the yield from coherent oscillation. If the saxion/axino decay temperature \(T_{D}\) is larger than \(T_{e}\), such a cosmological era never occurred.
Finally, we consider constraints from dark radiation. Relativistic axion could be produced from the saxion decay or thermal scattering, and gives contribution to the effective number of neutrinos [78]
\[\Delta N_{eff}\approx\Delta N_{eff}^{TP}+\sum_{i}\frac{18}{r}BR_{S_{i}}g_{*}(T _{D}^{S_{i}})^{-\frac{1}{3}}\frac{m_{S_{i}}(Y_{S_{i}}^{CO}+Y_{S_{i}}^{TP})}{ T_{D}^{S_{i}}}. \tag{92}\]
In Eq.(92), the maximum of \(\Delta N_{eff}^{TP}\) could be estimated as about \(9.5\times 10^{-3}\)[78] which is much smaller than the bound of \(\Delta N_{eff}^{exp}=0.17\)[82] and can be safely neglected. The factor
\(r\simeq\max[1,\sum_{i}(\frac{T_{e}^{S_{i}}}{T_{D}^{S_{i}}}+\frac{T_{e}^{\tilde{a}_{i}}}{T_{D}^{\tilde{a}_{i}}})]\) represents the entropy dilution[78]. We set \(r\) to 1 in this work, since \(T_{e}\ll T_{D}\) is satisfied for all the saxions and axinos in the parameter space of our interest. Besides, a factor \(r>1\) gives a smaller \(\Delta N_{eff}\), which would make the dark radiation constraint less significant.
#### iv.2.1 Numerical analysis for the model \(\rm E_{I}\)
In model \(\rm E_{I}\), we first make a numerical analysis based on the hypothesis that the parameters satisfy \(m_{X}=m_{Y}=m_{Z}=m_{\rm xyz}\), \(\lambda_{1}=\lambda_{2}=\lambda\), \(a_{1}=a_{2}=a_{\lambda}\) and \(r_{p}\equiv\left(\frac{a_{\lambda}}{\lambda}\right)^{2}/m_{\rm xyz}^{2}\), and then give a more general analysis by scanning the parameter space of all seven parameters.
The thermal production of saxions and axinos are via the anomaly interaction [76; 77; 81]:
\[\mathcal{L}_{anomaly}=\frac{\alpha_{s}}{8\pi f_{A}}(SG_{\mu\nu}^{a}G^{a\mu\nu }-i\bar{\tilde{a}}\frac{[\gamma^{\mu},\gamma^{\nu}]}{2}\gamma_{5}\tilde{g}^{a }G_{\mu\nu}^{a}). \tag{93}\]
These couplings lead to thermally produced densities of saxions and axinos that depend on the reheating temperature \(T_{R}\). This is unlike the pure DFSZ SUSY axion models such as \(\rm B_{I}\), \(\rm B_{II}\) or \(\rm B_{III}\), where the saxion and axino abundances from thermal production are independent of \(T_{R}\), since there the heaviest PQ-charged and gauge-charged matter supermultiplets are the MSSM Higgs doublets \(H_{u},H_{d}\)[83; 84]. In model \(\rm E_{I}\), the heaviest PQ-charged and gauge-charged matter supermultiplets are vector-like quarks, just as in the SUSY KSVZ model. Therefore, the yields of saxion and axino are given by [78; 85]
\[Y_{S}^{TP} \approx 1.33\times 10^{-5}g_{3}^{6}\log(\frac{1.01}{g_{3}})(\frac{ 10^{12}\rm GeV}{f_{A}})^{2}(\frac{T_{R}}{10^{8}\rm GeV}),\] \[Y_{\tilde{a}}^{TP} \approx 2\times 10^{-7}g_{3}^{6}\log(\frac{1.211}{g_{3}})(\frac{ 10^{11}\rm GeV}{f_{A}})^{2}(\frac{T_{R}}{10^{4}\rm GeV}). \tag{94}\]
The saxion abundance could also be produced via coherent oscillations. However, the yield \(Y_{S_{i}}^{CO}\) is quite small for \(f_{A}\leq 10^{12}\rm GeV\), so we can safely neglect it [78].
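The yields in Eq.(94) are simple plug-in formulas; in the following sketch, the value \(g_3=0.85\) for the strong gauge coupling and the sample \((f_A,T_R)\) are illustrative assumptions, and the logarithm is taken as natural (our reading).

```python
import math
# Plug-in evaluation of the thermal-production yields of Eq. (94).
def Y_S_TP(g3, f_A, T_R):
    return 1.33e-5 * g3**6 * math.log(1.01 / g3) * (1e12 / f_A)**2 * (T_R / 1e8)

def Y_axino_TP(g3, f_A, T_R):
    return 2e-7 * g3**6 * math.log(1.211 / g3) * (1e11 / f_A)**2 * (T_R / 1e4)

# illustrative sample: g_3 = 0.85, f_A = 3e11 GeV, T_R = 1e9 GeV
print(f"Y_S ~ {Y_S_TP(0.85, 3e11, 1e9):.1e}")          # ~9.6e-5
print(f"Y_axino ~ {Y_axino_TP(0.85, 3e11, 1e9):.1e}")  # ~3.0e-4
```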
Under the assumption that there are three free parameters \(\lambda\), \(r_{p}\) and \(m_{\rm xyz}^{2}\), we investigate the allowed parameter space for model \(\rm E_{I}\). To begin, the \(f_{A}\)-\(r_{p}\) and \(m_{i}\)-\(r_{p}\) curves are shown in Fig.4, where \(i\) represents \(A_{i}^{\prime}\), \(S_{i}\) and \(\tilde{a}_{i}\).
From the left panel, we find that the appropriate parameters for \(f_{A}\leq 10^{12}\rm GeV\) primarily fall within the ranges where \(m_{\rm xyz}^{2}\) is small and \(\lambda\) is large (e.g., \(m_{\rm xyz}^{2}=10^{6}\rm GeV^{2}\) and \(\lambda=1\)).
Combined with Eq.(85), this behavior is easily understood, even though that equation is only approximate. We also find that \(f_{A}\) attains its smallest value when the parameter \(r_{p}\) is approximately equal to 40; such an \(r_{p}\) is very attractive because the quality problem is then easier to avoid.
In the right panel, a value of \(r_{p}\) close to 40 is also very appropriate, because we expect all these particles to have relatively heavy masses so that they can successfully decay to SM or MSSM particles. When \(r_{p}\) takes a larger or smaller value, on the other hand, the lightest particle shown in Fig.4, which could be the axino \(\tilde{a}_{1}\) or the saxion \(S_{1}\), is usually much lighter than in the \(r_{p}\approx 40\) case. As a result, some decay channels in Eqs.(86) and (87) may close, and the relic density of the LSP may change, since the decay temperature \(T_{D}\) may be much lower than the LSP freeze-out temperature \(T_{fr}^{\tilde{\chi}}\). Besides, we note that the masses of these particles are positively correlated with the parameter \(m_{\rm xyz}^{2}\), and the curves in the right panel of Fig.4 would be translated up or down by adjusting \(m_{\rm xyz}^{2}\) (for simplicity, we do not show this in Fig.4). For this reason, \(m_{\rm xyz}^{2}\geq 10^{8}{\rm GeV}^{2}\) is needed if we expect the masses of these particles to be heavier than about 5TeV.
Secondly, we plot the dependence of the decay temperature \(T_{D}\) (left panel) and the saxion/axino-radiation equality temperature \(T_{e}\) (right panel) on \(f_{A}\) in Fig.5, where we have used the parameters \(r_{p}=40\), \(\lambda\in[0.1,1]\) and the saxion (axino) thermal production yields
\(Y_{S_{i}}^{TP}=\text{Min}[Y_{S_{i}}^{eq},|C_{S_{i}}|^{2}Y_{S}^{TP}]\) (\(Y_{\tilde{a}_{i}}^{TP}=\text{Min}[Y_{\tilde{a}_{i}}^{eq},|C_{\tilde{a}_{i}}|^{2}Y _{\tilde{a}}^{TP}]\)). In this work, we are interested in the circumstance that \(T_{R}\) is smaller than \(10^{9}\text{GeV}\) and, therefore, the parameter \(T_{R}\) for analysis in the right panel is taken to be \(10^{9}\text{GeV}\). We did not draw curves for smaller \(T_{R}\) in Fig.5, since smaller \(T_{R}\) would cause \(T_{e}\) to become smaller and the restrictions on parameters would be more relaxed.
As can be seen in Fig.5, \(T_{e}\) is always smaller than \(T_{D}\) for both values of \(m_{\text{xyz}}^{2}\). However, in the case of \(m_{\text{xyz}}^{2}=10^{8}\text{GeV}^{2}\), the decay temperatures of \(S_{1}\) and \(\tilde{a}_{1}\) would fall below \(T_{fr}^{\tilde{\chi}}\) if \(f_{A}\) were larger than about \(3.4\times 10^{11}\text{GeV}\). In order not to affect the assumed relic density of the LSP, a large \(m_{\text{xyz}}^{2}\) is therefore also necessary for a large \(f_{A}\). Based on this analysis, there always exists viable parameter space for \(f_{A}\in[2.7\times 10^{11},10^{12}]\text{GeV}\), in which the saxions and axinos decay before LSP freeze-out and a saxion- or axino-dominated era never occurs.
Finally, we give the constraint from dark radiation in Fig.6. We used the parameters \(r_{p}=40\), \(\lambda\in[0.1,1]\) and \(m_{\text{xyz}}^{2}\) being equal to \(1\times 10^{8}\text{GeV}^{2}\) or \(5\times 10^{9}\text{GeV}^{2}\). For \(T_{R}\in[10^{6},10^{9}]\text{GeV}\), it is clear that the dark radiation constraint can be easily satisfied for all
Figure 5: The dependence of the decay temperature \(T_{D}\) (left panel) and the saxion/axino-radiation equality temperature \(T_{e}\) (right panel) on \(f_{A}\). Lines colored purple, orange, green, blue, cyan and red belong to \(S_{1},S_{2},S_{3},\tilde{a}_{1},\tilde{a}_{2}\) and \(\tilde{a}_{3}\) respectively. The dashed (dotted) line represents that the parameter \(m_{\text{xyz}}^{2}\) is equal to \(1\times 10^{8}\text{GeV}^{2}\) (\(5\times 10^{9}\text{GeV}^{2}\)), and the black solid line represents the LSP freeze-out temperature \(T_{fr}^{\tilde{\chi}}\) which is approximately \(10\text{GeV}\).
\(f_{A}\in[2.7\times 10^{11},10^{12}]\)GeV.
As a more general case, the seven free parameters are actually not restricted by the relations we imposed above, so the parameter space is much larger than what we discussed. Clearly, it is difficult to analyze such a complicated parameter space analytically, and scanning it with the necessary restrictions is more economical and efficient. Thus, we present results from scans of the following parameter space:
\[\lambda_{1},\lambda_{2}\in[0.01,10],\] \[a_{1},a_{2}\in[10^{3},10^{5}]\text{GeV},\] \[m_{X}^{2},m_{Y}^{2},m_{Z}^{2}\in[10^{6},10^{10}]\text{GeV}^{2}. \tag{95}\]
The restrictions imposed on these parameters are as follows: 1. The VEVs of \(X,Y\) and \(Z\) are in \([10^{9},10^{12}]\)GeV, so that the resulting axion is an invisible axion. 2. The masses of the saxions and axinos are heavier than 2TeV, so that the dominant decay channels to MSSM or SM particles can safely open. 3. The decay temperatures \(T_{D}\) of the saxions (axinos) are larger than their \(T_{e}\), so that a saxion- or axino-dominated era never occurs. 4. The decay temperatures \(T_{D}\) of the saxions (axinos) are larger than
Figure 6: The constraint from dark radiation. The dashed black line is the upper bound of \(\Delta N_{eff}^{exp}\)[82], and the dashed (dotted) line represents that the parameter \(m_{\text{xyz}}^{2}\) is equal to \(1\times 10^{8}\text{GeV}^{2}\) (\(5\times 10^{9}\text{GeV}^{2}\)). Lines colored red (or orange) and blue (or cyan) belong to \(T_{R}=10^{9}\)GeV and \(T_{R}=10^{6}\)GeV respectively.
the LSP freeze-out temperature \(T_{fr}^{\tilde{\chi}}\), so that the relic density of the LSP is not affected by saxion or axino decays. 5. The resulting effective number of neutrinos is below the upper bound given by Ref. [82]. A simplified version of this scan is sketched below.
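The sketch adopts the equal-parameter assumption of the previous subsection so that Eq.(85) gives the common VEV in closed form, and it applies only restriction 1; the remaining restrictions require the full mass matrices and decay widths and are applied in the full scan described above.

```python
# Simplified scan over the Eq.-(95) ranges with lambda_1 = lambda_2,
# a_1 = a_2 and m_X = m_Y = m_Z, keeping points whose common VEV from
# Eq. (85) lies in [1e9, 1e12] GeV (restriction 1 only).
import numpy as np

rng = np.random.default_rng(1)
M_P = 2.435e18  # reduced Planck mass, GeV (assumed convention)

def sample(n):
    lam = 10 ** rng.uniform(-2, 1, n)          # lambda in [0.01, 10]
    a   = 10 ** rng.uniform(3, 5, n)           # a in [1e3, 1e5] GeV
    m2  = 10 ** rng.uniform(6, 10, n)          # m^2 in [1e6, 1e10] GeV^2
    disc = a**2 - (135.0 / 8.0) * lam**2 * m2  # local minimum exists if disc > 0
    ok = disc > 0
    S2 = (8 * a[ok] + 8 * np.sqrt(disc[ok])) / (180 * lam[ok]**2) * M_P  # Eq. (85)
    S = np.sqrt(S2)
    keep = (S > 1e9) & (S < 1e12)
    return lam[ok][keep], S[keep]

lam, S = sample(100_000)
print(f"{lam.size} viable points; VEV range {S.min():.2e}-{S.max():.2e} GeV")
```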
Under these restrictions, the viable parameter space for \(\lambda_{i}\) has been plotted in Fig.7 (\(\rm E_{I}\)).
In this figure, the sampling points are colored with various hues according to the value of \(f_{A}\). For \(\lambda_{1},\lambda_{2}\in[0.1,1]\), as one expects, the points with smaller \(f_{A}\) gather in the corner \(\lambda_{1}\sim\lambda_{2}\sim 1\), just like the \(\lambda_{1}=\lambda_{2}=\lambda\) case. In addition, we find that the resulting \(f_{A}\) can be much smaller than \(2.7\times 10^{11}\)GeV, so the smallest \(f_{A}\) that this model can offer can be roughly obtained by such a scan. In model \(\rm E_{I}\), since the DW number \(N_{DW}=1\), the axion relic density can be given by Eq.(59), corresponding to the post-inflationary scenario. Under the assumption that the axion relic density \(\Omega_{A}h^{2}\) is approximately 0.113, the axion decay constant \(f_{A}\approx 1.11(2.59)\times 10^{11}\)GeV is required for \(\langle\theta_{i}^{2}\rangle\simeq 2.96^{2}(1.81^{2})\)
Figure 7: The parameter space of \(\lambda_{1}\) versus \(\lambda_{2}\) for different models. The sampling points are colored with various hues according to the value of \(f_{A}\).
and this can easily be achieved in the assumed parameter space. On the other hand, the PQ symmetry may be broken before or during the inflation. In such a pre-inflationary scenario, the relic density of axion \(\Omega_{A}h^{2}\approx 0.113\) is more easily to be realized, and we plot the misalignment angle required in Fig.8.
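The numbers quoted above can be cross-checked with the standard vacuum-misalignment estimate \(\Omega_{A}h^{2}\approx 0.18\,\langle\theta_{i}^{2}\rangle(f_{A}/10^{12}\,\text{GeV})^{1.19}\); note that this prefactor and exponent are common textbook values rather than Eq. (59) itself, so the agreement is only approximate (anharmonic corrections near \(\theta_{i}\to\pi\) are neglected).

```python
def f_A_required(theta_i, omega=0.113):
    """Invert Omega h^2 ~ 0.18 theta_i^2 (f_A / 1e12 GeV)^1.19 for f_A [GeV]."""
    return 1e12 * (omega / (0.18 * theta_i**2)) ** (1 / 1.19)

for theta_i in (2.96, 1.81):
    print(f"theta_i = {theta_i}: f_A ~ {f_A_required(theta_i):.2e} GeV")
# -> ~1.1e11 GeV and ~2.5e11 GeV, close to the 1.11 and 2.59 x 10^11 GeV
#    quoted above for the post-inflationary scenario.
```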
#### iv.2.2 Numerical analysis for the models \(\rm B_{I}\), \(\rm B_{II}\) and \(\rm B_{III}\)
In the base models \(\rm B_{I}\), \(\rm B_{II}\) and \(\rm B_{III}\), the axion decay constant becomes \(f_{A}=\frac{v_{A}}{2N}\), where \(N=3\). The heaviest PQ-charged and gauge-charged matter supermultiplets are the MSSM Higgs doublets \(H_{u}\) and \(H_{d}\), so the yields of the saxions and axinos in these models, following Ref. [76], are given by
\[Y_{S_{i}}^{\rm TP} \approx 10^{-7}|C_{S_{i}}|^{2}\left(\frac{\mu}{\rm TeV}\right)^{2}\left( \frac{10^{12}\,{\rm GeV}}{f_{A}}\right)^{2}, \tag{96}\] \[Y_{\tilde{a}_{i}}^{\rm TP} \approx 10^{-7}|C_{\tilde{a}_{i}}|^{2}\left(\frac{\mu}{\rm TeV}\right)^ {2}\left(\frac{10^{12}\,{\rm GeV}}{f_{A}}\right)^{2}. \tag{97}\]
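For orientation, Eqs. (96)-(97) can be evaluated directly. The sketch below takes \(|C_{S_{i}}|^{2}=|C_{\tilde{a}_{i}}|^{2}=1\) and the \(\mu=250\) GeV benchmark adopted later in the text; both are illustrative inputs rather than fitted values.

```python
def yield_TP(C2, mu_TeV, f_A_GeV):
    """Thermal-production yield of a saxion or axino, Eqs. (96)-(97)."""
    return 1e-7 * C2 * mu_TeV**2 * (1e12 / f_A_GeV) ** 2

for f_A in (1e10, 1e11, 1e12):                 # GeV
    y = yield_TP(C2=1.0, mu_TeV=0.25, f_A_GeV=f_A)
    print(f"f_A = {f_A:.0e} GeV  ->  Y^TP ~ {y:.2e}")
# the yield falls as 1/f_A^2, so larger decay constants suppress the
# saxion/axino abundances entering the T_D > T_e and dark-radiation cuts
```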
Similar to the \(\rm E_{I}\) case, we present the viable parameter space of the parameter \(\lambda_{1}\) versus the parameter \(\lambda_{2}\) in Fig.7 (\(\rm B_{I}\)), (\(\rm B_{II}\)) and (\(\rm B_{III}\)) for models \(\rm B_{I}\), \(\rm B_{II}\) and \(\rm B_{III}\) respectively. The parameter space scanned and corresponding restrictions are the same as in the \(\rm E_{I}\) case. From these figures, we find that there remain large areas that can offer appropriate \(f_{A}\) in all these base models, and the lower limits of the available \(f_{A}\) can also be roughly estimated. In Table 9, we present some sample points so that more information about these models can be conveyed to the reader.
Based on these analyses, for an axion relic density \(\Omega_{A}h^{2}\simeq 0.113\), the required initial condition \(\theta_{i}\) in the pre-inflationary scenario is shown in Fig. 8, and the available range of \(f_{A}\) in each model is marked with a different transparent color. From this figure, it is clear that all these models can provide an appropriate \(f_{A}\) that gives the correct axion dark matter relic density. For the base models, an \(f_{A}\) as small as \(10^{10}\)GeV can be obtained; the initial condition \(\theta_{i}\), meanwhile, should then be close to \(\pi\). On the other hand, \(\theta_{i}\lesssim 2.5\) is required for model \(\rm E_{I}\) due to the lower limit of its \(f_{A}\).
### Constraints from \(g_{A\gamma\gamma}\) on \(f_{A}\)
Before closing this section, we consider the experimental limitations from \(g_{A\gamma\gamma}\) on \(f_{A}\). For the current bounds, we include limits from the helioscope CAST [86; 87], ADMX [88; 89; 90; 91], CAPP [92], RBF [93], UF [94], HAYSTAC [95; 96], QUAX [97], and ORGAN [98]. We also consider the projections for future sensitivity from the helioscope IAXO [99] and haloscopes (ADMX [100], KLASH [101], MADMAX [102], plasma haloscope [103], and TOORAD [104]). We plot Fig.9 using the axion limit data collected in Ref. [105].
In Fig. 9, we show the coupling constant \(g_{A\gamma\gamma}\) as a function of the axion mass \(m_{A}\) and decay constant \(f_{A}\) in the base models (which coincide with the SUSY DFSZ-I [106] model), and in the extension \(\rm E_{I}\). The non-SUSY DFSZ-I model [17] is also plotted in this figure for comparison. All three base models have \(E/N=2\), which is close to canceling the contribution \(-1.92\) and, therefore, they have a suppressed coupling compared to the non-SUSY case.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline Models & \multicolumn{2}{c|}{\(\rm E_{I}\)} & \multicolumn{2}{c|}{\(\rm B_{I}\)} & \multicolumn{2}{c|}{\(\rm B_{II}\)} & \multicolumn{2}{c|}{\(\rm B_{III}\)} \\ \hline Samples & case 1 & case 2 & case 1 & case 2 & case 1 & case 2 & case 1 & case 2 \\ \hline \hline \(\lambda_{1}\) & 0.74 & 2.14 & 0.76 & 3.36 & 0.94 & 8.72 & 0.84 & 7.21 \\ \hline \(\lambda_{2}\) & 0.98 & 4.19 & 0.75 & 3.94 & 0.38 & 3.81 & 0.98 & 4.49 \\ \hline \(a_{1}\)[TeV] & 21.584 & 50.927 & 7.8217 & 97.109 & 16.168 & 2.6443 & 16.039 & 99.894 \\ \hline \(a_{2}\)[TeV] & 21.369 & 92.145 & 13.089 & 45.996 & 6.4091 & 61.504 & 15.595 & 40.330 \\ \hline \(m_{X}^{2}\)[TeV\({}^{2}\)] & 3.6921 & 1.3270 & 2.0396 & 1.7003 & 8.3105 & 1.3887 & 19.376 & 6.8184 \\ \hline \(m_{Y}^{2}\)[TeV\({}^{2}\)] & 1738.7 & 615.01 & 187.74 & 536.27 & 932.00 & 927.96 & 22.229 & 16.492 \\ \hline \(m_{Z}^{2}\)[TeV\({}^{2}\)] & 45.075 & 65.873 & 29.698 & 14.692 & 19.147 & 1.3642 & 1.2364 & 2.6549 \\ \hline \(m_{S_{1}}\)[TeV] & 10.066 & 8.6362 & 6.2595 & 3.2893 & 7.7607 & 10.662 & 5.1166 & 2.9237 \\ \hline \(m_{\tilde{a}_{1}}\)[TeV] & 2.3640 & 2.5596 & 2.0590 & 2.5547 & 2.8342 & 2.1732 & 4.8160 & 3.3864 \\ \hline \(f_{A}\)[\(10^{9}\)GeV] & 160.99 & 78.286 & 25.913 & 10.330 & 19.117 & 6.4615 & 26.829 & 8.0113 \\ \hline \end{tabular}
\end{table}
Table 9: Sample points for different models. The cases 1 and 2 in this table correspond to the smallest \(f_{A}\) in all samples with parameters \(\lambda_{1,2}\in[0.1,1]\) and \([1,10]\) respectively. In the following, we take the values of \(f_{A}\) in case 2 as the lower limits of the available \(f_{A}\) for each model.
This implies that these models fortunately have not been ruled out, but unfortunately are unlikely to be verified or tested by future experiments. Model E\({}_{\rm I}\), however, with \(N_{DW}=1\) avoiding the cosmological DW problem, provides the smallest \(|N|\) and therefore raises this low-energy effective coupling, making detection more feasible. For \(f_{A}\leq 10^{12}\)GeV, only one value of \(f_{A}\), near \(2.4\times 10^{11}\)GeV (corresponding to \(m_{A}\approx 2.4\times 10^{-5}\)eV), has been excluded. For the pre-inflationary scenario, a large viable parameter space remains. For the post-inflationary scenario, we expect \(f_{A}=1.11\,(2.59)\times 10^{11}\)GeV corresponding to \(\langle\theta_{i}^{2}\rangle\simeq 2.96^{2}\,(1.81^{2})\), and this is also viable under these constraints. We also show the projections for future experiments in Fig. 10, which indicates that our model E\({}_{\rm I}\) could be tested in the future.
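The curves in Figs. 9 and 10 follow from the standard relations \(m_{A}\approx 5.7\,\mu\text{eV}\times(10^{12}\,\text{GeV}/f_{A})\) and \(g_{A\gamma\gamma}=\frac{\alpha_{em}}{2\pi f_{A}}|E/N-1.92|\); these are textbook expressions rather than results derived here, but a quick evaluation reproduces the quoted correspondence \(f_{A}\approx 2.4\times 10^{11}\)GeV \(\leftrightarrow m_{A}\approx 2.4\times 10^{-5}\)eV.

```python
import math

ALPHA_EM = 1 / 137.036                     # fine-structure constant

def m_axion_eV(f_A_GeV):
    """QCD axion mass: m_A ~ 5.7 ueV * (1e12 GeV / f_A)."""
    return 5.7e-6 * (1e12 / f_A_GeV)

def g_agg(f_A_GeV, E_over_N):
    """Axion-photon coupling in GeV^-1: (alpha / 2 pi f_A) |E/N - 1.92|."""
    return ALPHA_EM / (2 * math.pi * f_A_GeV) * abs(E_over_N - 1.92)

f_A = 2.4e11                               # GeV, the excluded value above
print(f"m_A = {m_axion_eV(f_A):.1e} eV")   # -> ~2.4e-5 eV, as quoted
print(f"base models (E/N = 2): g = {g_agg(f_A, 2.0):.2e} GeV^-1")
# model E_I has a smaller |N|, hence a larger |E/N - 1.92| and coupling
```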
## V Conclusion
In this paper, we introduce four SUSY models in which the \(\mu\) problem is solved by the Kim-Nilles mechanism and the strong CP problem is addressed by imposing a global \(U(1)_{PQ}\) symmetry. Under the PQ symmetry, a pseudo-Nambu-Goldstone boson, the axion \(A\), which is also a popular candidate for cold dark matter, appears. With the enrichment of the model content, however, axion physics imposes many constraints on the SUSY model; for example, the domain wall problem and the quality problem must be taken into account. To tackle the domain wall problem, one can assume that the PQ symmetry is broken before or during inflation; alternatively, the DW number \(N_{DW}\) of the model needs to be 1. In the three base models B\({}_{\rm I,II,III}\), \(N_{DW}\neq 1\) and, therefore, we discuss the axion production mechanism in the pre-inflationary scenario. In model E\({}_{\rm I}\), which is obtained by adding vector-like pairs of chiral superfields \(\Phi+\bar{\Phi}\) to the superpotential of the base model B\({}_{\rm I}\), \(N_{DW}=1\) can be achieved, so the possibility remains that the axion is produced in the post-inflationary scenario. Meanwhile, a global symmetry is not respected by quantum gravitational effects, which leads to the quality problem. We assume that our model has a \(Z_{n}^{R}\) discrete symmetry, which can be a subgroup of an anomaly-free continuous \(U(1)\) symmetry, so that the global \(U(1)_{PQ}\) symmetry can be seen as an accidental consequence of the discrete symmetry. We consider that \(Z_{n}^{R}\) symmetries may be made consistent with an \(SU(5)\) or Pati-Salam \(SU(4)_{C}\otimes SU(2)_{L}\otimes U(1)_{R}\) embedding and find that the latter works well for our models.

Figure 8: The misalignment angle required to match the axion relic density \(\Omega_{A}h^{2}\simeq 0.113\) in the pre-inflationary scenario. The grey, red, blue and green lines represent \(f_{A}\) taking the values that belong to case 2 of the models E\({}_{\rm I}\), B\({}_{\rm I}\), B\({}_{\rm II}\) and B\({}_{\rm III}\) in Table 9 respectively, and the solid black line gives the axion relic density \(\Omega_{A}h^{2}\simeq 0.113\). We have filled the gaps between the upper boundary (\(f_{A}=10^{12}\)GeV) and the lower limits (the colored lines) for these four models in different transparent colors.

Figure 9: Model predictions for \(g_{A\gamma\gamma}\) as a function of \(m_{A}\) and \(f_{A}\) in different models, with current bounds from various experiments.

Figure 10: Model predictions for \(g_{A\gamma\gamma}\) as a function of \(m_{A}\) and \(f_{A}\) in different models, with projections for future experiments.
In this paper, we assume that the LSP \(\tilde{\chi}\), which is the lightest neutralino, and the axion together constitute the dark matter that fits the Planck measurement. For the sake of the naturalness discussed in Sec. III, we prefer a relatively small parameter \(\mu\) of the SUSY theory and, therefore, take a higgsino-like neutralino as the LSP. We plot the viable parameter space that can explain the latest experimental data on the anomalous magnetic moment of the muon \(a_{\mu}\); in these regions, agreement between experiment and theoretical prediction within \(2\sigma\) can be realized. Meanwhile, some regions are not excluded by direct detection experiments such as XENON-1T, but can be tested by future experiments. As a conservative choice of \(\mu=250\)GeV, we obtain that the LSP contributes about \(5.8\%\) of the total dark matter relic density.
For the axion as the other part of dark matter, we review the conventional misalignment mechanism and the kinetic misalignment mechanism, and deduce the rotation of the axion for our model. In the numerical analysis, we analyze the parameter space in which the KMM is at play. It turns out that the saxions and axinos would then receive masses that are too light, causing LSP decay, contrary to our previous assumptions. Therefore, when the masses of these particles are heavy (e.g., above 2 TeV), the KMM is ineffective, and the axion is produced through the CMM.
In order to discuss the constraints from the cosmology of saxions and axinos, we analyze the parameter space of all four models by imposing specific relationships on the parameters or by directly scanning all the parameters involved. For model E\({}_{\rm I}\), we plot diagrams that compare the decay temperature \(T_{D}\), the LSP freeze-out temperature \(T_{fr}^{\tilde{\chi}}\) and the saxion/axino-radiation equality temperature \(T_{e}\) for a reheating temperature \(T_{R}=10^{9}{\rm GeV}\). It turns out that there remains a large parameter space in which a saxion- or axino-dominated era never occurs, and the saxions and axinos can all decay before LSP freeze-out in the considered region of \(f_{A}\). In addition, we consider the constraints from dark radiation and find that the effective number of neutrinos \(\Delta N_{eff}\) is always under the current bound for \(T_{R}\leq 10^{9}\rm{GeV}\). For the base models, we present results obtained by scanning the parameter space, and these models all give a smaller lower limit of \(f_{A}\) than model \(\rm{E_{I}}\). The lower limits of \(f_{A}\) are \(1.03\times 10^{10},6.46\times 10^{9},8.01\times 10^{9}\) and \(7.83\times 10^{10}\rm{GeV}\) for models \(\rm{B_{I,II,III}}\) and \(\rm{E_{I}}\) respectively.
Finally, combining the constraints from experiments that probe \(g_{A\gamma\gamma}\), we analyze axion production via the CMM in our models. In the base models, since the DW number \(N_{DW}\neq 1\), the PQ symmetry should be broken before or during inflation and, therefore, all the available \(f_{A}\) smaller than \(10^{12}\rm{GeV}\) can provide the appropriate dark matter relic density \(\Omega_{A}h^{2}\simeq 0.113\). On the other hand, \(N_{DW}=1\) can be achieved in model \(\rm{E_{I}}\), so that the CMM can work in both the pre-inflationary and the post-inflationary scenario. In the pre-inflationary scenario, for \(f_{A}\in[7.83\times 10^{10}\rm{GeV},10^{12}\rm{GeV}]\), only one value of \(f_{A}\), near \(2.4\times 10^{11}\rm{GeV}\) (corresponding to \(m_{A}\approx 2.4\times 10^{-5}\rm{eV}\)), has been excluded. In the latter scenario, \(f_{A}=1.11\,(2.59)\times 10^{11}\rm{GeV}\) corresponding to \(\langle\theta_{i}^{2}\rangle\simeq 2.96^{2}\,(1.81^{2})\) is an almost fixed value that can also be realized in model \(\rm{E_{I}}\).
In summary, within any framework of the base models and the extension \(\rm{E_{I}}\), the LSP and the QCD axion can together account for the total dark matter relic density observed by the Planck satellite. The base models may remain invisible to future proposed direct axion searches due to their suppressed axion-photon-photon couplings. The model \(\rm{E_{I}}\), however, possesses an enhanced coupling, which is more likely to be detected by future axion searches. The axion, as a particle that appears very naturally in a strong-CP-free theory, is very appealing in particle physics and cosmology, and we look forward to future experimental signals that can uncover its mystery.
**Acknowledgements:** We are very grateful to Professor Shu-Min Zhao and Associate Professor Xin-Yi Zhang from Hebei University for many useful discussions. This work is supported in part by the National Natural Science Foundation of China (NNSFC) under Grants No. 12075074, No. 12175025 and No. 12147102, and by the Chongqing Graduate Research and
Innovation Foundation under Grant No.ydstd1912.
|
2303.00990 | Optimizing the phase sensitivity of a Michelson interferometer with a
two mode squeezed coherent input | A Michelson-type interferometer with two-mode squeezed coherent state input
is considered. Such an interferometer has a better phase sensitivity over the
shot-noise limit by a factor of $e^{2r}$, where $r$ is the squeezing parameter
[Phys. Rev. A 102, 022614 (2020)]. We show that when photon loss and noise in
the two arms are asymmetric, an optimal choice of the squeezing angle can allow
improvement in phase sensitivity without any increase in input or pump power.
In particular, when loss occurs only in one arm of the interferometer, we can
have improvement in phase sensitivity for photon loss up to 80\%. Hence, a
significant improvement can be made in several applications such as LiDAR,
gyroscopes and measuring refractive indices of highly absorptive/reflective
materials. | Stav Haldar, Pratik J. Barge, Xiao-Qi Xiao, Hwang Lee | 2023-03-02T05:54:50Z | http://arxiv.org/abs/2303.00990v1 | Optimizing the phase sensitivity of a Michelson interferometer with a two mode squeezed coherent input
###### Abstract
A Michelson-type interferometer with two-mode squeezed coherent state input is considered. Such an interferometer has a better phase sensitivity over the shot-noise limit by a factor of \(e^{2r}\), where \(r\) is the squeezing parameter [Phys. Rev. A 102, 022614 (2020)]. We show that when photon loss and noise in the two arms are asymmetric, an optimal choice of the squeezing angle can allow improvement in phase sensitivity without any increase in input or pump power. In particular, when loss occurs only in one arm of the interferometer, we can have improvement in phase sensitivity for photon loss up to 80%. Hence, a significant improvement can be made in several applications such as LiDAR, gyroscopes and measuring refractive indices of highly absorptive/reflective materials.
## I Introduction
Quantum enhancement of phase estimation using optical interferometers is an active area of research. The phase sensitivity of an interferometer using an ordinary coherent light source scales as \(1/\sqrt{n}\), where \(n\) is the mean number of input photons. This scaling limit, which is due to the photon counting error, is called the shot-noise limit [1]. Over the last four decades, many efforts have been made to overcome this limit. Largely, there are three distinct approaches, depending on whether one uses squeezed states, photon-number states, or some combination of both.

The first one, the squeezed-state approach, is to combine ordinary coherent light with a squeezed state at the first beam splitter. It is the scheme that was proposed by Caves for gravitational wave detection in the early 1980's [2]. The phase sensitivity is shown to scale as \(e^{-r^{\prime}}/\sqrt{n}\) or better under certain conditions [3; 4; 5], with the squeezing parameter \(r\). SU(1,1) interferometers introduced by YMK, where the usual beam splitters are replaced by four-wave mixers [6], their coherently-boosted scheme [7; 8; 9], and the two-mode squeezed-vacuum scheme [10] can also be included in this category.

The second one, the number-state approach, typically uses a fixed number of photons distributed over the two input ports of the interferometer. Such an approach was first proposed by Yuen [11] and YMK [6] in the 1980's. Many different correlations between the two-mode number states were proposed to go beyond the shot-noise limit [12; 13; 14; 15; 16; 17; 18]. The phase sensitivity in this case typically scales as \(1/n\), dubbed the Heisenberg limit [19; 20]. Dual-Fock, or twin-Fock, states [12; 13; 14; 16], intelligent states [21; 22], and N00N states [23; 24; 25] are among the named correlated Fock states.

The third category is the approach that combines squeezed states and number states. This approach was first proposed by Gerry et al. [26; 27]. More often than not, the input state to the interferometer is prepared by an operation--such as photon addition, subtraction, or catalysis--made on squeezed states to achieve quantum enhancement in phase sensitivity [28; 29]. These approaches might as well be differentiated as gaussian states, non-gaussian states, and non-gaussian operations on gaussian states in more general terms.

It has been shown in a recent paper that a coherently boosted two-mode squeezed state, or two-mode squeezed coherent state (TMSCS), can be used as the interferometer input to achieve sub-shot-noise phase sensitivity [30]. The interferometer is a generic, ordinary SU(2) type with two beam splitters and intensity-difference measurement at the output--as opposed to the seeded SU(1,1) type. The phase sensitivity in this case is shown to be \(e^{-2r}/\sqrt{n}\). We have a doubly enhanced phase sensitivity: one factor from squeezing, the other from amplification. Thus, the value of \(n\) is \(e^{2r}\) times larger than the number of photons in the initial coherent states.
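To make these scalings concrete, the short sketch below tabulates the shot-noise limit, the squeezed-input scaling, the TMSCS scaling of [30] and the Heisenberg limit for representative photon numbers; the squeezing value \(r=1\) (about 8.7 dB) is an arbitrary illustration.

```python
import numpy as np

r = 1.0                                    # squeezing parameter (~8.7 dB)
for n in (1e2, 1e4, 1e6):                  # mean photon number
    snl = 1 / np.sqrt(n)                   # shot-noise limit
    sq  = np.exp(-r) / np.sqrt(n)          # Caves-type squeezed input
    tms = np.exp(-2 * r) / np.sqrt(n)      # TMSCS input, Ref. [30]
    hl  = 1 / n                            # Heisenberg limit
    print(f"n = {n:.0e}:  SNL {snl:.1e} | e^-r/sqrt(n) {sq:.1e} | "
          f"e^-2r/sqrt(n) {tms:.1e} | HL {hl:.1e}")
```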
In the present work we consider a Michelson-type interferometer with a TMSCS input. First, in Section II, we investigate the maximum amount of loss and noise tolerable while maintaining sensitivity below the shot-noise limit. In particular, we report that with an optimal choice of the squeezing angle or other input phases, the phase sensitivity can be enhanced and noise beyond the 3 dB limit can be tolerated. Next, in Section III, we analyze the complementary radiation-pressure error. We find that the radiation-pressure noise increases as the photon-counting fluctuations decrease, so that the standard quantum limit remains intact. In Section IV we present our conclusions.
## II Phase sensitivity of a Michelson interferometer
Here we describe the phase sensitivity of a Michelson interferometer (MI) with loss and noise in both arms (see Fig. 1). The relative phase \(\phi\), acquired by the photons in one arm (the variable arm) compared to the other arm (the reference arm), can be measured through its modulation of the photon-number difference at the two outputs of the interferometer (D1 and D2 in Fig. 1). This can in turn be used to estimate the path difference between the two arms. The higher the sensitivity of the photon-number difference to changes in the relative phase, the more precise the interferometer. Here, in order to model a realistic setting, we assume loss and noise in both the reference arm and the variable arm of the interferometer. |
2305.04878 | How to Leverage High Altitude Platforms in Green Computing? | Terrestrial data centers suffer from a growing carbon footprint that could
contribute $14\%$ to global CO2 emissions by 2040. High Altitude Platform
(HAP) is a promising airborne technology that can unleash the computing
frontier in the stratospheric range by hosting a flying data center. HAP
systems can endorse the sustainable green operation of data centers thanks to
the naturally low atmospheric temperature that saves cooling energy and its
large surface that can host solar panels covering energy requirements.
Throughout this article, we define the operation limitations of this innovative
solution and study the energy-efficiency-related trade-offs. Then, we shed
light on the significance of the scalability of the data center-enabled HAP
architecture by investigating potential bottlenecks and proposing different
deployment scenarios to avoid network congestion. We also highlight the
importance of the management agility of the data center-enabled HAP system by
defining effective management techniques that yield high-performing data
centers. Our results demonstrate that deploying a single data center-enabled
HAP can save $12\%$ of the electricity costs. | Wiem Abderrahim, Osama Amin, Basem Shihada | 2023-05-08T17:21:40Z | http://arxiv.org/abs/2305.04878v1 | # How to Leverage High Altitude Platforms in Green Computing?
###### Abstract
Terrestrial data centers suffer from a growing carbon footprint that could contribute \(14\%\) to global CO2 emissions by 2040. High Altitude Platform (HAP) is a promising airborne technology that can unleash the computing frontier in the stratospheric range by hosting a flying data center. HAP systems can endorse the sustainable green operation of data centers thanks to the naturally low atmospheric temperature that saves cooling energy and its large surface that can host solar panels covering energy requirements. Throughout this article, we define the operation limitations of this innovative solution and study the energy-efficiency-related trade-offs. Then, we shed light on the significance of the scalability of the data center-enabled HAP architecture by investigating potential bottlenecks and proposing different deployment scenarios to avoid network congestion. We also highlight the importance of the management agility of the data center-enabled HAP system by defining effective management techniques that yield high-performing data centers. Our results demonstrate that deploying a single data center-enabled HAP can save \(12\%\) of the electricity costs.
## I Introduction
Climate change is a momentous threat that humankind faces because it severely affects the health of the planet and jeopardizes all forms of life. Therefore, prompt and collective actions should be taken to decelerate its effects by reducing excessive carbon emissions worldwide. One of the major industries raising serious concerns in this regard is the data center industry, which is anticipated to contribute \(14\%\) to global CO2 emissions by 2040 [1]. Indeed, data centers will consume around \(13\%\) of worldwide electricity by 2030, which is predominantly generated by carbon-intensive fossil fuels [2]. Moreover, these large-scale computing infrastructures, which are already behind major energy-efficiency issues, are expected to expand exponentially to process and store our continuously growing data generated by data-intensive applications emerging in different fields such as artificial intelligence, smart cities and telecommuting [3]. For example, the total number of installed instances such as virtual machines and containers in global data centers increased from about 150 million in 2016 to 500 million in 2021 [4]. Moreover, Amazon has deployed large-scale data centers in 25 geographic regions to support 80 availability zones, and Google has recently deployed more than 18 data centers in the Americas, Asia, and Europe [1]. Given this alarming situation, joint efforts from academia and industry are required to improve the energy-efficiency of data centers, beyond recommending the commonly adopted approaches based on renewable energy usage or carbon offset mechanisms [5].
Pragmatically, non-traditional computation paradigms are needed to provide innovative and tangible solutions that vigorously tackle the energy-efficiency issues of current data centers. Different technologies can be incorporated with terrestrial data centers to contribute to environmental sustainability by providing carbon-neutral integrated data centers. Underwater data centers are currently publicized as the future of data storage thanks to their high reliability. Moreover, satellite data centers have attracted growing interest recently as the newly defined computing frontier. However, most of these data center architectures present several drawbacks, ranging from limited coverage for underwater data centers to high costs, additional delays, and limited payload for satellite data centers. Interestingly, high-altitude platforms (HAPs) seem a good trade-off among these technologies because they offer larger coverage areas than underwater systems, support more substantial payloads, and guarantee easier maintenance and lower delays than satellites.
Therefore, we believe that the data center-enabled High Altitude Platform (HAP) is a distinguishable green alternative to traditional terrestrial data centers, given the encouraging merits of HAPs. Indeed, the HAP can be a core futuristic airborne network component that will unleash the networking frontier in the stratospheric range, at an altitude between 17 km and 20 km [6, 7]. HAPs offer several unique advantages, especially from energy and communication perspectives. Firstly, the HAP location in the stratosphere saves cooling energy thanks to the naturally low atmospheric temperature (between \(-50^{\circ}\)C and \(-15^{\circ}\)C). Hence, a data center-enabled HAP can offload some workload from terrestrial data centers, saving the associated cooling energy. Moreover, a HAP can host large solar panels that harvest substantial amounts of solar energy thanks to the HAP's large surface and its location above the clouds. The HAP feeds the servers with the solar energy harvested during the daytime and stored in Lithium-Sulphur batteries during the nighttime. Hence, the harvested solar energy can amply cover the computational power required by the data center's servers, provided the necessary energy conversion and management strategies are efficiently applied [6, 7]. Secondly, the HAP's location at high altitudes offers LoS communication links with several receivers thanks to the availability of a large terrestrial footprint and the absence of obstructions in the horizon. Hence, HAPs can establish reliable direct links with a large number of terrestrial base stations [6, 7]. These advantages enable the data center-enabled HAP to offer a rich panoply of computing services that range
from supporting internet of things applications to intelligent transportation systems as depicted in Fig.1. Moreover, the data center-enabled HAP improves the dependability properties of these computing services by boosting the reliability of the terrestrial data center in urban areas and its maintainability in disaster areas. Besides, the flying data center hosted in the HAP guarantees the availability of the computing services in rural and remote areas in addition to under-connected areas such as maritime area.
In this work, we study the data center-enabled HAP architecture from different aspects related to energy-awareness, scalability and management agility, while defining the limits and conditions under which this solution can be leveraged for green computing. First, we investigate the major settings and trade-offs that impact the performance of the data center-enabled HAP in terms of energy saving. Then, we analyze the potential bottlenecks of this architecture and provide various deployment scenarios to circumvent the anticipated scalability issues. Finally, we advocate different management techniques to preserve the desired operation of the data center-enabled HAP over long-duration missions.
## II Energy-awareness of Data center-enabled HAP
In this section, we study the data center-enabled HAP from an energy-awareness perspective. Specifically, we analyze the conditions and trade-offs that impact the energy-saving benefits of the data center-enabled HAP. We conduct all the simulations based on realistic parameters detailed in Table I.
### _Energy Saving_
A breakdown of the energy consumed by a data center shows that the cooling infrastructure and the computational infrastructure are the main components, absorbing up to \(40\%\) and \(56\%\) of the data center energy respectively [8]. Therefore, the data center-enabled HAP can significantly reduce the energy consumed by a terrestrial data center, for two main reasons. First, the data center-enabled HAP saves the cooling energy thanks to the naturally low atmospheric temperature of the stratosphere (between \(-50^{\circ}\)C and \(-15^{\circ}\)C), where the HAP is located. This advantage eliminates the need for cooling units in the HAP, because the average temperature in the stratosphere is substantially lower than the temperature range recommended for data centers by the American Society of Heating, Refrigerating and Air-Conditioning Engineers (between \(18^{\circ}\)C and \(26^{\circ}\)C). Second, the data center-enabled HAP saves computational energy thanks to the harvested solar energy. Indeed, the HAP feeds the servers with the solar energy harvested during the daytime and stored in Lithium-Sulphur batteries during the nighttime, whereas the terrestrial data center constantly feeds its servers with electric energy supplied through the electrical grid.
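As a back-of-the-envelope illustration, assume the \(40\%/56\%\) split quoted above also holds at the margin (an assumption, since the true marginal cooling load depends on the facility); then every kWh of compute moved to the HAP also removes roughly \(0.7\) kWh of cooling from the grid:

```python
COOLING_SHARE = 0.40   # share of data-center energy used for cooling [8]
COMPUTE_SHARE = 0.56   # share used by the computational infrastructure [8]

def grid_energy_saved(offloaded_compute_kwh):
    """Grid energy no longer drawn when compute moves to the (solar-fed) HAP:
    the compute itself plus the cooling overhead it would have induced."""
    cooling_per_compute = COOLING_SHARE / COMPUTE_SHARE   # ~0.71 kWh/kWh
    return offloaded_compute_kwh * (1 + cooling_per_compute)

print(f"{grid_energy_saved(100):.0f} kWh saved per 100 kWh of compute offloaded")
```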
We note that the energy saved by the data center-enabled HAP depends on the HAP's location and the considered period of the year. For instance, the HAP can harvest more solar energy during June if it is located in the Northern hemisphere, since the solar radiation is stronger and the daylight lasts longer when the Northern hemisphere is tilted toward the Sun.
We also highlight the vital role of distributed learning in improving the energy efficiency of the data center-enabled HAP. Indeed, optimizing such complex, nonlinear systems, geographically distributed between the sky and the ground, with conventional optimization frameworks requires complicated heuristics and tedious calculations. However, a distributed learning algorithm that trains on the data of the airborne servers and the terrestrial servers yields a global energy-efficiency model that predicts the necessary policies with less complexity. Since learning approaches are generally energy-hungry, the adopted algorithms should be carefully selected to preserve the energy-efficiency of the data center-enabled HAP [9].

Fig. 1: Data Center Enabled HAP Use Cases

\begin{table}
\begin{tabular}{|c|c|c|} \hline Type & Parameter & Numerical Value \\ \hline \hline \multirow{5}{*}{\begin{tabular}{c} Cooling/ \\ Thermal \\ Inputs \\ \end{tabular} } & Supply Temperature & \(299.15\) K \\ & Server Initial Temperature & \(310\) K \\ & CPU Initial Temperature & \(318\) K \\ & Thermal Resistance & \(0.34\) K/W \\ & Server Heat Capacity & \(340\) J/K \\ \hline \multirow{6}{*}{\begin{tabular}{c} HAP \\ Inputs \\ \end{tabular} } & Maximum Payload & \(450\) kg \\ & Flying Server Payload & \(11\) kg \\ & Area of the Photovoltaic (PV) Surface & \(8000\) m\({}^{2}\) \\ & Efficiency of the PV & \(0.4\) \\ & Propeller efficiency & \(0.8\) \\ & Battery capacity & \(2\) kWh\(/\)kg \\ \hline \multirow{4}{*}{\begin{tabular}{c} Transmission \\ Inputs \\ \end{tabular} } & Antennas in TDC & \(2\) \\ & Antennas in HAP & \(16\) \\ & Carrier Frequency & \(31\) GHz \\ & Channel Bandwidth & \(100\) MHz \\ \hline \end{tabular}
\end{table} TABLE I: Simulation Settings
### _HAP Flying Condition_
Energy management of HAPs is crucial because these platforms are typically designed for long-duration missions. Therefore, we establish the HAP flying condition, which determines when the HAP can stay stable and keep flying. Hence, we study the energy budget of the HAP such that the daily harvested energy equals the consumed energy. The main energy source for the daytime operation of HAPs is the solar energy harvested through the photovoltaic system implemented on the surface of the inflatable part of the HAP. The harvested energy depends mainly on the latitude of the HAP and the considered day of the year [6, 7, 10]. HAPs also incorporate energy storage components, typically Lithium-Sulphur batteries or hydrogen fuel cells. These batteries support the nighttime operation of the HAP and are fed by the solar energy harvested during the daytime. Under the flying condition, the harvested energy should cover the energy consumed by the HAP, which comprises the energy consumed by its propulsion subsystem, the energy consumed by its payload subsystem and the wireless communication energy. The propulsion energy includes the energy required to localize and stabilize the HAP. The payload energy is the computational energy of the servers hosted in the HAP; it depends on the servers' characteristics (e.g., the desired utilization ratio and the service rate in million instructions per second) as well as on the workload characteristics (e.g., the arrival rate and the mean task length).
To offer qualitative insight into the HAP flying condition, we study the maximum utilization ratio per server under this condition based on realistic parameters of HAP and data centers [11, 12]. We plot the probability distribution function of the utilization ratio in Fig. 2 over the different days of the year. Indeed, the HAP flying condition determines the maximum computational energy consumed by the HAP-hosted servers and depends mainly on the servers' utilization ratio. We can assess the utilization ratio of a server by dividing the workload arrival rate by the server's service rate. We consider the server to be under-utilized if its utilization ratio is below \(70\%\), and it is over-utilized if its utilization ratio is close to \(100\%\).
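A minimal daily energy-budget sketch of the flying condition is given below. The PV area and efficiency are the Table I values; the mean irradiance, daylight duration, propulsion draw and per-server power are rough assumptions chosen only so that the constraint binds, not figures from our simulations.

```python
# Daily energy budget for the HAP flying condition (illustrative numbers).
PV_AREA, PV_EFF = 8000.0, 0.4            # m^2, dimensionless (Table I)
IRRADIANCE, DAYLIGHT = 250.0, 12.0       # W/m^2 mean over daylight, h (assumed)
P_PROPULSION, P_COMMS = 380.0, 5.0       # kW (assumed)
P_SERVER_PEAK = 0.5                      # kW per airborne server (assumed)
N_SERVERS = 40

e_harvest = PV_AREA * PV_EFF * IRRADIANCE * DAYLIGHT / 1e3      # kWh/day
e_fixed = (P_PROPULSION + P_COMMS) * 24.0                       # kWh/day
e_payload_budget = e_harvest - e_fixed                          # kWh/day left

# utilization ratio = arrival rate / service rate; the flying condition caps
# the mean utilization each server can sustain on solar power alone
u_max = e_payload_budget / (N_SERVERS * P_SERVER_PEAK * 24.0)
print(f"harvested {e_harvest:.0f} kWh/day, fixed draw {e_fixed:.0f} kWh/day")
print(f"max sustainable mean utilization per server: {min(u_max, 1.0):.0%}")
```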
The HAP flying condition is mainly impacted by the number of airborne servers hosted in the HAP. For instance, when 40 servers are deployed in the HAP, the effective usage of the airborne servers (i.e., utilization ratio between \(70\%\) and \(100\%\)) is possible during most of the year's days as depicted in Fig. 2. In this case, the collected solar energy covers exactly the propulsion energy and the payload energy and the servers are mostly used efficiently because the HAP is fully loaded with 40 servers. However, when only 35 servers are deployed in the HAP, the airborne servers need to be over-utilized (i.e., utilization ratio close to \(100\%\)) to fully use the harvested energy. Therefore, the harvested energy and the servers' capacities are effectively used when the number of airborne servers increases.
### _Trade-offs_
It is important to reduce the energy consumed by the terrestrial data center by maximizing the number of airborne servers and the computing workload offloaded to the HAP. However, adopting such techniques leads to a resource utilization dilemma, since energy consumption and resource utilization are strongly coupled. On the one hand, over-utilization of the available resources threatens the physical capabilities of the system and might yield dysfunctional servers or an unbalanced HAP. For instance, high central processing unit (CPU) utilization and/or memory utilization overloads the server and leads to unresponsive or frozen systems. Also, high resource utilization requires high harvested energy, which may violate the flying condition. On the other hand, under-utilization of the available resources might yield aging servers and substantial wasted energy, since idle servers can consume as much as \(60\%\) of the peak power [8]. Therefore, it is valuable to adopt adequate resource management techniques (e.g., consolidation, containerization) in the data center-enabled HAP to reduce the consumed energy without overlooking the physical capacities of the available resources.
Another trade-off relevant to energy-efficiency is related to service availability. Indeed, data centers have strict service level agreements (SLAs) that include, for instance, providing high availability (24/7), short delays and maximum security to the hosted services. To meet these requirements and avoid SLA violation penalties, resources are usually over-provisioned. Therefore, it is valuable to find the appropriate resource provisioning and workload scheduling for the airborne servers in the HAP that not only reduces energy consumption but also avoids performance degradation, especially in terms of service continuity, reliability and security.
Fig. 2: HAP Flying Condition over the Year’s Days
## III Scalability of Data Center-Enabled HAP
In this section, we study different deployment scenarios to overcome the scalability issues of the data center-enabled HAP architecture.
### _Bottlenecks in Data Center-Enabled HAP_
According to Cisco, global cloud data center traffic was estimated to reach 20.6 zettabytes per year in 2021, driven by data-intensive applications such as artificial intelligence and internet-of-everything applications [13]. This skyrocketing data consumes huge computing and communication resources and might create potential bottlenecks in the data center-enabled HAP. For instance, some airborne servers may become hotspots if the traffic offloaded to the HAP is not properly load-balanced. This issue leads to overloaded server buffers and substantially impacts the experienced queuing delay. Moreover, the wireless link between the HAP and the terrestrial data center is a typical bottleneck in the data center-enabled HAP. Indeed, a link outage happens if the workload offloaded from the terrestrial data center cannot be supported by the transmission link established to the HAP. This issue adds an outage delay on top of the transmission delay engendered by the distribution of servers between the HAP and the terrestrial data center. It also deteriorates the system reliability and requires supplementary re-transmissions to recover the lost workload. Therefore, the servers' capabilities and the link characteristics should be carefully considered when scheduling the workload to the HAP to achieve higher performance in terms of delay and reliability. Beyond conventional workload scheduling approaches, we need to develop ambitious solutions that improve the scalability of this integrated terrestrial-aerial computing system. In this regard, we propose three deployment scenarios for the data center-enabled HAP in the next section.
Fig. 4: Year’s Period Impact
Fig. 3: Data center-enabled HAP Deployment Scenarios
### _Multi-HAPs Constellations_
Multi-HAP constellations can coordinate and cooperate to support larger payloads while offering reliable services [7]. In our airborne computing scenario, a constellation of multiple HAPs can circumvent the potential bottlenecks by supporting larger payloads and offering a more prominent capacity [7]. However, the server payload should be carefully considered in this configuration given its impact on the flying condition. Specifically, a distributed air computing platform supported by different HAPs can overcome the computational overload of a single HAP. Moreover, the multiple links established between the different HAPs and the terrestrial data center increase the available capacity and substantially mitigate link outages. However, a proper constellation design is imperative to meet the growing data traffic requirements. Firstly, it is essential to determine the number of necessary airships and to optimize their locations while reducing the overlapped footprint, so as to maximize resource usage. For instance, we should investigate when it is more effective, from an energy perspective and from a performance perspective, to deploy one HAP loaded with some airborne servers alongside a terrestrial data center, and when to deploy all the servers of the terrestrial data center across different HAPs (Fig. 3). Secondly, it is important to determine the interconnection type between the different HAPs (e.g., optical link, radio link) while implementing the necessary spectrum sharing and interference management techniques, not only within the HAP constellation but also with adjacent networks such as satellite constellations and low altitude platform constellations. Moreover, the LoS conditions with the terrestrial data centers and between the HAPs lead to crucial interference issues that must be mitigated through proper multiplexing techniques that meet the energy-efficiency objectives of the data center-enabled HAP. It is indispensable to determine whether the control strategy of the multi-HAP-enabled data center should be centralized or distributed within the HAPs [14]. Specifically, in a centralized strategy (Fig. 3), the control is assigned to one of the HAPs, which has the necessary information about the remaining HAPs' capabilities and schedules the workload offloaded from the terrestrial data center accordingly.
In a distributed strategy, all the HAPs coordinate their communication and cooperate to manage the workload offloaded from the terrestrial data center. The centralized strategy is advantageous because the controller-HAP has a global view of the constellation and can achieve a near-optimal control solution, at the expense of a relatively long response time. The distributed strategy is advantageous because it circumvents the single-point-of-failure problem, at the expense of higher complexity and supplementary communication overheads. The centralized-control strategy can be improved through distributed learning approaches that optimize the control rules for response time and energy efficiency by offloading the appropriate amount of workload to the appropriate HAP; a greedy sketch of this option follows.
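A minimal sketch of the centralized option: the controller-HAP keeps a table of residual capacities and greedily places offloaded tasks, falling back to the ground when the constellation saturates. The data structures, capacities and task rates below are illustrative assumptions, not part of our control design.

```python
def schedule(tasks, haps):
    """Greedy centralized control (illustrative): assign each offloaded task
    to the constellation member with the most residual capacity; route to
    the terrestrial data center when no HAP can absorb the task.
    `haps` maps hap_id -> residual capacity in tasks/s (assumed bookkeeping)."""
    placement = {}
    for task_id, rate in tasks:                      # rate: tasks/s demanded
        best = max(haps, key=haps.get)
        if haps[best] >= rate:
            haps[best] -= rate
            placement[task_id] = best
        else:
            placement[task_id] = "terrestrial-DC"    # constellation saturated
    return placement

haps = {"hap-1": 30.0, "hap-2": 25.0, "hap-3": 18.0, "hap-4": 10.0}  # tasks/s
tasks = [("t1", 12.0), ("t2", 20.0), ("t3", 15.0), ("t4", 40.0)]
print(schedule(tasks, haps))   # t4 falls back to the ground in this example
```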
### _Deployment Scenarios_
To offer qualitative insight into the scalability gain of the data center-enabled HAP, we compare the system electricity cost, which represents the main source of operational expenditure in data centers, across the deployment scenarios shown in Fig. 3 [15]. In the first scenario, a terrestrial data center processes all the incoming workload. In the second scenario, some servers are placed in the HAP while most of the servers are hosted in the terrestrial data center. In the third scenario, 4 HAPs collaborate with the terrestrial data center to process the incoming workload. Throughout these simulations, the system electricity cost is evaluated for the maximum workload arrival rate determined by the HAP flying condition, across latitudes and days, as shown in Fig. 4. It is worth noting that the electricity cost of the terrestrial data center includes the cost of the computational energy and the cooling energy, whereas the electricity cost of the data center-enabled HAP system captures the energy saved thanks to the HAP while accounting for the wireless transmission energy of the communication link from the terrestrial base station to the HAP, in addition to the cost of the utilized terrestrial servers.
First, by carrying out our own simulations, we notice from Fig. 4 that deploying a flying data center in the HAP substantially reduces the electricity cost of a fully-terrestrial data center, with savings of \(\approx 12\%\) when one HAP is deployed and \(\approx 36\%\) when 4 HAPs are deployed, at the maximum workload arrival rate. Therefore, the electricity costs of terrestrial data centers can be cut down further at lower arrival rates thanks to HAP deployment, which reduces the task arrival rates at the terrestrial data centers. Moreover, we notice from Fig. 4 that the highest electricity costs occur at the beginning and the end of the year, when the least solar energy can be harvested in the Northern hemisphere.
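The relative savings can be reproduced with a toy cost model, sketched below; the electricity price, uplink energy overhead and per-scenario offload fractions are assumptions calibrated only so that the \(\approx 12\%\) and \(\approx 36\%\) figures come out, and do not reflect the full simulation.

```python
PRICE = 0.12               # $/kWh grid electricity (assumed)
COOLING_OVERHEAD = 0.71    # kWh of cooling per kWh of compute (40%/56% split)
E_TX = 0.05                # uplink energy per offloaded compute kWh (assumed)

def system_cost(e_compute, frac_offloaded):
    """Daily cost: grounded compute pays the cooling overhead; HAP compute is
    solar-fed, so only its uplink transmission energy is billed."""
    on_ground = e_compute * (1 - frac_offloaded) * (1 + COOLING_OVERHEAD)
    uplink = e_compute * frac_offloaded * E_TX
    return (on_ground + uplink) * PRICE

E = 1000.0                                     # kWh/day of compute demand
base = system_cost(E, 0.0)
for n_haps, frac in ((1, 0.124), (4, 0.371)):  # offload fractions (assumed)
    cost = system_cost(E, frac)
    print(f"{n_haps} HAP(s): {cost:6.1f} $/day, saving {1 - cost / base:.0%}")
```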
## IV Management Agility of Data Center-Enabled HAP
In this section, we provide different management techniques that entail workload management, network management and airship management to enable the desired performance of the data center-enabled HAP.
### _Workload Management_
Incoming workloads to data centers are time-varying and heterogeneous because they originate from different applications and serve various users. Therefore, different quality-of-service levels are required from the data center-enabled HAP. However, the characteristics of the servers hosted in the HAP differ from those hosted in the terrestrial data center: for instance, the HAP-hosted servers are lighter, have less advanced computing features, and are limited in number to keep the HAP balanced. Hence, agile workload management is imperative to improve the aggregate performance of the data center-enabled HAP in terms of reliability and delay.
To investigate the delay performance of the data center-enabled HAP, we consider the task-length characteristic of the workload and study its impact on the delay performance versus the workload arrival rate, as shown in Fig. 5. For a short task length workload, the transmission delay needed to send the workload to the servers hosted in the HAP, denoted by the round trip time (RTT), is significantly lower than the queuing delay experienced in the terrestrial data center. Therefore, the transmission delay can be tolerated because the workload spends substantial time in the terrestrial servers' queues before processing. However, this observation is no longer valid when the incoming workload is characterized by a long task length: we notice that the transmission delay exceeds the queuing delay for some arrival rates, as depicted in Fig. 5.
Another type of delay that needs careful consideration in the multi-HAP scenario is the relaying delay, i.e., the time required to relay the workload between HAPs. We notice that the relaying delay is lower than the queuing delay for a 3-HAP constellation (2 hops) with a long task length, or an 8-HAP constellation (7 hops) with a short task length workload, as shown in Fig. 5. Thus, we conclude that the workload can be relayed among up to 3 HAPs for long tasks and up to 8 HAPs for short tasks without impacting the total delay performance. Beyond 3 HAPs, however, the relaying delay exceeds the queuing delay if the workload is characterized by a long task length.
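The comparison in Fig. 5 can be sketched with a textbook M/M/1 model for the terrestrial queue, a distance-based RTT for the HAP link, and store-and-forward hops for relaying; every rate, size and link parameter below is an illustrative assumption.

```python
import numpy as np

C, ALT, LINK = 3e8, 20e3, 500e6          # m/s, m, b/s (link rate assumed)

def queuing_delay(lam, mu):
    """M/M/1 sojourn time W = 1/(mu - lambda), in seconds."""
    return np.inf if lam >= mu else 1.0 / (mu - lam)

def rtt(task_bits):
    """Up/down propagation plus transmission of the task itself."""
    return 2 * ALT / C + task_bits / LINK

for label, mu, bits in (("short", 200.0, 1e6), ("long", 20.0, 1e8)):
    for lam in (0.5 * mu, 0.9 * mu):     # task arrival rates (tasks/s)
        w = queuing_delay(lam, mu)
        t = rtt(bits)
        relay = t + 2 * bits / LINK      # + 2 inter-HAP hops (3-HAP chain)
        print(f"{label:5s} lam={lam:6.1f}: queue {w*1e3:7.1f} ms | "
              f"RTT {t*1e3:6.1f} ms | relay {relay*1e3:6.1f} ms")
# long tasks: the RTT (~200 ms) already exceeds the queuing delay at
# moderate loads, and each extra hop adds a full transmission time on top.
```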
Agile workload management is also fundamental for delay-tolerant applications used in far-away areas, such as disruption-tolerant networking. The data center-enabled HAP can be useful in this case when a HAP constellation is deployed over different regions to widen the coverage. Accordingly, the delay-tolerant workload is offloaded to the nearest HAP in the constellation so that the experienced delay is minimized.
Moreover, agile workload management that tracks renewable energy sources is gaining momentum as a way to address the energy concerns of data centers. However, balancing energy demand against renewable energy supply is challenging for two reasons. First, the incoming workloads to data centers are time-varying and heterogeneous. Second, renewable energy sources are intermittent and depend tightly on weather conditions and geographical locations.
### _Network Management_
The distribution of the computing resources between the HAP and the terrestrial data center engenders supplementary overhead to synchronize and replicate the data offloaded to the HAP. Therefore, effective network management strategies based on self-organizing approaches should be adopted in the data center-enabled HAP. On the one hand, the control strategy of the data center-enabled HAP's network is fundamental to pool and orchestrate the distributed resources with agility and flexibility and to provide them as disaggregated resources across the entire data center-enabled HAP and within a data center-enabled multi-HAP system. Different technologies can be used to implement disaggregation, such as composable data center infrastructure. This technology relies on the software-defined paradigm to enable the abstraction of the disaggregated resources and their composition/decomposition and management over the HAP (or the multi-HAP constellation) and the terrestrial data center.
On the other hand, higher bandwidth requirements are imposed on the communication links between the HAP and the terrestrial data center and between the HAPs, to accommodate the continuously increasing data traffic and its control information. In this regard, the terahertz and optical wireless communications of 6G networks will play a key role in handling the workload transferred from terrestrial data centers to the stratosphere. Besides, different transmission technologies can be used to enhance the link capacity and optimize its performance. For instance, massive multiple-input multiple-output (MIMO) offers high link robustness through aggressive spatial multiplexing and can easily be implemented on the HAP thanks to its large surface. Massive MIMO can be combined with various access techniques, such as orthogonal multiple access, to achieve high spectral efficiency between the HAP and the terrestrial data center using maximum ratio combining and zero-forcing precoding methods.
### _HAP Management_
Even though HAPs are designed for long-duration missions, regular maintenance is required to guarantee the reliable performance of the data center-enabled HAP. First, it is important to supervise whether the airship is operating under the prescribed service conditions, such as the supported stratospheric turbulence and load. It is also crucial to check the energy storage system of the airship and the electric engines, and to monitor the loss percentage of the gas responsible for lifting the airship.
Moreover, it is important to supervise the operation of the data center hosted in the HAP by monitoring security threats and non-malicious failures such as memory leakage and CPU overload. Therefore, secure and dependable mechanisms adapted to the capabilities of the HAP-hosted flying data center should be implemented and periodically updated to offer secure, recoverable and reliable computing services. For instance, blockchain technology can be deployed in the data center-enabled HAP to secure the data hosted in the HAP.
Besides, advanced redundancy mechanisms can be deployed across the different HAPs of a multi-HAP constellation to fight non-malicious failures. Interestingly, HAPs have the advantage of more agile maintenance than satellites, because returning to Earth is easier. However, it is valuable to weigh the maintenance costs, which contribute significantly to the operational expenditures, together with the takeoff/landing costs, against the performance of the data center-enabled HAP.

Fig. 5: Workload Impact on Delays of Data center-Enabled HAP
## V Conclusion and Future Directions
In this article, we shed light on the potential of the data center-enabled HAP to push the green computing frontier to the stratosphere. The main goal of this first-stage study is to investigate the energy-efficiency and the performance of the data center-enabled HAP. However, it is important to explore the remaining research challenges towards a comprehensive study. From a financial perspective, it is crucial to analyze the capital and operational expenditures of the data center-enabled HAP, because the supplementary costs of the HAP, the flying servers and the required regular maintenance may delay its adoption. From a technical perspective, it is important to consider the impact of the extreme stratospheric weather on the operation of the flying servers and on the supported mission duration. It is also essential to manage the workload in the data center-enabled HAP based on the application requirements and the originating regions. For instance, the multi-HAP scenario can efficiently handle the workload generated by disruption-tolerant networking in disconnected areas if the necessary network management strategies are adopted. Eventually, it is fundamental to apply artificial intelligence to workload prediction and network management to improve the performance of the data center-enabled HAP. However, energy-aware AI algorithms should be used to preserve the energy-efficiency of the data center-enabled HAP.
|
2310.16012 | Regularization estimates of the Landau-Coulomb diffusion | The Landau-Coulomb equation is an important model in plasma physics featuring
both nonlinear diffusion and reaction terms. In this manuscript we focus on the
diffusion operator within the equation by dropping the potentially nefarious
reaction term altogether. We show that the diffusion operator in the
Landau-Coulomb equation provides a much stronger L^1 to L^\infty rate of
regularization than its linear counterpart, the Laplace operator. The result is
made possible by a nonlinear functional inequality of Gressman, Krieger, and
Strain together with a De Giorgi iteration. This stronger regularization rate
illustrates the importance of the nonlinear nature of the diffusion in the
analysis of the Landau equation and raises the question of determining whether
this rate also happens for the Landau-Coulomb equation itself. | Rene Cabrera, Maria Gualdani, Nestor Guillen | 2023-10-24T17:09:56Z | http://arxiv.org/abs/2310.16012v3 | # Regularization estimates of the Landau-Coulomb diffusion
###### Abstract.
At the present moment, it remains uncertain whether the Landau-Coulomb equation possesses a unique smooth solution for arbitrarily large times. Alongside the diffusion term, this equation includes a reaction term that could rapidly transform nice configurations into singularities. In this manuscript we show that the diffusion operator in the Landau-Coulomb equation provides much stronger \(L^{1}\to L^{\infty}\) regularization effects than its linear counterpart, the Laplace operator. Our novel quantification suggests that when seeking a proof of global well-posedness, the nonlinear diffusion term could play a pivotal role.
MPG is supported by DMS-2019335 and would like to thank NCTS Mathematics Division Taipei for their kind hospitality.
found in [14, 2, 4] and related references. The first work that considers initial data in \(L^{\infty}\) is the one by Kim, Guo and Hwang [15]. Recently, Golding, Gualdani and Loher [6] encompassed the problem in all \(L^{p}\) spaces with \(p>\frac{3}{2}\), leaving the case \(p\leq\frac{3}{2}\) still unexplored. (iii) Conditional regularity. This line of research concerns the investigation of conditions that guarantee global well-posedness of solutions for arbitrarily large times. Silvestre [17] and Gualdani, Guillen [12] showed that if the function \(u(x,t)\in L^{p}(\mathbb{R}^{3})\) with \(p>\frac{3}{2}\) uniformly in time, then it is smooth. Recently, Alonso, Bagland, Desvillettes, and Lods [1] showed that if \(u(x,t)\in L^{q}(0,T;L^{p}(\mathbb{R}^{3}))\) for a certain range of \(q\) and \(p\), then it is automatically in \(L^{p}(\mathbb{R}^{3})\) with \(p>\frac{3}{2}\) and therefore smooth. Regarding conditional uniqueness, Fournier in [5] showed that solutions whose \(L^{\infty}\) norm is integrable in time are unique. Chern and Gualdani [3] showed that uniqueness holds in the class of highly integrable functions. (iv) Partial regularity. This line of research for the Landau equation started with Golse, Gualdani, Imbert and Vasseur [8]. In this work it is shown that, if singularities occur, they are concentrated in a time interval that has Hausdorff measure at most \(\frac{1}{2}\). Most recently, Golse, Imbert and Vasseur showed that the spatial and temporal domain for singularities to happen has Hausdorff measure \(1+\frac{5}{2}\) [9]. (v) Study of modified models that retain the same difficulties of the Landau equation but seem analytically more tractable. This line of research started with the work of Gressman, Krieger and Strain [10, 16] and their analysis of an isotropic version of (1.2),
\[u_{t}=a[u]\Delta u+\alpha u^{2},\quad\alpha>0. \tag{1.3}\]
In [10, 16] they show that (1.3) is globally well-posed if initial data are radially symmetric and monotonically decreasing and \(\alpha\in(0,\frac{74}{75})\). Later, Gualdani and Guillen [11] proved global well-posedness for \(\alpha=1\) also in the case when initial data are radially symmetric and monotonically decreasing. These works proved the conjecture that, unlike what happens in the semilinear heat or porous media equations, the nonlinear diffusion \(a[u]\Delta u\) is strong enough to overcome the reaction \(u^{2}\). Later, Gualdani and Guillen [13] showed that the isotropic Landau equation with less singular potentials (\(\gamma\in(-2.5,-2]\)) is also globally well-posed.
These findings lead us to the motivation behind the present manuscript. The proofs in groups (i), (iii) and (iv) primarily rely on the ellipticity estimates provided by the lower bound of \(A[u]\). Specifically, if the function \(u\) has mass, second moment and entropy bounded, the matrix \(A[u]\) is uniformly bounded from below by
\[A[u]\geq\frac{c}{1+|x|^{d}}\mathbb{Id},\quad x\in\mathbb{R}^{d},\]
where \(c\) only depends on the mass, second moment and entropy of \(u\). While a weighted Laplacian operator is analytically more tractable than the full nonlinear nonlocal diffusion \(\operatorname{div}(A[u]\nabla u)\), one might argue that by using only the lower bound on \(A[u]\) we discard an important element that could actually prevent singularities, following the intuition that when \(u\) is large so is the diffusion coefficient \(A[u]\), and this strength could prevent the formation of singularities. However, this intuition has been very difficult to apply in practice. Interestingly, all the global well-posedness results for general data mentioned above use the full power of the diffusion operator. These include the results in [10, 16], which are a consequence of a novel weighted Poincaré inequality
\[\int_{\mathbb{R}^{d}}u^{p+1}\;dx\leq\left(\frac{p+1}{p}\right)^{2}\int_{ \mathbb{R}^{d}}A[u]|\nabla u^{\frac{p}{2}}|^{2}\;dx,\]
the ones in [11], which use a geometric argument in which the coefficient \(a[u]\) plays a pivotal role, and lastly, the ones in [13], proven via new weighted Hardy inequalities of the form
\[\int_{\mathbb{R}^{d}}(u\ast|x|^{\gamma})u^{p}\;dx\leq c_{d,\gamma,p}\int_{ \mathbb{R}^{d}}(u\ast|x|^{\gamma+2})|\nabla u^{\frac{p}{2}}|^{2}\;dx,\quad \gamma>-d.\]
Lastly, it was already noted in [12] that the conditional regularity result for (1.2) shows a rate of regularization much stronger than what is usually expected for regular parabolic equations.
In this manuscript we provide a new and precise quantification of the regularization power of the Landau diffusion operator. Notably, this regularization exhibits a significantly faster rate than that achieved by the Laplacian operator.
**Theorem 1.1**.: _Let \(u(t,x)\) be a solution to (1.1) with initial data \(0\leq u_{in}\) that belongs to \(L^{1}_{m}(\mathbb{R}^{d})\) for some \(m>3d(d-2)\) and \(d\geq 3\). Then we have_
\[\sup_{t>0}\|u\|_{L^{\infty}(\mathbb{R}^{d})}\leq\frac{c}{t^{1+\varepsilon}}, \tag{1.4}\]
_for \(\varepsilon\) as small as one wishes and \(c>0\) a constant that only depends on the \(L^{1}_{m}\)-norm of the initial data._
Hereafter, we denote by \(L^{1}_{m}(\mathbb{R}^{d})\) the space of all \(L^{1}(\mathbb{R}^{d})\) functions such that
\[\int_{\mathbb{R}^{d}}|f|(1+|x|^{2})^{m/2}\;dx<+\infty.\]
The proof of Theorem 1.1 follows from two steps. In the first one we show a \(L^{1}\to L^{p}\) gain of integrability for \(u\), solution to (1.1). The second step includes a De Giorgi iteration that covers the \(L^{p}\to L^{\infty}\) jump. The combination of these two steps yields (1.4).
## 2. Some technical lemmas
We first recall well-known results on the bounds of the diffusion matrix \(A[u]\):
**Lemma 2.1**.: _There exist positive constants \(C_{0}\) and \(c_{0}\) depending on the dimension \(d\geq 3\) such that_
\[\|A[u]\|_{L^{\infty}(\mathbb{R}^{d})}\;\leq\;C_{0}\|u\|_{L^{p}(\mathbb{R}^{d} )}^{\frac{p(d-2)}{d(p-1)}}\|u\|_{L^{1}(\mathbb{R}^{d})}^{\frac{2p-d}{d(p-1)}},\quad p>\frac{d}{2},\]
_and_
\[\|divA[u]\|_{L^{\infty}(\mathbb{R}^{d})}\;\leq\;c_{0}\|u\|_{L^{p}(\mathbb{R} ^{d})}^{\frac{p(d-1)}{d(p-1)}}\|u\|_{L^{1}(\mathbb{R}^{d})}^{\frac{p-d}{d(p-1 )}},\quad p>d.\]
We will also use the following weighted Sobolev inequality: for \(f\) smooth enough and any \(1\leq s\leq\frac{2d}{d-2}\) we have
\[\left(\int_{\mathbb{R}^{d}}|f|^{\frac{2d}{d-2}}\langle x\rangle^{-3d}\;dx \right)^{\frac{d-2}{d}}\leq c_{1}\int_{\mathbb{R}^{d}}|\nabla f|^{2}\langle x \rangle^{-d}\;dx+c_{2}\left(\int_{\mathbb{R}^{d}}|f|^{s}\;dx\right)^{2/s}. \tag{2.1}\]
Here, \(\langle x\rangle:=(1+|x|^{2})^{1/2}\). The derivation of (2.1) for \(d=3\) can be found in [6]. We use (2.1) to prove the following interpolation inequality:
**Lemma 2.2**.: _Let \(p>1\) and \(q\) such that \(p+\frac{2}{d}<q<p\left(1+\frac{2}{d}\right)\). Let \(m\) be defined as_
\[m:=\frac{3d(d-2)(p-1)}{(d+2)p-dq}.\]
_For any \(g\) smooth function the following bound holds:_
\[\|g\|_{L^{q}(\mathbb{R}^{d})}^{q}\leq C\left(\|\langle\cdot\rangle^{-\frac{d}{2}}\nabla g^{\frac{p}{2}}\|_{L^{2}(\mathbb{R}^{d})}^{2}+\|g\|_{L^{p}(\mathbb{R}^{d})}^{p}\right)\|g\|_{L^{p}(\mathbb{R}^{d})}^{p\left(\frac{q-p-\frac{2}{d}}{p-1}\right)}\|g\langle\cdot\rangle^{m}\|_{L^{1}(\mathbb{R}^{d})}^{\frac{(d+2)p-dq}{d(p-1)}}. \tag{2.2}\]
Proof.: We first establish the following interpolation inequality
\[\|g\|_{L^{q}}^{q}\leq\|\langle\cdot\rangle^{-\frac{3(d-2)}{p}}\,g\|_{L^{\frac{dp}{d-2}}}^{p}\,\,\|g\|_{L^{p}}^{p\left(\frac{q-p-\frac{2}{d}}{p-1}\right)}\|g\,\langle\cdot\rangle^{m}\|_{L^{1}}^{\frac{(d+2)p-dq}{d(p-1)}}, \tag{2.3}\]
that holds for any \(p+\frac{2}{d}<q<p\left(1+\frac{2}{d}\right)\) and \(m=\frac{3d(d-2)(p-1)}{(d+2)p-dq}\). The lemma follows easily once we prove (2.3): use (2.1) with \(f=g^{\frac{p}{2}}\) and \(s=2\) to bound the first term on the right hand side of (2.3) and get
\[\|g\|_{L^{q}}^{q}\leq C\left(\|\langle\cdot\rangle^{-\frac{d}{2}}\nabla g^{\frac{p}{2}}\|_{L^{2}}^{2}+\|g\|_{L^{p}}^{p}\right)\|g\|_{L^{p}}^{p\left(\frac{q-p-\frac{2}{d}}{p-1}\right)}\|g\langle\cdot\rangle^{m}\|_{L^{1}}^{\frac{(d+2)p-dq}{d(p-1)}}.\]
Next, to show (2.3) we start with a weighted interpolation
\[\|g\|_{L^{q}(\mathbb{R}^{d})}^{q}\leq\left(\int_{\mathbb{R}^{d}}g^{\frac{dp}{ d-2}}\langle x\rangle^{-\frac{dp\alpha}{q(d-2)}}\ dx\right)^{\frac{\theta q(d-2)}{dp}}\left(\int_{\mathbb{R}^{d}}g^{r} \langle x\rangle^{\frac{r\alpha}{(1-\theta)q}}\ dx\right)^{\frac{(1-\theta)q}{ r}},\]
where \(\alpha\), \(\theta\) and \(r\) satisfy
\[\left\{\begin{array}{c}\frac{\theta q(d-2)}{dp}+\frac{(1-\theta)q}{r}=1,\\ \frac{\theta q(d-2)}{dp}=\frac{d-2}{d},\\ \frac{dp\alpha}{\theta q(d-2)}=3d.\end{array}\right.\]
The above system has solutions \(\alpha=3(d-2)\), \(\frac{(1-\theta)q}{r}=\frac{2}{d}\), \(r=\frac{d}{2}(q-p)\), and \(\theta=\frac{p}{q}\), which yield
\[\int g^{q}\ dx\leq\left(\int g^{\frac{dp}{d-2}}\langle\cdot\rangle^{-3d}\ dx\right)^{\frac{d-2}{d}}\left(\int g^{\frac{d}{2}(q-p)} \langle\cdot\rangle^{\frac{3d(d-2)}{2}}\ dx\right)^{\frac{2}{d}}.\]
Let us focus on the last term: once more, use Hölder's inequality and get
\[\begin{split}\left(\int g^{\frac{d}{2}(q-p)}\langle\cdot\rangle^{ \frac{3d(d-2)}{2}}dx\right)^{\frac{2}{d}}&=\left(\int g^{\alpha}g^{ \frac{d}{2}(q-p)-\alpha}\ \langle\cdot\rangle^{\frac{3d(d-2)}{2}}dx\right)^{\frac{2}{d}}\\ &\leq\left[\left(\int g^{p}dx\right)^{\frac{\alpha}{p}}\left(\int g ^{(\frac{d}{2}(q-p)-\alpha)\beta}\langle\cdot\rangle^{\frac{3d(d-2)}{2} \beta}dx\right)^{\frac{1}{\beta}}\right]^{\frac{2}{d}},\end{split} \tag{2.4}\]
where \(\beta:=\frac{p}{p-\alpha}\). We choose \(\alpha\) such that \((\frac{d}{2}(q-p)-\alpha)\beta=1\), which implies
\[\alpha=\left(\frac{d}{2}(q-p)-1\right)\frac{p}{p-1}.\]
Note that \(\alpha>0\) requires \(q>p+\frac{2}{d}\). Since also \(\beta\) has to be positive, and
\[\beta=\frac{2(p-1)}{(d+2)p-dq},\]
we require \(q<\left(\frac{d+2}{d}\right)p\). Substitution of \(\alpha\) and \(\beta\) in (2.4) yields
\[\left(\int g^{\frac{d}{2}(q-p)}\langle\cdot\rangle^{\frac{3d(d-2)}{2}}dx \right)^{\frac{2}{d}}\leq\left(\int g^{p}\;dx\right)^{\frac{q-p-\frac{2}{d}}{p- 1}}\left(\int g\langle x\rangle^{m}\right)^{\frac{(d+2)p-dq}{d(p-1)}}, \tag{2.5}\]
with \(m=\frac{3d(d-2)(p-1)}{(d+2)p-dq}\). This proves (2.3) and finishes the proof.
**Remark 2.3**.: The condition \(p+\frac{2}{d}<q<\frac{d+2}{d}p\) indicates that \(m:=\frac{3d(d-2)(p-1)}{(d+2)p-dq}\) is such that \(m>\frac{3d(d-2)}{2}\).
## 3. \(L^{1}\to L^{p}\) gain of integrability
The next theorem shows an \(L^{1}\to L^{p}\) gain of integrability for solutions to (1.1). The proof follows almost directly from the weighted Poincaré inequality
\[\int_{\mathbb{R}^{d}}u^{p+1}\;dx\leq\left(\frac{p+1}{p}\right)^{2}\int_{ \mathbb{R}^{d}}A[u]|\nabla u^{p/2}|^{2}\;dx, \tag{3.1}\]
first proven in [10]. The gain of integrability we obtain is much faster than that of the solution to the heat equation, which is of the order of \(\frac{1}{t^{\frac{d}{2}\left(1-\frac{1}{p}\right)}}\).
**Theorem 3.1**.: _Let \(u(t,x)\) be a solution to (1.1). For any \(p>1\) we have_
\[\sup_{t>0}\|u\|_{L^{p}(\mathbb{R}^{d})}\leq\frac{c}{t^{1-\frac{1}{p}}},\]
_with \(c\) a constant depending only on \(p\) and \(\|u_{\text{in}}\|_{L^{1}(\mathbb{R}^{d})}\)._
Proof.: Multiply (1.1) by \(\varphi:=u^{p-1}\) and integrate the resulting equation in \(\mathbb{R}^{d}\). Integration by parts yields
\[\partial_{t}\int u^{p}\;dx=-\frac{4(p-1)}{p}\int\left\langle A[u]\nabla u^{ \frac{p}{2}},\nabla u^{\frac{p}{2}}\right\rangle\;dx.\]
Inequality (3.1) implies
\[\partial_{t}\int u^{p}(x)\;dx\leq-\frac{4p(p-1)}{(p+1)^{2}}\int u^{p+1}(x)\;dx. \tag{3.2}\]
Combining the interpolation inequality
\[\|u\|_{L^{p}}\leq\|u\|_{L^{1}}^{\theta}\|u\|_{L^{p+1}}^{1-\theta},\quad\theta =\frac{1}{p^{2}},\]
with (3.2) yields
\[\partial_{t}\|u\|_{L^{p}}^{p}\leq-C\|u\|_{L^{p}}^{\frac{p^{2}}{p-1}},\]
with \(C=\frac{4p(p-1)}{(p+1)^{2}}\|u_{\text{in}}\|_{L^{1}}^{-\frac{1}{p-1}}\). Define \(y:=\|u\|_{L^{p}}^{p}\); the solution to the differential inequality
\[y^{\prime}\leq-Cy^{\frac{p}{p-1}},\]
has the bound
\[y\leq\frac{1}{\left(y_{0}^{-\frac{1}{p-1}}+\frac{C}{p-1}t\right)^{p-1}}.\]
This implies that
\[\sup_{t>0}\|u\|_{L^{p}}\leq\left(\frac{(p-1)}{C}\right)^{1-\frac{1}{p}}\frac{1} {t^{1-\frac{1}{p}}},\]
and this finishes the proof.
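For the reader's convenience, here is the computation behind the last two displays: the substitution \(z:=y^{-\frac{1}{p-1}}\) linearizes the differential inequality, since
\[z^{\prime}=-\frac{1}{p-1}\,y^{-\frac{p}{p-1}}\,y^{\prime}\geq\frac{C}{p-1},\qquad\text{hence}\qquad z(t)\geq y_{0}^{-\frac{1}{p-1}}+\frac{C}{p-1}\,t,\]
and \(y=z^{-(p-1)}\) together with \(y_{0}^{-\frac{1}{p-1}}\geq 0\) yields \(y(t)\leq\left(\frac{p-1}{Ct}\right)^{p-1}\); taking \(p\)-th roots gives the stated rate.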
We also have the following moment estimate:
**Lemma 3.2**.: _Let \(u(t,x)\) be a smooth solution to (1.1) in the time interval \([0,T]\) with initial datum \(u_{in}\in L^{1}_{m}(\mathbb{R}^{d})\) for some \(m\geq 2\). Then there exists a constant \(c\) that only depends on \(T\) and the \(L^{1}_{m}\)-norm of \(u_{in}\) such that_
\[\sup_{t\in[0,T]}\|u(t,x)\|_{L^{1}_{m}(\mathbb{R}^{d})}\leq c.\]
Proof.: We start with \(m=2\). Testing with \(\phi=(1+|x|^{2})\) and integrating by parts yield
\[\partial_{t}\int u(1+|x|^{2})dx \leq 4\int u\;|x||\nabla A[u]|\;dx+4d\int u\;\mathrm{Tr}(A[u])\;dx\] \[=:\mathcal{J}_{1}+\mathcal{J}_{2}.\]
Let us first estimate \(\mathcal{J}_{2}\). Applying the first estimate of Lemma 2.1 to \(\mathcal{J}_{2}\), we get
\[\mathcal{J}_{2}\leq C_{0}\|u\|_{L^{p}(\mathbb{R}^{d})}^{\frac{p(d-2)}{(p-1)d}} \|u\|_{L^{1}(\mathbb{R}^{d})}^{\frac{2p-d}{d(p-1)}+1}. \tag{3.3}\]
Then an application of Theorem 3.1 to the \(L^{p}\)-norm in (3.3) gives
\[\mathcal{J}_{2}\lesssim\frac{1}{t^{1-\frac{2}{d}}}\|u\|_{L^{1}(\mathbb{R}^{d}) }^{\frac{2p-d}{d(p-1)}+1}. \tag{3.4}\]
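Explicitly, the power of \(t\) in (3.4) comes from Theorem 3.1:
\[\|u\|_{L^{p}(\mathbb{R}^{d})}^{\frac{p(d-2)}{d(p-1)}}\lesssim t^{-\left(1-\frac{1}{p}\right)\frac{p(d-2)}{d(p-1)}}=t^{-\frac{p-1}{p}\cdot\frac{p(d-2)}{d(p-1)}}=t^{-\left(1-\frac{2}{d}\right)}.\]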
Next, we estimate \(\mathcal{J}_{1}\). We have
\[\mathcal{J}_{1}\leq\|\nabla A[u]\|_{L^{\infty}}\int u\;(1+|x|^{2})\;dx.\]
Apply once more Lemma 2.1 and Theorem 3.1 to get
\[\mathcal{J}_{1}\lesssim\frac{1}{t^{1-\frac{1}{d}}}\int u(1+|x|^{2})\;dx.\]
Gathering the estimates of \(\mathcal{J}_{1}\) and \(\mathcal{J}_{2}\) together, we acquire the bound
\[\partial_{t}\int u(1+|x|^{2})dx\leq\mathcal{J}_{1}+\mathcal{J}_{2}\leq\frac{c}{t^{\frac{d-1}{d}}}\int u(1+|x|^{2})\;dx+\frac{C}{t^{\frac{d-2}{d}}},\]
where \(c:=\|u\|_{L^{1}(\mathbb{R}^{d})}^{\frac{p-d}{d(p-1)}}\) and \(C:=\|u\|_{L^{1}}^{\frac{2p-d}{d(p-1)}+1}\). The last inequality is equivalent to the differential inequality
\[y^{\prime}\leq\frac{c}{t^{\frac{d-1}{d}}}y+\frac{C}{t^{\frac{d-2}{d}}}, \tag{3.5}\]
which, after multiplying by \(\mu(s)=e^{-ds^{\frac{1}{d}}}\), reduces to
\[(y\mu(s))^{\prime}\leq\mu(s)s^{-\frac{d-2}{d}},\]
which integrates to the bound
\[y(t)\leq e^{dt^{1/d}}\left\{\int_{0}^{t}e^{-ds^{1/d}}s^{\frac{2-d}{d}}\;ds+y_{0}\right\}.\]
Applying the same argument iteratively, we can get the estimate for any \(m>2\).
**Remark 3.3**.: Thanks to the bound on the second moments from Lemma 3.2, the conservation of mass and the decay of entropy, the matrix \(A[u]\) satisfies the following ellipticity condition:
\[A[u](x,t)\geq\frac{c(T)}{\langle x\rangle^{d}}\quad\text{for any}\quad x\in \mathbb{R}^{d},\;t\in[0,T]. \tag{3.6}\]
## 4. \(L^{1}\to L^{\infty}\) gain of integrability
In this section we first show the \(L^{p}\to L^{\infty}\) gain of integrability for solutions to (1.1). This, combined with the estimate of Theorem 3.1, will conclude the proof of Theorem 1.1. We follow a modification of the De Giorgi iteration previously used in [6] and [7]. We start with a technical lemma. Let \(M>0\) and \(t>0\); for each \(k\in\mathbb{N}\), define
\[C_{k}:=M(1-2^{-k}),\quad T_{k}:=\frac{t}{2}\left(1-\frac{1}{2^{k}}\right).\]
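Note for later use that consecutive cutoffs and times differ by geometric increments,
\[C_{k}-C_{k-1}=\frac{M}{2^{k}},\qquad T_{k+1}-T_{k}=\frac{t}{2^{k+2}},\]
which is where the factors \(\left(2^{k}/M\right)^{1+\gamma}\) of Lemma 4.1 and \(2^{k+2}/t\) in the proof of Lemma 4.2 originate.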
We denote by \((u-c)_{+}\) the maximum of \(0\) and \(u-c\).
**Lemma 4.1**.: _Let \(p>d/2\) and let \(\gamma>0\) be defined as_
\[\gamma=-1+\frac{2}{d}p-\frac{3}{m}(d-2)(p-1),\]
_and \(m\geq 2\) such that_
\[m>\frac{3d(d-2)}{2}\max\left\{1,\frac{p-1}{p-\frac{d}{2}}\right\}.\]
_For each \(k\geq 1\) we have the bound_
\[\int_{\mathbb{R}^{d}}(u-C_{k})_{+}^{p}dx\leq\left(\frac{c_{0}2^{k }}{M}\right)^{1+\gamma} \left(\|\langle\cdot\rangle^{-d/2}\nabla(u-C_{k-1})_{+}^{\frac{p }{2}}\|_{L^{2}}^{2}+\|(u-C_{k-1})_{+}\|_{L^{p}}^{p}\right)\] \[\cdot\|(u-C_{k-1})_{+}\|_{L^{p}}^{p\left(\frac{2}{d}-\frac{3}{m} (d-2)\right)}\|(u-C_{k-1})_{+}\|_{L^{1}_{m}}^{\frac{3}{m}(d-2)},\]
_with \(c_{0}\) dimensionless constant._
Proof.: Observe that \(0\leq C_{k-1}<C_{k}\). From this we have
\[0\leq(u-C_{k})_{+}\leq(u-C_{k-1})_{+}. \tag{4.1}\]
Moreover \(u-C_{k-1}=u-C_{k}+C_{k}-C_{k-1}\). Dividing by \(C_{k}-C_{k-1}\) we acquire
\[\frac{u-C_{k-1}}{C_{k}-C_{k-1}}=\frac{u-C_{k}}{C_{k}-C_{k-1}}+1\geq 1.\]
This tells us that
\[\mathds{1}_{\{u-C_{k}\geq 0\}}\leq\frac{(u-C_{k-1})_{+}}{C_{k}-C_{k-1}}.\]
Hence, for any \(a>0\) we have
\[\mathds{1}_{\{u-C_{k}\geq 0\}}\leq\left(\frac{(u-C_{k-1})_{+}}{C_{k}-C_{k-1}} \right)^{a}.\]
Multiplying the above inequality by \((u-C_{k})_{+}\) and using (4.1), we deduce
\[(u-C_{k})_{+}\leq\frac{(u-C_{k-1})_{+}^{1+a}}{(C_{k}-C_{k-1})^{a}}\quad\text{ for any}\quad a>0. \tag{4.2}\]
Choose \(a=\frac{1+\gamma}{p}\) for some \(\gamma>0\) to be defined later. Inequality (4.2) implies
\[\int_{\mathbb{R}^{d}}(u-C_{k})_{+}^{p}\;dx\leq\left(\frac{2^{k}}{M}\right)^{1+ \gamma}\int_{\mathbb{R}^{d}}(u-C_{k-1})_{+}^{p+1+\gamma}\;dx.\]
Lemma 2.2 with \(q=1+\gamma+p\) yields
\[\int_{\mathbb{R}^{d}}(u-C_{k})_{+}^{p}\;dx\leq c_{0}\left(\frac{2^{k}}{M}\right)^{1+\gamma}\left(\left\|\nabla(u-C_{k-1})_{+}^{\frac{p}{2}}\langle\cdot\rangle^{-\frac{d}{2}}\right\|_{L^{2}}^{2}+\|(u-C_{k-1})_{+}\|_{L^{p}}^{p}\right)\cdot\|(u-C_{k-1})_{+}\|_{L^{p}}^{p\left(\frac{\frac{d-2}{d}+\gamma}{p-1}\right)}\|(u-C_{k-1})_{+}\|_{L^{1}_{m}}^{\frac{2p-d-d\gamma}{d(p-1)}},\]
with \(c_{0}\) dimensionless constant and \(m=\frac{3d(d-2)(p-1)}{(d+2)p-d(1+\gamma+p)}\). Next, we express \(\gamma\) in terms of \(m\), and get \(\gamma=\frac{2}{d}p-1-\frac{3(d-2)(p-1)}{m}\), which implies, after substitution in the norms,
\[\int_{\mathbb{R}^{d}}(u-C_{k})_{+}^{p}\;dx\leq c_{0}\left(\frac{ 2^{k}}{M}\right)^{1+\gamma} \left(\left\|\langle\cdot\rangle^{-d/2}\nabla(u-C_{k-1})_{+}^{ \frac{p}{2}}\right\|_{L^{2}}^{2}+\|(u-C_{k-1})_{+}\|_{L^{p}}^{p}\right)\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\cdot\|(u-C_{k-1 })_{+}\|_{L^{p}}^{p\left(\frac{2}{d}-\frac{3}{m}(d-2)\right)}\|(u-C_{k-1})_{+} \|_{L^{1}_{m}}^{\frac{3}{m}(d-2)}.\]
The constraint \(\gamma>0\) implies \(m>\frac{3d(d-2)}{2}\frac{(p-1)}{p-\frac{d}{2}}\). The proof of the lemma is complete after recalling Remark 2.3.
We are now ready to start the De Giorgi iteration. For any \(k\geq 1\) let us define the energy \(\mathcal{E}_{k}\) as
\[\mathcal{E}_{k}(T_{k+1},t):=\sup_{\tau\in(T_{k+1},t)}\int(u-C_{k})_{+}^{p}(\tau,x)dx+C(p)\int_{T_{k+1}}^{t}\int\langle x\rangle^{-d}\left|\nabla(u-C_{k})_{ +}^{\frac{p}{2}}\right|^{2}\;dxd\tau,\]
and \(\mathcal{E}_{0}\) as
\[\mathcal{E}_{0}:=\sup_{(t/4,t)}\int_{\mathbb{R}^{d}}u^{p}\;dx+C(p)\int_{t/4}^{ t}\int_{\mathbb{R}^{d}}\langle x\rangle^{-d}\left|\nabla u^{\frac{p}{2}} \right|^{2}\;dx\;d\tau. \tag{4.3}\]
**Lemma 4.2**.: _Let \(p>\frac{d}{2}\), \(\gamma=-1+\frac{2}{d}p-\frac{3(d-2)(p-1)}{m}\) and \(m>\frac{3d(d-2)}{2}\max\left\{1,\frac{p-1}{p-\frac{d}{2}}\right\}\). Then for all \(k\geq 1\) we have_
\[\mathcal{E}_{k}(T_{k+1},t)\lesssim\frac{1}{tM^{1+\gamma}}\;\mathcal{E}_{k-1}( T_{k},t)^{\left(1+\frac{2}{d}-\frac{3}{m}(d-2)\right)}.\]
Proof.: We test (1.1) with \((u-C_{k})_{+}^{p-1}\), integrate in \(\mathbb{R}^{d}\times(s,\tau)\) with \(0\leq T_{k}\leq s\leq T_{k+1}\leq\tau\). After averaging on \(s\) between \(T_{k}\) and \(T_{k+1}\), and taking the supremum of \(\tau\) in \((T_{k+1},t)\) we get
\[\sup_{\tau\in(T_{k+1},t)}\int(u-C_{k})_{+}^{p}(\tau,x)dx +C(p)\int_{T_{k+1}}^{t}\int A[u]\left|\nabla(u-C_{k})_{+}^{\frac{p }{2}}\right|^{2}dxds\] \[\leq\frac{1}{T_{k+1}-T_{k}}\int_{T_{k}}^{t}\int(u-C_{k})_{+}^{p} dx\;ds,\]
which can be also written as
\[\mathcal{E}_{k}(T_{k+1},t)\leq\frac{1}{T_{k+1}-T_{k}}\int_{T_{k}}^{t}\int(u-C_{k})_{+}^{p}dx\;ds. \tag{4.4}\]
Since \((T_{k+1}-T_{k})=\frac{t}{2^{k+2}}\), we apply the integral bound of Lemma 4.1 to get
\[\mathcal{E}_{k} \lesssim\frac{2^{k+1}}{t}\left(\frac{2^{k}}{M}\right)^{1+\gamma }\sup_{(T_{k},t)}\|(u-C_{k-1})_{+}\|_{L_{m}^{1}}^{\frac{3(d-2)}{m}}\sup_{(T_{ k},t)}\|(u-C_{k-1})_{+}\|_{L^{p}}^{p\left(\frac{2}{d}-\frac{3}{m}(d-2)\right)}\] \[\quad\cdot\left[\sup_{(T_{k},t)}\|(u-C_{k-1})_{+}\|_{L^{p}}^{p}+ \int_{T_{k}}^{t}\|\langle\cdot\rangle^{-d/2}\nabla(u-C_{k-1})_{+}^{\frac{p}{2} }\|_{L^{2}}^{2}\;ds\right]\] \[\leq\frac{2^{k+1}}{t}\left(\frac{2^{k}}{M}\right)^{1+\gamma}\sup_ {(T_{k},t)}\|(u-C_{k-1})_{+}\|_{L_{m}^{1}}^{\frac{3(d-2)}{m}}\] \[\quad\cdot\left[\sup_{(T_{k},t)}\|(u-C_{k-1})_{+}\|_{L^{p}}^{p}+ \int_{T_{k}}^{t}\|\langle\cdot\rangle^{-d/2}\nabla(u-C_{k-1})_{+}^{\frac{p}{2} }\|_{L^{2}}^{2}\;ds\right]^{1+\frac{2}{d}-\frac{3}{m}(d-2)}\] \[=\frac{C^{k}C_{0}}{tM^{1+\gamma}}\mathcal{E}_{k-1}^{\left(1+\frac {2}{d}-\frac{3}{m}(d-2)\right)},\]
with \(C_{0}:=\sup_{(0,T)}\|u\|_{L_{m}^{1}}^{\frac{3}{m}(d-2)}\).
For simplicity of notation, we define \(\beta_{1}:=\frac{2}{d}-\frac{3}{m}(d-2)\). The inequality of the previous lemma shows that, iteratively,
\[\mathcal{E}_{k}\lesssim\left(\frac{c_{0}}{t^{\frac{1}{\beta_{1}}}M^{\frac{(1+ \gamma)}{\beta_{1}}}}\mathcal{E}_{0}\right)^{\left(1+\beta_{1}\right)^{k}}. \tag{4.5}\]
Recall the definition of \(\mathcal{E}_{0}\):
\[\mathcal{E}_{0}=\sup_{(t/4,t)}\int u^{p}(s,x)\;dx+C(p)\int_{t/4}^{t}\int A[u] \left|\nabla u^{\frac{p}{2}}\right|^{2}\;dx\;ds.\]
Since
\[\mathcal{E}_{0}\leq\sup_{(0,t)}\int u^{p}(s,x)\;dx,\]
Theorem 3.1 implies
\[\mathcal{E}_{0}\leq\frac{c}{t^{p-1}},\]
where \(c\) only depends on \(p\) and on the \(L^{1}\)-norm of the initial data. Passing to the limit \(k\to+\infty\) in (4.5) we obtain
\[u\leq M,\]
provided
\[M\lesssim\frac{c_{0}}{t^{1+\varepsilon}},\]
with \(\varepsilon=\frac{1-\frac{2}{d}}{1+\gamma}\) and \(c_{0}\) only dependent on the \(L^{1}_{m}\)-norm of the initial data. Note that \(\varepsilon>0\) can be as small as one wishes by choosing \(p\) arbitrarily large. To see this, first note that if \(p\) is greater than \(d-1\) then \(1<\max\left\{1,\frac{p-1}{p-\frac{d}{2}}\right\}\leq 2\). Then, taking \(m>3d(d-2)\), we get
\[\varepsilon=\frac{1-\frac{2}{d}}{1+\gamma}=\frac{1-\frac{2}{d}}{\frac{2p}{d}- \frac{3(d-2)(p-1)}{m}}\leq\frac{d-2}{p+1}.\]
This finishes the proof of Theorem 1.1.
|
2304.14942 | The Emotions of the Crowd: Learning Image Sentiment from Tweets via
Cross-modal Distillation | Trends and opinion mining in social media increasingly focus on novel
interactions involving visual media, like images and short videos, in addition
to text. In this work, we tackle the problem of visual sentiment analysis of
social media images -- specifically, the prediction of image sentiment
polarity. While previous work relied on manually labeled training sets, we
propose an automated approach for building sentiment polarity classifiers based
on a cross-modal distillation paradigm; starting from scraped multimodal (text
+ images) data, we train a student model on the visual modality based on the
outputs of a textual teacher model that analyses the sentiment of the
corresponding textual modality. We applied our method to randomly collected
images crawled from Twitter over three months and produced, after automatic
cleaning, a weakly-labeled dataset of $\sim$1.5 million images. Despite
exploiting noisy labeled samples, our training pipeline produces classifiers
showing strong generalization capabilities and outperforming the current state
of the art on five manually labeled benchmarks for image sentiment polarity
prediction. | Alessio Serra, Fabio Carrara, Maurizio Tesconi, Fabrizio Falchi | 2023-04-28T15:56:02Z | http://arxiv.org/abs/2304.14942v1 | # The Emotions of the Crowd: Learning Image Sentiment from Tweets via Cross-modal Distillation
###### Abstract
Trends and opinion mining in social media increasingly focus on novel interactions involving visual media, like images and short videos, in addition to text. In this work, we tackle the problem of visual sentiment analysis of social media images - specifically, the prediction of image sentiment polarity. While previous work relied on manually labeled training sets, we propose an automated approach for building sentiment polarity classifiers based on a cross-modal distillation paradigm; starting from scraped multimodal (text + images) data, we train a student model on the visual modality based on the outputs of a textual teacher model that analyses the sentiment of the corresponding textual modality. We applied our method to randomly collected images crawled from Twitter over three months and produced, after automatic cleaning, a weakly-labeled dataset of \(\sim\)1.5 million images. Despite exploiting noisy labeled samples, our training pipeline produces classifiers showing strong generalization capabilities and outperforming the current state of the art on five manually labeled benchmarks for image sentiment polarity prediction.
## 1 Introduction
Mining trends and opinions from social networks provides crucial information to help make strategic decisions in various fields. Twitter data, for example, have been used to explain and predict social issues and user opinions on product brands and sales [1, 14], patient reactions to medicines [1], stock market movements [13], political performances and election outcomes [1, 2, 1] and many others. While most research in sentiment analysis from social-network data focused on text, online interactions increasingly involve visual media such as pictures, edited images, and short videos, putting more interest in visual sentiment analysis (VSA). The main issue of state-of-the-art approaches for VSA is their strongly supervised nature: manually labeling images for VSA is costly due to the subjectivity of image interpretation and the viewer's emotional response, thus requiring multiple labelers and limiting the dataset scale to a few thousand samples [15, 16]. Moreover, natural distribution shifts occurring in opinions and trends would require repeating the labeling process periodically, which is unfeasible.
This paper proposes an automated approach to train models for visual sentiment analysis. Specifically, we tackle the problem of predicting the average polarity of sentiments an image evokes to its viewers, usually coarsely estimated as being 'positive', 'neutral' or 'negative'. We propose an approach based on a cross-modal distillation method; a pretrained textual sentiment predictor, acting as the teacher model, is distilled into a visual sentiment predictor using text-image pairs streamed from random-sampled multimodal posts as training samples. The proposed approach is not fully unsupervised but rather based on distant supervision [12], as we assume a pretrained textual teacher model that transfers knowledge to the student visual predictor. However, the availability of self-supervised, easily fine-tunable language models makes it possible to harness the available resources for textual sentiment analysis and transfer their knowledge to the visual domain without additional labeling costs. Moreover, our approach is employable in a continual learning setup, especially if employed with diachronic language models such as TimeLM [1], providing an effective and cheap way to keep sentiment analysis tools up to date.
We apply our approach to random-sampled Twitter posts in three months (Apr-Jun 2022) and show that the obtained visual models outperform the current state of the art in five manually-annotated benchmarks for image sentiment polarity prediction. We also contribute by releasing the code, trained models, and the set of collected and preprocessed images (\(\sim\)1.5M) used in the experimental phase.
In summary, we contribute by
* proposing a cross-modal distillation approach to train image sentiment polarity predictors without relying on manually labeled image datasets,
* testing the obtained models on five manually-labeled benchmarks and outperforming the current state of the art in five of them, and
* publicly releasing the code, the trained models, and the
collected data (\(\sim\)3.7M images) used in our experiments.
## 2 Related Work
Our main focus is purely visual sentiment analysis, where a judgment can be expressed by looking only at image pixels. Other related tasks are also tackled, such as the well-explored textual-based sentiment analysis [11, 12, 10, 13] and directions also exploiting additional inputs or modalities [21, 10, 14, 15, 16] or focusing on aspects different from sentiment, like virality or aesthetics [1, 17, 18].
**Visual Sentiment Analysis (VSA).** Seminal approaches to visual sentiment analysis, mainly from 2010, were based on extracting handcrafted low-level features from input images based on color, texture, composition, and content characteristics. For example, [11] merged a SIFT descriptor, Gabor texture, and HSV color histogram to obtain a global feature vector, and [12] extracted color, texture and harmonious composition from images. Subsequent approaches leveraged mid-level features of images, such as the one proposed by [1]; they built a visual sentiment ontology consisting of 3'000 adjective-noun pairs that express strong sentiment values and are related to an emotion, represented using the well-known "Plutchik's Wheel of Emotions" psychological model [15]. The adjective-noun pairs were used as keywords to get images from Flickr, which were then leveraged to train an individual detector for each member of the ontology. Subsequently, only reasonably performing detectors were selected to compose SentiBank, their proposed framework capable of extracting mid-level characteristics from images, which can be used as input for a sentiment classifier. However, research has recently been geared towards deep learning models, which can automatically learn how to extract high-level visual characteristics from raw input data. Most methods in this category rely on supervised transfer learning, exploiting various convolutional models like GoogleNet [13], AlexNet-inspired architectures [1], or custom architectures [20]. One of the most recent approaches is [21], which combines global and local features of the image using both a CNN and a saliency detector; in particular, salient sub-images are detected, and then an optimized VGGNet makes a prediction on both the entire image and the sub-images. Finally, the predictions are combined by a weighted sum to detect a positive, neutral, or negative sentiment polarity.
**Dataset for VSA.** Even for VSA, models are often only as good as their training data. The most used approach to build a dataset for a VSA task relies on manual annotation, since it allows getting reliable, strong labels. However, it is also costly due to the subjectivity of the sentiment we attach to samples, thus requiring more than one annotator to incorporate multiple perspectives into the labeling. To this end, many researchers [20, 21, 22, 23] relied on crowdsourcing services, i.e., Amazon Mechanical Turk (AMT), to involve multiple labelers and ensure strong labels. In addition, [20, 21] select labelers based on their ability to classify feelings using a qualification test, ensuring cleaner labels. However, scaling datasets beyond the order of tens of thousands of samples still requires a non-negligible effort.
**Weak supervision.** Adopting weak supervision allows us to obtain much larger datasets at the cost of lowering the labeling quality and introducing label noise. In the visual domain, this technique recently gained more and more attention. For example, [20] exploited a complex mixture of raw web signals, connections between web pages, and user feedback to generate a huge image classification dataset, and [12] relied on hashtag prediction on social media images. For VSA, there are just a few examples. The approach of [21] assigns weak sentiment labels to images coming from Flickr based on image tags. Still, it is susceptible to noisy or missing tags and is biased by the choice of tags; similarly, [12] assigns weak labels by analyzing the text content of tweets. Our approach follows this direction by crawling randomly sampled multimodal data from social media streams, but the supervision signal is obtained by distilling a textual sentiment predictor into a visual model.
## 3 Methodology
As done in previous work, we formulate image sentiment polarity prediction as an \(N\)-way image classification problem. Our objective is to learn an image classifier that assigns the correct sentiment label out of \(N\) possible labels to an input image without resorting to supervised training and, thus, an expensive manual annotation of images. To do so, we propose an automatic approach organized in two steps: a) _data collection, filtering, and deduplication_ and b) _cross-modal distillation_. Figure 1 schematizes our proposal. We further describe each step in the following subsections.
### Data Collection, Filtering, and Deduplication
This first step aims to construct a data stream to fuel the subsequent learning step. We crawl data from a social network of interest by collecting random posts in a specified period. In this work, we demonstrate our proposal on Twitter, but in principle, any platform providing access (free or paid) to large volumes of randomly-sampled posts can be used.
To subsequently apply a cross-modal paradigm, we are interested in filtering out samples having only a single modality in favor of ones containing both text and one or more images. We apply the same filtering steps applied in [12] and keep only tweets that a) have a text comprised of 5 or more words in the English language, b) have at least one image, and c) are not retweets. We thus obtain a set \(S=\{s_{j}\}_{j=1}^{M}\) of text-image pairs \(s_{j}=\left(t_{j},i_{j}\right),t_{j}\in T\,,i_{j}\in I\), where \(T\) and \(I\) respectively indicate the space of texts and images. We indicate with \(M\) the number of samples at the end of the collection campaign, but in an online learning configuration, \(S\) constitutes an infinite data stream.
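As an illustration (our sketch, not the authors' code), the three filtering rules translate into a few lines; the `Tweet` record and the trivial `is_english` placeholder are assumptions standing in for the crawler's actual output and for an off-the-shelf language detector.

```python
from typing import NamedTuple, List, Iterator, Tuple

class Tweet(NamedTuple):      # hypothetical minimal tweet record
    text: str
    media: List[str]          # image URLs or paths
    is_retweet: bool

def is_english(text: str) -> bool:
    # placeholder: in practice, delegate to a real language detector
    return text.isascii()

def keep_tweet(tw: Tweet) -> bool:
    """Filtering rules: (a) >= 5 English words, (b) >= 1 image, (c) not a retweet."""
    return (len(tw.text.split()) >= 5 and is_english(tw.text)
            and len(tw.media) >= 1 and not tw.is_retweet)

def pairs(tweets: Iterator[Tweet]) -> Iterator[Tuple[str, str]]:
    """Yield the text-image pairs s_j = (t_j, i_j) forming the stream S."""
    for tw in tweets:
        if keep_tweet(tw):
            for img in tw.media:
                yield tw.text, img
```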
Due to the virality of some contents, a non-negligible part of posts and corresponding images crawled end up duplicates or near-duplicate images. To make the process leaner and obtain a more varied stream of visual data, we drop samples having the same or nearly-same content in the visual medium. Specifically, we assume two samples \(s_{1}=(t_{1},i_{1})\) and \(s_{2}=(t_{2},i_{2})\) are duplicates if \(\cos(\Phi(i_{1}),\Phi(i_{2}))>\tau\), where \(\Phi(i)\in\mathbb{R}^{n}\) is a feature vector extracted from the image \(i\) by a general-purpose pretrained visual model \(\Phi\), and \(\tau\) is an empirically-chosen threshold.
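A minimal sketch of this criterion (again ours, not necessarily the exact implementation): after L2-normalizing the feature vectors \(\Phi(i)\), cosine similarity reduces to a dot product, and a greedy pass keeps a sample only if it stays below the threshold against everything kept so far.

```python
import numpy as np

def deduplicate(features: np.ndarray, tau: float = 0.98875) -> list:
    """Greedy near-duplicate removal.
    features: (M, n) array, one row Phi(i_j) per image.
    Returns indices of kept images, i.e. those whose cosine similarity
    with every previously kept image is <= tau."""
    feats = features / np.linalg.norm(features, axis=1, keepdims=True)
    kept: list = []
    for j, v in enumerate(feats):
        if not kept or float(np.max(feats[kept] @ v)) <= tau:
            kept.append(j)
    return kept
```

The quadratic scan is for clarity only; at the multi-million-image scale reported in Section 4, an approximate nearest-neighbour index would be the practical choice.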
### Cross-modal Distillation
We set up a cross-modal student-teacher learning paradigm fed by data streaming from the previous step.
Let \(g:T\rightarrow[0,1]^{N}\) be a pretrained textual sentiment polarity predictor that maps an input text into an \(N\)-dimensional categorical distribution and, similarly, let \(f:I\rightarrow[0,1]^{N}\) be an image classifier sharing the same label space as \(g\). Given a set of multimodal samples \(S=\{s_{j}\}_{j=1}^{M}\), we train the student model \(f\) to align its predictions on the visual modality with those of the teacher model \(g\) on the textual modality. Formally, for a single text-image pair \(s=(t,i)\), we minimize the following cross-entropy loss
\[\mathcal{L}(t,i)=-\lambda(g(t))\sum_{k=1}^{N}g_{k}(t)\log(f_{k}(i))\,, \tag{1}\]
where \(g_{k}(t)\) and \(f_{k}(i)\) indicate the \(k\)-th output of the teacher and student model, respectively, and
\[\lambda(g(t))=\begin{cases}1&\text{if }g_{\bar{k}}(t)\geq c_{\bar{k}}\,,\; \bar{k}=\operatorname*{argmax}_{k}g_{k}(t)\\ 0&\text{otherwise}\end{cases} \tag{2}\]
is a multiplier that filters out low-confidence samples, as it sets the sample loss to zero if the probability of the most confident class \(g_{\bar{k}}(t)\) is below a predefined threshold \(c_{\bar{k}}\) that is defined for each possible class \(\{c_{j}\}_{j=1}^{N}\,,c_{j}\in[0,1]\). We define \(\lambda(g(t))\) to represent a generic weighting scheme for training samples. Equation 2 represents a hard gating strategy based on the teacher's confidence. In future work, we plan to explore other formulations, such as soft gating. During training, the teacher model \(g\) is frozen, and only \(f\) is updated by gradient-based optimization until convergence.
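A possible PyTorch rendering of Equations 1 and 2 (a sketch, not the authors' released code): `teacher_probs` holds the frozen teacher outputs \(g(t)\), `student_logits` the outputs of \(f\) on the paired images, and `c` the per-class thresholds \(\{c_{j}\}_{j=1}^{N}\); averaging over the unmasked samples of a batch is our choice and is not prescribed by the text.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits: torch.Tensor,
                      teacher_probs: torch.Tensor,
                      c: torch.Tensor) -> torch.Tensor:
    """Soft-target cross-entropy (Eq. 1) with the hard confidence
    gate lambda of Eq. 2.
    student_logits: (B, N); teacher_probs: (B, N); c: (N,)."""
    log_f = F.log_softmax(student_logits, dim=-1)        # log f_k(i)
    ce = -(teacher_probs * log_f).sum(dim=-1)            # per-sample loss
    conf, k_bar = teacher_probs.max(dim=-1)              # g_kbar(t), kbar
    gate = (conf >= c[k_bar]).float()                    # lambda(g(t))
    return (gate * ce).sum() / gate.sum().clamp(min=1.0) # masked mean
```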
## 4 Experiments
### Experimental Setup
**Data Collection.** We collected roughly 3M tweets with 3.7M images (1.26 images per tweet on average) in three months between April and June 2022. Crawling was implemented via the Twitter API Volume Streams\({}^{1}\) that provides a streaming endpoint delivering roughly a 1% random sample of the global and publicly available tweets in real-time. For deduplication, we choose an ImageNet-pretrained ResNet-50 as feature vector extractor \(\Phi\); specifically, we use the max-pooled output of the sixth residual block as feature vector and mark tweets as duplicates if their image contents have very-high cosine similarity (\(\tau=0.98875\)). Deduplication yielded a \(\sim\)22% reduction of the image set, which went from \(\sim\)3.7M to \(\sim\)2.9M. In Table 1, we report a summary of collected data broken down by the three sentiment polarity classes induced by the teacher model chosen in our experimentation (more on this in the following subsections). As an additional source of samples, we also employ B-T4SA [21] -- a set of 470586 text+images tweets collected following the same crawling rules between July and December 2016. Following a chronological order, we refer to the B-T4SA dataset as A and our newly collected dataset as B.
Footnote 1: [https://developer.twitter.com/en/docs/twitter-api/tweets/volume-streams/introduction](https://developer.twitter.com/en/docs/twitter-api/tweets/volume-streams/introduction)
**Teacher Architecture.** Among the many approaches proposed in the literature for textual sentiment analysis, for the teacher model we choose a model from TimeLMs [10] -- a family of models trained with a _continual learning_ approach. It comprises a BERT-based model trained on real-time Twitter data and periodically released, enabling diachronic specialization that is particularly relevant in the social media domain, where the topic of discussion changes rapidly, as does the slang and language used. For instance, a model trained before 2019 would not be aware of the meaning of neologisms such as _"COVID-19"_ or of the different feelings we attach to _"swabs"_ or _"variant"_ after the pandemic. We select the TimeLM model released at the end of June 2022, fine-tuned for sentiment analysis on the TweetEval benchmark [1] and available in the TweetNLP library [1]. This choice also sets the granularity of the prediction (\(N=3\)), as
Figure 1: Overview of the proposed approach. Tweets are filtered and deduplicated to keep multimodal samples with long-enough English text and at least one image. Then, we apply cross-modal distillation; given a text-image pair \((i,t)\), a visual student model is trained to predict from \(i\) the same sentiment polarity of \(t\) inferred by a textual teacher model (a pretrained textual sentiment classifier).
the model has three possible outputs: 'positive', 'neutral', or 'negative' sentiment polarity.
**Student Architecture.** As the visual student model, we select a Vision Transformer (ViT) [11] with the final head adjusted to output \(N=3\) logits. We start training from the publicly available checkpoints pretrained on _Imagenet-21k_ and on _Imagenet-1k_. During training, we employ data augmentation on the visual pipeline by applying random horizontal flips, shifts, and rotations. Optimization is carried out using the Adam optimizer with an initial learning rate of \(10^{-4}\), \(\epsilon=10^{-7}\), \(\beta_{1}=0.9\), and \(\beta_{2}=0.999\).
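Concretely, this setup might be instantiated as follows; the `timm` model name and the augmentation magnitudes are our assumptions (the paper does not state them), while the optimizer hyper-parameters are the ones reported above.

```python
import timm
import torch
from torchvision import transforms

# ViT-B/32 student with a 3-way head (positive / neutral / negative);
# the exact checkpoint name is an assumption, not stated in the paper
model = timm.create_model("vit_base_patch32_224",
                          pretrained=True, num_classes=3)

# random horizontal flips, shifts, and rotations (magnitudes assumed)
augment = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.RandomAffine(degrees=15, translate=(0.1, 0.1)),
])

optimizer = torch.optim.Adam(model.parameters(),
                             lr=1e-4, eps=1e-7, betas=(0.9, 0.999))
```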
### Benchmarks
To test the effectiveness of the proposed cross-modal training process, we evaluate our models on the benchmarks for image sentiment polarity prediction manually annotated via Amazon Mechanical Turk (AMT). We consider a) Twitter Dataset (TD) [23], b) Flickr&Instagram (FI) [23], and c) EmotionROI [12]. TD provides three benchmarks corresponding to three different levels of label agreement, i.e., where at least five, four, or three AMT workers agreed on the labels assigned to images. The other datasets provide a single set of images with already aggregated labels. TD provides binary labels ('positive' or 'negative') for sentiment polarity. Thus we mask the neutral class output of our models and take the maximum confidence among positive and negative outputs.
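In code, this masking step might read as follows; the output column ordering [negative, neutral, positive] is an assumption about the classifier head, not something the paper specifies.

```python
import torch

def binary_polarity(probs: torch.Tensor) -> torch.Tensor:
    """probs: (B, 3), columns assumed [negative, neutral, positive].
    Mask the neutral output and keep the more confident polarity:
    returns 1 for 'positive', 0 for 'negative'."""
    pos_neg = probs[:, [2, 0]]                  # (B, 2): [positive, negative]
    return (pos_neg.argmax(dim=-1) == 0).long()
```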
FI and EmotionROI provide fine-grained sentiment annotations and are used in the literature as sentiment polarity benchmarks by mapping labels into the two 'positive' and 'negative' polarities [24]. In particular, for dataset FI, the emotions of _Awe_, _Amusement_, _Excitement_, and _Contentment_ are mapped to the 'positive' polarity while _Fear_, _Disgust_, _Sadness_, and _Anger_ to 'negative'. For EmotionROI, _Anger_, _Disgust_, _Fear_, and _Sadness_ are relabeled as 'negative', and _Joy_ and _Surprise_ as 'positive'. Table 2 reports the characteristics of each dataset, and Figure 2 shows some examples. We adopt TD for preliminary experiments and ablation studies, while we compare the best-performing models with other state-of-the-art methods on all the mentioned benchmarks.
### Ablation study
In this section, we evaluate how aspects such as data freshness, data filtering, and model architecture can affect the effectiveness of trained models. We perform experiments varying the inputs and hyperparameters of our approach and producing several models. We apply the obtained models on the TD benchmark in a zero-shot configuration (no learning on benchmark data is performed) and measure the classification accuracy. Table 3 reports all the obtained results we discuss below.
**Confidence Filtering.** In this experiment, we fix the input set (A) and the student model architecture (ViT Base with 86M parameters and a patch size of 32) and run our pipeline with or without confidence filtering, i.e., setting \(c_{j}=.70\,,\forall j\) or \(c_{j}=0\,,\forall j\) in Equation 2. Comparing rows 3.1 and 3.2 in Table 3, we note that masking low-confidence samples in the student loss helps increase accuracy by 1-2%.
**Input Data.** In experiment 3.3, we repeat experiment 3.2, swapping the set A collected in 2016 with the one collected by us in 2022 (B). We observed a small accuracy loss in the five-agree benchmark. Despite having more images, our set is more unbalanced towards positive and neutral classes with respect to A, which the original authors already balanced during data cleaning. Indeed, setting higher confidence thresholds for those classes (experiment 3.4) mitigates this problem and provides additional improvements also to the lower-agree benchmarks. Combining the two sets (experiment 3.5) further increases performance by \(\sim\)2%.
**Student Architecture.** We evaluate scaling the model parameters and the patch size of the student ViT architecture. Starting with the configuration of experiment 3.5, in experiment 3.6, we swap the student model for the larger ViT-Large (307M parameters, \(\sim\)3.5x more than ViT-Base), while in experiments 3.7 and 3.8, we repeat experiments 3.5 and 3.6 decreasing the input patch size from 32 to 16 (4x larger input sequences). Decreasing patch size alone (3.7) is more effective than increasing model parameters (3.6), as the visual model can grasp finer details of the input image. Scaling both dimensions together (3.8) produces our best-performing configuration, confirming recent findings [15].
### Comparison with State of the Art
We compare our best model (ViT-L/16 trained on A+B) to state-of-the-art methods on the five manually-labeled benchmarks for image sentiment polarity described in Section 4.2. For a fair comparison, we follow the evaluation protocol of previous work [24] that includes fine-tuning the models on the benchmark data. Specifically, for TD and Emotion ROI, 5-fold cross-validation is performed, while for FI, models are trained on five random splits with 80/5/15 proportions of training/validation/test subsets. For each benchmark, we measure the mean and standard deviation of the accuracy on the test splits.
As seen in Table 4, our models outperform or are comparable to other state-of-the-art methods in all benchmarks. Without fine-tuning, our models still obtain satisfactory results. For the TD benchmark, which shares a data distribution similar to that of the crawled data used, our model achieves an accuracy comparable to fine-tuned state-of-the-art models, even outperforming them on the 3-agreement subset. On the other hand, the distribution shift between Twitter images and
| **Sentiment** | # tweets (collected) | # images (collected) | # images (deduplicated) |
| --- | --- | --- | --- |
| Positive | 1 206 158 | 1 593 484 | 1 299 916 |
| Neutral | 1 403 683 | 1 708 195 | 1 293 259 |
| Negative | 356 002 | 433 172 | 329 395 |
| **Total** | 2 965 843 | 3 734 849 | 2 922 568 |

Table 1: Summary of collected and preprocessed training data broken down by the sentiment polarity assigned by the teacher model (TimeLM Jun-2022 fine-tuned on TweetEval).
| **Model** | TD 5 agree | TD \(\geq\)4 agree | TD \(\geq\)3 agree | Emotion ROI | FI |
| --- | --- | --- | --- | --- | --- |
| Chen _et al._ (2014)* | 76.4 | 70.2 | 71.3 | 70.1 | 61.5 |
| You _et al._ (2015)* | 82.5 | 76.5 | 76.4 | 73.6 | 75.3 |
| Jou _et al._ (2015)\({}^{\dagger}\) | 83.9\(\pm\)0.3 | -- | -- | -- | -- |
| Vadicamo _et al._ (2017) | 89.6 | 86.6 | 82.0 | -- | -- |
| Yang _et al._ (2018)* | 88.7 | 85.1 | 81.1 | 81.3 | 86.4 |
| Wu _et al._ (2020) | 89.5 | 87.0 | 81.7 | **83.0** | 88.8 |
| ViT-L/16 (no fine-tuning) | 87.8 | 84.8 | 81.9 | 64.1 | 76.0 |
| ViT-L/16 | **92.4\(\pm\)2.0** | **90.2\(\pm\)2.0** | **86.3\(\pm\)3.0** | **83.9\(\pm\)1.0** | **89.4\(\pm\)0.1** |

*As reported by Wu _et al._ (2020). \({}^{\dagger}\)As reported by Campos _et al._ (2017).

Table 4: Accuracy on standard benchmarks compared with state-of-the-art image sentiment polarity predictors.
| # | Dataset | \(c\) (pos) | \(c\) (neu) | \(c\) (neg) | Student Model | 5 agree | \(\geq\)4 agree | \(\geq\)3 agree |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 3.1 | A | -- | -- | -- | B/32 | 82.2 | 78.0 | 75.5 |
| 3.2 | A | .70 | .70 | .70 | B/32 | 84.7 | 79.7 | 76.6 |
| 3.3 | B | .70 | .70 | .70 | B/32 | 82.3 | 78.7 | 75.3 |
| 3.4 | B | .90 | .90 | .70 | B/32 | 84.4 | 80.3 | 77.1 |
| 3.5 | A+B | .90 | .90 | .70 | B/32 | 86.5 | 82.6 | 78.9 |
| 3.6 | A+B | .90 | .90 | .70 | L/32 | 85.0 | 82.4 | 79.4 |
| 3.7 | A+B | .90 | .90 | .70 | B/16 | 87.0 | 83.1 | 79.4 |
| 3.8 | A+B | .90 | .90 | .70 | L/16 | 87.8 | 84.8 | 81.9 |

Table 3: Ablation study. Accuracy on the three Twitter Dataset benchmarks (at-least-five-, four-, and three-agreement subsets). A = set of tweets collected in Jul-Dec 2016 by Vadicamo _et al._ (2017). B = set of tweets collected in Apr-Jun 2022 by us. The confidence filter columns report the values used for the parameters \(\{c_{j}\}_{j=1}^{3}\), ordered here as positive/neutral/negative (following the class-imbalance discussion in Section 4.3).
Figure 2: Samples from the manually-annotated benchmark used for evaluation. From left to right, we show a positive and a negative sample for TD, EmotionROI, and FI benchmarks.
the Emotion ROI and FI benchmarks is too significant to ensure generalization. We deem the culprits to be the class distribution for Emotion ROI, which privileges a negative sentiment polarity contrary to the other datasets, and the domain gap for FI, where images comprise more high-quality artistic pictures rather than synthetic/edited images and pictures taken with a smartphone. However, fine-tuning reduces these gaps, showing that the knowledge in our model can be easily transferred to other domains.
In Figure 3, we report some cherry-picked failure cases of our best model (non-finetuned ViT-L/16) on the Twitter Dataset benchmark. Most failure cases comprise very subjective samples, for which the correct label is not immediately clear, even for a human judge.
## 5 Conclusion
We presented an automated approach to obtain trained models for visual sentiment analysis targeted for social media mining. Harnessing existing resources for textual sentiment analysis, the proposed cross-modal distillation approach can produce robust models for image sentiment polarity prediction without any human intervention in data collection or labeling. The experimental phase on Twitter data showed that our models reached a significant performance on manually-annotated benchmarks, setting the new state of the art on five of them. All the collected data, the annotated datasets, and the trained models will be publicly available. Moreover, the presented pipeline enables the production of visual diachronic models via continual learning from streaming social media data.
However, several limitations remain to be tackled. One of the main issues (and thus a motivation for future work) is the lack of zero-shot generalization to other domains, i.e., other social media platforms. Although fine-tuning our models demonstrated a great transferability of the knowledge extracted from Twitter data, applying the model as-is yielded a satisfactory performance only on same-domain data. Drawing data from a stream of multiple social media would improve zero-shot generalization and enable experimentation on larger scales. Moreover, confidence filtering is still manually tuned for the particular distribution of input data, while an adaptive online balancing of samples will be explored in future work.
## Ethical Statement
The use of sentiment analysis by large corporations to achieve commercial benefit poses an ethical issue, as it runs the risk of causing detrimental effects on individuals or groups of people. Moreover, the proposed method is intended to be used in conjunction with ethical web scraping. The experiments reported in this work have been conducted exploiting the Twitter developer API complying with their Terms of Service.
## Acknowledgments
This work was partially funded by: AI4Media - A European Excellence Centre for Media, Society and Democracy (EC, H2020 n. 951911); SERICS (PE00000014) under the MUR National Recovery and Resilience Plan funded by European Union - NextGenerationEU.
|
2304.05697 | Foundations for an Abstract Proof Theory in the Context of Horn Rules | We introduce a novel, logic-independent framework for the study of
sequent-style proof systems, which covers a number of proof-theoretic
formalisms and concrete proof systems that appear in the literature. In
particular, we introduce a generalized form of sequents, dubbed 'g-sequents,'
which are taken to be binary graphs of typical, Gentzen-style sequents. We then
define a variety of 'inference rule types' as sets of operations that act over
such objects, and define 'abstract (sequent) calculi' as pairs consisting of a
set of g-sequents together with a finite set of operations. Our approach
permits an analysis of how certain inference rule types interact in a general
setting, demonstrating under what conditions rules of a specific type can be
permuted with or simulated by others, and being applicable to any sequent-style
proof system that fits within our framework. We then leverage our permutation
and simulation results to establish generic calculus and proof transformation
algorithms, which show that every abstract calculus can be effectively
transformed into a lattice of polynomially equivalent abstract calculi. We
determine the complexity of computing this lattice and compute the relative
sizes of proofs and sequents within distinct calculi of a lattice. We recognize
that top elements in lattices correspond to nested sequent systems, while
bottom elements correspond to labeled sequent systems, and observe that top and
bottom elements coincide with many known (cut-free) nested and labeled sequent
systems for logics characterized by Horn properties. | Tim S. Lyon, Piotr Ostropolski-Nalewaja | 2023-04-12T08:40:20Z | http://arxiv.org/abs/2304.05697v1 | # Foundations for an Abstract Proof Theory in the Context of Horn Rules
Tim S. Lyon\({}^{1}\) and Piotr Ostropolski-Nalewaja\({}^{1,2}\)
\({}^{1}\)Computational Logic Group, TU Dresden
\({}^{2}\)University of Wroclaw
###### Abstract
We introduce a novel, logic-independent framework for the study of sequent-style proof systems, which covers a number of proof-theoretic formalisms and concrete proof systems that appear in the literature. In particular, we introduce a generalized form of sequents, dubbed \(g\)-_sequents_, which are taken to be binary graphs of typical, Gentzen-style sequents. We then define a variety of _inference rule types_ as sets of operations that act over such objects, and define _abstract (sequent) calculi_ as pairs consisting of a set of \(g\)-sequents together with a finite set of operations. Our approach permits an analysis of how certain inference rule types interact in a general setting, demonstrating under what conditions rules of a specific type can be permuted with or simulated by others, and being applicable to any sequent-style proof system that fits within our framework. We then leverage our permutation and simulation results to establish generic calculus and proof transformation algorithms, which show that every abstract calculus can be effectively transformed into a lattice of polynomially equivalent abstract calculi. We determine the complexity of computing this lattice and compute the relative sizes of proofs and sequents within distinct calculi of a lattice. We recognize that top elements in lattices correspond to nested sequent systems, while bottom elements correspond to labeled sequent systems, and observe that top and bottom elements coincide with many known (cut-free) nested and labeled sequent systems for logics characterized by Horn properties.
## I Introduction
Proof calculi are indispensable tools in the theory and application of logics, serving as engines that facilitate reasoning within a given logical paradigm. Of particular importance are _sequent-style calculi_, which were first introduced by Gentzen in the 1930s [1, 2]. Gentzen's sequent systems consist of _inference rules_, which operate over formulae called _sequents_, i.e. formulae of the form \(\varphi_{1},\ldots,\varphi_{n}\Rightarrow\psi_{1},\ldots,\psi_{k}\) with \(\varphi_{i}\) and \(\psi_{j}\) logical formulae, being used to derive theorems of a specified logic. A crucial feature of Gentzen's sequent calculi is that they exhibit the so-called _sub-formula property_, meaning every formula occurring in the premise of an inference rule is a sub-formula of one occurring in the conclusion of the rule. This feature, and the sequent formalism more generally, proved to be fruitful from both a theoretical and practical standpoint, being used to supply proof systems for a wide array of logics [3, 4, 5, 6], to discover new logics [7], to establish properties of logics (e.g. interpolation [8]), and to automate reasoning with logics [9, 10].
Yet, the discovery of new, expressive logics (e.g. the modal logic S5 and bi-intuitionistic logic) led to the realization that the sequent formalism was _too strict_ as sequent calculi exhibiting the sub-formula property remained elusive [11, 12]. In response, a variety of formalisms extending Gentzen's traditional sequent formalism were introduced to recapture the sub-formula property; e.g. hypersequents were introduced as multisets of sequents [13, 14], \(2\)-sequents/linear nested sequents were introduced as lines of sequents [15, 16], nested sequents were introduced as trees of sequents [17, 18], and labeled sequents were introduced [19, 20], being interpretable as binary graphs of sequents [21, 22].
Such proof systems have found a wide array of applications for diverse classes of logics, being used in the design of interpolant construction algorithms [23, 24, 25], in writing decision algorithms (with counter-model extraction) [26, 27, 28], and have been applied in knowledge integration scenarios [29]. Nevertheless, it was found that differing formalisms possessed distinct advantages over one another; e.g. nested calculi were found to be suitable for writing proof-search and decision algorithms [28, 29], whereas labeled calculi were found to admit algorithmic construction [30].
Naturally, the arrival of new sequent-based formalisms gave rise to questions concerning their relationships: How are calculi in one formalism transformed into 'deductively equivalent' calculi in another? What are the relative sizes of proofs and sequents in one formalism compared to another? Under what conditions are proofs transformable between formalisms and what are the complexity bounds thereof? Such questions have typically been investigated in restricted concrete settings, focusing on specific sequent-based calculi for known classes of logics [21, 31, 32, 16, 33]. In contrast, we propose an alternative methodology, designing a novel _abstract framework_ for the general study of structural sequent calculi.1
Footnote 1: Due to the duality exhibited between sequent systems and tableaux [34], our work is also applicable to the latter.
In particular, we formulate calculi as pairs consisting of (1) a set of objects called _generalized sequents_ accompanied by (2) a finite set of inference rules. We therefore shift our attention from proof systems for logics, and instead, focus on proof systems in and of themselves, yielding a _logic independent_ approach for studying the properties of, and relationships between, sequent-style systems. Due to its generality, our framework subsumes the various sequent-based formalisms mentioned above, and our results hold for any logic or sequent-style system that can be viewed as an object in our framework. Specifically, we accomplish the following:
\(\bullet\) We generalize the notion of _sequent_ to a graph of Gentzen-style sequents, referred to as _g-sequents_, which cover various kinds of sequents (e.g. labeled, nested, linear nested) that commonly appear in proof-theoretic works (a small illustrative sketch follows this list).
\(\bullet\) We generalize inference rules to select _inference rule types_ that operate over g-sequents, revealing the critical components that constitute an inference rule. These inference rule types subsume standard inference rules for sequent-style systems.
\(\bullet\) We define a generic notion of calculus, so that our results hold generally for all sequent-style calculi that can be viewed as objects within our framework.
\(\bullet\) Our abstract calculi include structural rules that facilitate reasoning with Horn properties. Therefore, a sizable number of (non-)classical logics, semantically characterized by 'Horn' frame conditions, and their accompanying sequent-based systems are subsumed by our work. Examples of logics with proof systems covered by our framework can be viewed in Figure 1.
\(\bullet\) We define proof transformation notions (e.g. _permutation_ and _simulation_) as well as explain how to strengthen or weaken certain rules (via notions called _absorption_ and _fracturing_), which are used to provide generic calculus and derivation transformation algorithms and to compute complexity bounds thereof. This work contributes to a better understanding of how structural rules are eliminated from proofs, and how reachability and propagation rules [35, 22] arise from this process.
\(\bullet\) We discover that every abstract calculus exists within a finite lattice of 'polynomially equivalent' abstract calculi, which we show how to compute. We observe that the top and bottom elements of a lattice are always one of two calculus types, which we call _implicit_ and _explicit_ calculi, respectively. When we instantiate a lattice with known sequent-style systems, we find that nested calculi [17, 18] serve as the top element, whereas labeled calculi [19, 20] serve as the bottom element, establishing that nested and labeled calculi are _dual_.
Our abstract approach has explanatory value, yielding deep insights into the nature of, and connection between, sequent-based systems. Moreover, we provide a widely applicable toolkit for the manipulation of proofs and proof systems.
This paper is organized as follows: In Section II, we explain how our framework was designed by abstracting general underlying patterns appearing in calculi, considering various inference rule types and proof manipulation techniques. In Section III, we define our framework, and then put it to use in Section IV to establish a large number of permutation and simulation relationships between various inference rule types. In Section V, we demonstrate how abstract calculi can be converted into lattices of polynomially equivalent calculi. Finally, in Section VI, we discuss how nested and labeled systems can be identified with top and bottom elements of lattices, and discuss avenues for future research. Due to space constraints, we defer all proofs to the appendix.
## II Overview of our Approach
We now turn our attention toward explaining our abstract proof-theoretic framework, which arises from the study of numerous calculi and the formalization of underlying patterns. While it is infeasible to thoroughly describe this lengthy process, we can provide a narrated walk-through for a single (labeled) sequent calculus, which is the purpose of this section. It is meant to put the reader in a certain mindset by following a concrete example foreshadowing our general framework, defined in Section III. It is worth noting that while this section describes the aforementioned process, Section III reflects its entirety by providing the end result; thus, the reader may find it worthwhile to consult the exact definitions in the following section.
We have chosen a fragment of the labeled sequent calculus \(\mathsf{G3l}\) [38] for propositional intuitionistic logic to use as our running example. We denote this fragment by \(\mathsf{G3l}^{\prime}\), and define it to be the set of rules shown in Figure 2.2 This calculus avoids the unnecessary complexities of other sequent-based systems, while also possessing representative features that justify concepts defined later within our framework.
Footnote 2: We employ a slight variation of the notation used for labeled sequents in [38] to better fit within the notation of our framework.
### _The Structure of Sequents and Inference Rules_
As mentioned in Section I, generalizations of Gentzen-style sequents (we henceforth refer to _Gentzen-style sequents_ simply as _sequents_) take various forms, typically being types of graphs with sequents as vertices. These may take the form of general graphs [19], polytrees [21], trees [18], lines [15], or points, yielding standard sequents [1, 2]; e.g. the labeled sequents used in \(\mathsf{G3l}^{\prime}\) take the form of graphs, as explained below.
Labeled sequents in G3l\({}^{\prime}\) are objects of the form \(\mathcal{R},\Gamma\vdash\Delta\), where \(\mathcal{R}\) is a set of _relational atoms_ (or, _edges_) of the form \(wEu\) and \(\Gamma,\Delta\) are multisets of _labeled formulae_ of the form \(w:\varphi\), which use a set \(\{w,u,v,\ldots\}\) of _labels_. In G3l\({}^{\prime}\), labeled formulae employ formulae from the language \(\mathcal{L}\) of propositional intuitionistic logic, generated via the following grammar in BNF: \(\varphi:=p\mid\bot\mid\varphi\lor\varphi\mid\varphi\land\varphi\mid\varphi \supset\varphi\), where \(p\) ranges over a set of propositional variables.
Fig. 1: Logics and associated sequent-style calculi covered by our abstract framework. Citations to papers with (cut-free) labeled and nested sequent systems thereof are provided in the second and third columns.
We observe that each labeled sequent can be viewed as a binary graph of sequents, obtained by depicting all labels as vertices, all relational atoms as edges, and all labeled formulae as sequents labeling nodes (cf. [22]). For example, letting \(\mathcal{R}=wEw,wEz,wEv,wEu,uEv\), \(\Gamma=w:\varphi,u:\chi,z:\psi\), and \(\Delta=w:\psi,z:\chi,z:\xi\), the labeled sequent \(\mathcal{R},\Gamma\ \vdash\ \Delta\) corresponds to a graph with vertices \(w,u,v,z\). Every labeled sequent can be rewritten in an equivalent form \(\mathcal{R}\ \vdash\ \Sigma\), where \(\mathcal{R}\) is a set of relational atoms as before, but \(\Sigma\) is a set of prefixed sequents; e.g. the labeled sequent \(\mathcal{R},\Gamma\ \vdash\ \Delta\) above can be written as \(\mathcal{R}\ \vdash\ \Sigma\), where
\[\Sigma=w:(\varphi\Rightarrow\psi),z:(\psi\Rightarrow\chi,\xi),u:(\chi \Rightarrow\emptyset),v:(\emptyset\Rightarrow\emptyset).\]
We view this perspective of labeled (or, graphical) sequents as beneficial for a couple of reasons: (1) The internal structure of a sequent is _logic-dependent_; e.g. certain intuitionistic logics may restrict the succedent to at most one formula [1, 2], while certain sub-structural logics may employ sequences of formulae as opposed to (multi)sets in the antecedent or succedent [7]. As we are interested in providing a generic framework that studies the _graphical properties_ of 'generalized or graphical sequents' and their associated proof systems, we may simply view sequents as _types of labels_. We therefore define the notion of a _generalized sequent_ (_g-sequent_ for short) as a graph of sequents without specifying the internal structure of such sequents, yielding a logic-independent study of sequent-style systems, as presented in Section III. (2) As mentioned above, various sequent-style formalisms beget proof systems that operate over certain types of graphs of sequents. Thus, our notion of g-sequent captures all such formalisms uniformly, since restricting the g-sequents used yields a particular formalism.
We also find it important to clarify the connection between our notation and the notation typically used in labeled sequents, as this demonstrates how various sequent-style systems can be viewed as objects within our framework. Thus, we will view labeled sequents in the examples below as binary graphs of sequents. Moreover, we remark that a variety of works clarify how generalized versions of sequents correspond to graphs of sequents and how labeled sequents subsume such formalisms [21, 32, 47, 22], letting one view various proof systems as systems within our formalism.
Since we have adopted the view that labeled sequents are graphs of sequents, we reinterpret the labeled sequents and inference rules of \(\mathsf{G3l}^{\prime}\) in light of this perspective. We now take a labeled sequent to be an object of the form \(\mathcal{R}\ \vdash\ \Sigma\) such that \(\mathcal{R}\) is a set of edges (i.e. relational atoms) as before and \(\Sigma\) is a set of _prefixed sequents_, which are of the form \(w:(X\Rightarrow Y)\) with \(X=\varphi_{1},\ldots,\varphi_{n}\) and \(Y=\psi_{1},\ldots,\psi_{k}\) multisets of intuitionistic formulae. We rewrite the inference rules of \(\mathsf{G3l}^{\prime}\) in this notation and find certain commonalities among sets of rules in \(\mathsf{G3l}^{\prime}\) that give rise to various notions of _inference rule types_, which we now describe.
**Initial Rules.** In our new notation, the \((id)\) rule takes the form shown below.
\[\overline{\mathcal{R},\,wEu\ \vdash\ \Sigma,\,w:(X,p\Rightarrow Y),\,u:(X^{ \prime}\Rightarrow p,Y^{\prime})\ }(id)\]
We ask: what are the features of an inference rule that make it initial? Obviously, they are free of premises, and dictate what is taken to be axiomatic. Second, we observe that such rules may rely on the existence of relational data; e.g. in the \((id)\) rule above, an edge \(wEu\) connecting one sequent of a certain type to another sequent of a certain type must be present. That is, \((id)\) is subject to a _structural constraint_, which we may formalize as a labeled graph of the form \(C=(\{w,u\},\{(w,u)\},L)\) such that \(L(w,u)=E\). One may verify that an instance of \((id)\) satisfies such a constraint in the sense that any instance of \((id)\) can be 'pattern matched' to such a constraint (with the edge \((w,u)\) being associated with \(wEu\)). Third, we notice that although structural constraints appear to be critical features of initial (or, inference) rules, such objects are not enough to clearly express the operation of \((id)\). We also require that prefixed sequents satisfy a certain relation, which we refer to as a _sequent constraint_. For example, in the \((id)\) rule above, a relation \(R\) is additionally required, where \(R(S_{1},S_{2})\) holds with \(S_{1}=X_{1}\Rightarrow Y_{1}\) and \(S_{2}=X_{2}\Rightarrow Y_{2}\)_iff_\(p\in X_{1}\) and \(p\in Y_{2}\).
**Local Rules.** We define a _local rule_ to be an inference rule that only operates on sequents at a specific label. For example, if we rewrite \((\vee_{L})\) in our notation, the rule becomes:
\[\frac{\mathcal{R}\ \vdash\ \Sigma,w:(X,\varphi\Rightarrow Y)\quad\ \mathcal{R}\ \vdash\ \Sigma,w:(X,\psi\Rightarrow Y)}{\mathcal{R}\ \vdash\ \Sigma,w:(X,\varphi\vee\psi\Rightarrow Y)}\ (\vee_{L})\]
Observe that this rule is local in the sense that it only manipulates data occurring in sequents at the label \(w\). As in the initial rule case above, we recognize that a sequent constraint is required to fully specify the operation of the \((\vee_{L})\) rule: for sequents \(S_{1}=X_{1}\Rightarrow Y_{1}\), \(S_{2}=X_{2}\Rightarrow Y_{2}\), and \(S_{3}=X_{3}\Rightarrow Y_{3}\), we define \(R(S_{1},S_{2},S_{3})\)_iff_\(\varphi\in X_{1}\), \(\psi\in X_{2}\), and \(\varphi\vee\psi\in X_{3}\). Such a relation must hold in any application of this rule for it to qualify as a valid rule application.
Fig. 2: Some inference rules from the labeled calculus \(\mathsf{G3l}\) for propositional intuitionistic logic [38]. We let \(\mathsf{G3l}^{\prime}\) denote the collection of the above rules. The side condition \(\dagger\) stipulates that the rule is applicable only if \(u\) is fresh, i.e. \(u\) does not occur in the surrounding context \(\mathcal{R},\Gamma,\Delta\).
**Expansion Rules.** We classify _expansion rules_ as inference rules that bottom-up introduce an edge (in \(\mathcal{R}\)) to a fresh label, thus expanding the graphical structure of the conclusion. The \((\supset_{R})\) rule serves as an example of an expansion rule, which takes the following form when rewritten in our notation:
\[\frac{\mathcal{R},\,wEu\ \vdash\ \Sigma,\,w:(X\Rightarrow Y),u:(\varphi \Rightarrow\psi)}{\mathcal{R}\ \vdash\ \Sigma,\,w:(X\Rightarrow Y,\varphi\supset\psi)}\ (\supset_{R})\]
Similar to the case of the \((\lor_{L})\) rule above, we observe that a sequent constraint must hold, specifying how the sequents at \(w\) and \(u\) in the premise relate to each other and the sequent at \(w\) in the conclusion.
**Transmission Rules.** We take a _transmission rule_ to be an inference rule that updates two sequents connected by a single edge. The \((\supset_{L})\) rule serves as an example of a transmission rule, which takes the following form in our notation:
\[\mathcal{G}_{1}=\mathcal{R},\,wEu\ \vdash\ \Sigma,\,w:(X,\varphi\supset\psi\Rightarrow Y),\,u:(X^{\prime}\Rightarrow\varphi,Y^{\prime})\]
\[\mathcal{G}_{2}=\mathcal{R},\,wEu\ \vdash\ \Sigma,\,w:(X,\varphi\supset\psi\Rightarrow Y),\,u:(X^{\prime},\psi\Rightarrow Y^{\prime})\]
\[\frac{\mathcal{G}_{1}\qquad\mathcal{G}_{2}}{\mathcal{R},\,wEu\ \vdash\ \Sigma,\,w:(X,\varphi\supset\psi\Rightarrow Y),\,u:(X^{\prime}\Rightarrow Y^{\prime})}\ (\supset_{L})\]
Similar to the case of the \((id)\) rule, the use of an edge (viz. \(wEu\)) in updating the sequents at \(w\) and \(u\) implies that structural constraints must be enforced on the left and right premises. In addition, a sequent constraint is required to fully specify that operation of the above rule, relating the sequents at \(w\) and \(u\) in the left and right premises with the sequents at \(w\) and \(u\) in the conclusion.
**Horn Rules.** In our setting we consider _Horn rules_ to be inference rules that encode a Horn property, stipulating that if a certain sequence of edges exists in the conclusion of the rule, then a single type of edge must occur in the premise (cf. [20, 44]). Such rules serve as types of _structural rules_ [21] or _relational rules_ [20] existing in the literature. The \((ref)\) and \((tra)\) rules stand as examples of Horn rules, which take the following form in our notation:
\[\frac{\mathcal{R},\,wEw\ \vdash\ \Sigma}{\mathcal{R}\ \vdash\ \Sigma}\ (ref)\ \ \ \ \ \ \ \frac{\mathcal{R},wEu,uEv,wEv\ \vdash\ \Sigma}{\mathcal{R},\,wEu,\,uEv\ \vdash\ \Sigma}\ (tra)\]
The \((ref)\) rule encodes reflexivity, adding a single 'loop' (i.e. \(wEw\)) to the premise, whereas the \((tra)\) rule encodes transitivity, requiring a sequence of two edges (i.e. \(wEu,uEv\)) and connecting \(w\) to \(v\) via a single edge (i.e. \(wEv\)) in the premise. Both rules encode types of Horn conditions, and we note that their functionality can be specified without the use of constraints. As we discuss below, Horn rules can be 'absorbed' into the constraints associated with initial and transmission rules, producing new inference rules.
### _Calculus Transformation and Rule Trading_
Permutation arguments are at the heart of proof theory; e.g. Gentzen's celebrated cut-elimination theorem shows how the cut rule can be eliminated via permutations, yielding a proof exhibiting the sub-formula property [1, 2]. Likewise, simulations between sets of inference rules are of critical importance, as they can be used to establish the 'relative strength' of proof systems and the relative sizes of proofs. In Section IV, we will define these notions, using them to confirm a broad set of general relationships between rule types within our framework, and assisting us in writing generic algorithms (with complexity bounds) that transform calculi and their associated proofs.
We now exemplify simulations and permutations in the context of \(\mathsf{G3l}^{\prime}\). In particular, we look at how initial, transmission, and Horn rules relate to one another. This investigation will demonstrate the connection between structural constraints and Horn rules, justifying their presence in our framework.
We begin by studying simulations between the initial rule \((id)\) and the Horn rules \((ref)\) and \((tra)\), and look at the cases where the explicit edge \(wEu\) in \((id)\) is 'active' in applications of \((ref)\) and \((tra)\). The first case yields a derivation of the following form:
\[\frac{\mathcal{R},\,wEw\ \vdash\ \Sigma,\,w:(X,p\Rightarrow p,Y)}{\mathcal{R} \ \vdash\ \Sigma,\,w:(X,p\Rightarrow p,Y)}\ (ref)\]
whereas the second case yields a derivation of the form:
\[\frac{\mathcal{R},\,wEu,\,uEv,\,wEv\ \vdash\ \Sigma^{\prime}}{\mathcal{R},\,wEu,\, uEv\ \vdash\ \Sigma^{\prime}}\ (tra)\]
with \(\Sigma^{\prime}=\Sigma,\,w:(X,p\Rightarrow Y),v:(X^{\prime}\Rightarrow p,Y^{ \prime})\). We observe that the conclusion in the \((ref)\) case is similar to an instance of \((id)\). However, whereas \((id)\) requires the existence of prefixed sequents of the form \(w:(X,p\Rightarrow Y)\) and \(u:(X^{\prime}\Rightarrow p,Y^{\prime})\) connected by a single edge \(wEu\), the conclusion of \((ref)\)_identifies_ these two sequents as \(w:(X,p\Rightarrow p,Y)\) and omits the occurrence of an edge. In the \((tra)\) case, the conclusion of \((tra)\) contains two prefixed sequents like \((id)\), but with these two prefixed sequents connected by a path of edges \(wEu,uEv\). Taking this into account, we recognize that we could simulate such derivations with a stronger form of \((id)\) that absorbs the functionality of the \((ref)\) and \((tra)\) rules:
\[\overline{\mathcal{R}\ \vdash\ \Sigma,\,w:(X,p\Rightarrow Y),\,u:(X^{\prime}\Rightarrow p,Y^{\prime})}\ (id)^{\prime}\]
where \((id)^{\prime}\) is subject to the side condition that a path \(wEv_{1},\dots,v_{n-1}Eu\) of relational atoms of length \(0\) (meaning \(w=u\)) or greater exists between \(w\) and \(u\). We can formalize this requirement as a structural constraint of the form \(C=(\{w,u\},\{(w,u)\},L)\) with \(L(w,u)\in\{\varepsilon,E,EE,\ldots\}\), where \(\varepsilon\) is the empty string (meaning \(w=u\)), \(E\) is treated as a character, and each \(EE\cdots E\) is a word. Moreover, we require the same sequent relation to be enforced on \((id)^{\prime}\) just as it was with \((id)\). We can take the conclusion of \((ref)\) (in the derivation above) to be an instance of \((id)^{\prime}\) where \(L(w,u)=\varepsilon\), the conclusion of a typical \((id)\) rule to be an instance of \((id)^{\prime}\) where \(L(w,u)=E\), and the conclusion of \((tra)\) to be an instance of \((id)^{\prime}\) where \(L(w,u)=EE\).
One can indeed show that any labeled sequent derivable by \((id)\) followed by applications of \((ref)\) or \((tra)\) can be simulated by \((id)^{\prime}\) and vice-versa [40, 22]. Furthermore, this example justifies the inclusion of constraints in our framework as it shows that constraints can be modified, generating
stronger inference rules, and forging new derivations that simulate others, effectively yielding new types of calculi.
We also observe a similar behavior when applying \((ref)\) and \((tra)\) to the transmission rule \((\supset_{L})\). Let us consider applying the \((ref)\) rule after an instance of \((\supset_{L})\) such that the relational atom 'active' in the latter is removed by \((ref)\). We then have a derivation of the following form:
\[\mathcal{G}_{1}=\mathcal{R},\,wEw\ \vdash\ \Sigma,\,w:(X,\varphi\supset\psi\Rightarrow\varphi,Y)\qquad\mathcal{G}_{2}=\mathcal{R},\,wEw\ \vdash\ \Sigma,\,w:(X,\varphi\supset\psi,\psi\Rightarrow Y)\]
\[\frac{\dfrac{\mathcal{G}_{1}\qquad\mathcal{G}_{2}}{\mathcal{R},\,wEw\ \vdash\ \Sigma,\,w:(X,\varphi\supset\psi\Rightarrow Y)}\,(\supset_{L})}{\mathcal{R}\ \vdash\ \Sigma,\,w:(X,\varphi\supset\psi\Rightarrow Y)}\,(ref)\]
Similar to the \((id)\) case above, whereas \((\supset_{L})\) acts on prefixed sequents at \(w\) and \(u\), separated by a single edge \(wEu\), \((ref)\) requires the identification of these two prefixed sequents as \(w:(X,\varphi\supset\psi\Rightarrow Y)\). An investigation of applying \((tra)\) to an instance of \((\supset_{L})\) would exhibit behavior as in the \((id)\) case as well, where the two prefixed sequents are connected via a chain of relational atoms of length greater than one. We could therefore modify the constraints imposed on \((\supset_{L})\), enforcing a _constraint family_ \(\mathcal{C}=(C_{1},C_{2})\) with \(C_{1}\) applied to the left premise and \(C_{2}\) applied to the right premise. Specifically, we let \(C_{i}=(\{w,u\},\{(w,u)\},L_{i})\) such that \(i\in\{1,2\}\) and \(L_{i}(w,u)\in\{\varepsilon,E,EE,\ldots\}\). (NB. Although \(C_{1}=C_{2}\) in this example, we allow constraints within a constraint family to differ in the general setting.) This constraint family can be imposed to define a new rule \((\supset_{L})^{\prime}\), which operates like \((\supset_{L})\), but applies between sequents connected via a chain of relational atoms of length zero or greater. Using this modified rule, we find that the above derivation can be simulated by an application of \((ref)\) followed by an application of \((\supset_{L})^{\prime}\), yielding a type of permutation, as shown below.
\[\mathcal{D}_{1}=\frac{\mathcal{R},\,wEw\ \vdash\ \Sigma,\,w:(X,\varphi\supset\psi\Rightarrow\varphi,Y)}{\mathcal{R}\ \vdash\ \Sigma,\,w:(X,\varphi\supset\psi\Rightarrow\varphi,Y)}\,(ref)\qquad\mathcal{D}_{2}=\frac{\mathcal{R},\,wEw\ \vdash\ \Sigma,\,w:(X,\varphi\supset\psi,\psi\Rightarrow Y)}{\mathcal{R}\ \vdash\ \Sigma,\,w:(X,\varphi\supset\psi,\psi\Rightarrow Y)}\,(ref)\]
\[\frac{\mathcal{D}_{1}\qquad\mathcal{D}_{2}}{\mathcal{R}\ \vdash\ \Sigma,\,w:(X,\varphi\supset\psi\Rightarrow Y)}\,(\supset_{L})^{\prime}\]
If we replace \((id)\) and \((\supset_{L})\) by \((id)^{\prime}\) and \((\supset_{L})^{\prime}\) in \(\mathsf{G3l}^{\prime}\), we find that \((ref)\) and \((tra)\) can be permuted upward in any given derivation and ultimately eliminated [48, 40]. Rules such as \((id)^{\prime}\) and \((\supset_{L})^{\prime}\) have been referred to as _reachability_ (and in certain cases, _propagation_) rules [46, 22], and form a crucial component of our framework. Such rules witness the importance of structural constraints, and as we will show in Section IV, the interplay between constraints, reachability rules, and Horn rules uncovers a wealth of permutation and simulation relationships between classes of inference rule types. Ultimately, in Section V, such rules will play a vital role, helping us identify spaces of polynomially equivalent calculi.
## III Abstract Sequent Calculi
We now present our framework, and introduce the notion of a _generalized sequent_ (or, _g-sequent_ for short). These objects are graphs whose edges are labeled with characters and vertices are labeled with sequents. As we are in a general setting, and are interested in (the interaction between) inference rules that operate on such graphs, we do not discuss the internal structure of sequents, and thus, a _sequent_ is merely a label in our context. The use of such objects is motivated by more expressive sequent systems, such as labeled calculi [19, 20] and nested calculi [17, 18], which can be seen as systems that reason over graphs of sequents, as previously explained.
After the introduction of g-sequents, we define various types of inference rules that typically appear in sequent-style proof systems. Certain classes of inference rules (e.g. reachability rules) require the use of _languages_ in their constraints. We will use restricted versions of semi-Thue systems [49] to generate such languages and to facilitate our proof-theoretic study in subsequent sections.
### _Generalized Sequents_
We let \(\mathsf{S}=\{S_{1},S_{2},S_{3},\ldots\}\) be a countably infinite set of _sequents_, which are denoted by \(S\) and annotated versions thereof. As sequents are taken to be atomic entities in our framework, we do not describe their internal structure. We let \(\mathcal{U}=\{w,u,v,\ldots\}\) be the _universe_, whose entities are denoted by \(w\), \(u\), \(v\), \(\ldots\) (potentially annotated), and which serve as vertices in the various graphs we define. Below, we define _g-sequents_ relative to a non-empty, finite set \(\mathtt{E}=\{a,b,c,\ldots\}\) of _edge types_, which are used to index the edges of a g-sequent.
**Definition 1** (Generalized Sequent).: A _generalized sequent (g-sequent)_ is defined to be a tuple \(\mathcal{G}=(\mathcal{V},\mathcal{E},\mathcal{L})\) such that
* \(\mathcal{V}\subseteq\mathcal{U}\) is a (potentially empty) set of vertices;
* \(\mathcal{E}=\{\mathcal{E}_{a}\mid a\in\mathtt{E}\}\) with \(\mathcal{E}_{a}\subseteq\mathcal{V}\times\mathcal{V}\) for each \(a\in\mathtt{E}\);
* \(\mathcal{L}:\mathcal{V}\rightarrow\mathtt{S}\).
We use \(\mathcal{G}\) (possibly annotated) to denote g-sequents, and let \(\mathfrak{G}(\mathtt{E})\) be the set of all g-sequents defined relative to a set \(\mathtt{E}\) of edge types. For a g-sequent \(\mathcal{G}=(\mathcal{V},\mathcal{E},\mathcal{L})\), we let \(\mathcal{U}(\mathcal{G})=\mathcal{V}\).
As proof systems are concerned with the manipulation of syntactic entities via inference rules, we employ a more standard 'sequent-style' notation for g-sequents in our technical work. In particular, we use the equivalent notation \(\Gamma\vdash\Delta\) to denote a g-sequent \(\mathcal{G}=(\mathcal{V},\mathcal{E},\mathcal{L})\), where the _antecedent_ \(\Gamma\) is a set of _edge atoms_ of the form \(w\mathcal{E}_{a}u\) and the _succedent_ \(\Delta\) is a set of _prefixed sequents_ of the form \(w:S\) such that (1) for each \(a\in\mathtt{E}\), \(w\mathcal{E}_{a}u\in\Gamma\) _iff_ \((w,u)\in\mathcal{E}_{a}\), and (2) \(w:S\in\Delta\) _iff_ \(\mathcal{L}(w)=S\). We define the _size_ of a g-sequent \(\mathcal{G}=\Gamma\ \vdash\ \Delta=(\mathcal{V},\mathcal{E},\mathcal{L})\) to be \(s(\mathcal{G})=|\Gamma|+|\Delta|=|\mathcal{E}|+|\mathcal{V}|\). Also, we let \(\mathtt{PS}=\mathcal{U}\times\mathtt{S}\) denote the set of prefixed sequents.
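To make the notion concrete, the following Python sketch gives one possible lightweight encoding of g-sequents, which we reuse in later sketches; it is an illustration of ours rather than part of the formal development, keeping sequents as opaque strings and storing each edge set \(\mathcal{E}_{a}\) under its edge type.

```python
from dataclasses import dataclass, field

@dataclass
class GSequent:
    """A g-sequent (V, E, L): edge sets E_a indexed by edge type a,
    plus a labeling of vertices by (opaque) sequents."""
    edges: dict = field(default_factory=dict)   # edge type -> set of (w, u)
    labels: dict = field(default_factory=dict)  # vertex -> sequent label

    def vertices(self):
        vs = set(self.labels)
        for pairs in self.edges.values():
            for (w, u) in pairs:
                vs.update((w, u))
        return vs

    def size(self):
        # s(G) = |Gamma| + |Delta| = |E| + |V|
        return sum(len(p) for p in self.edges.values()) + len(self.vertices())

# The labeled sequent R |- Sigma of Section II, with opaque sequents:
g = GSequent(
    edges={"E": {("w", "w"), ("w", "z"), ("w", "v"), ("w", "u"), ("u", "v")}},
    labels={"w": "phi => psi", "z": "psi => chi, xi",
            "u": "chi => ", "v": " => "},
)
assert g.size() == 5 + 4  # five edge atoms, four prefixed sequents
```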
To improve intuition concerning g-sequents and their representations, we provide examples in Figure 3. We also specify a special subclass of g-sequents (whose importance will be discussed in Sections V and VI) referred to as _polytree g-sequents_. A polytree g-sequent is a g-sequent that is (1)
connected, and is (2) free of (un)directed cycles. Observe that the g-sequent shown on the right in Figure 3 is a polytree g-sequent.
### _E-Systems and Propagation_
To control the functionality of certain inference rules, we make use of a restricted version of semi-Thue systems [49] that rewrite single edge types to strings thereof.
Given a set \(\texttt{A}\), we define the set \(\texttt{A}^{*}\) of _strings_ over \(\texttt{A}\) to be the set of finite sequences of elements of \(\texttt{A}\), including the _empty string_ \(\varepsilon\). We denote strings with (possibly annotated) letters \(s\), \(t\), \(r\). A _production rule_ is defined to be an object of the form \(p=s\longrightarrow s^{\prime}\) such that \(s,s^{\prime}\in\texttt{A}^{*}\). We often use \(p\) and annotated versions thereof to denote production rules. A _semi-Thue system_ is defined to be a finite set \(\mathbf{G}\) of production rules. Semi-Thue systems permit us to derive strings via repeated applications of production rules. Given a semi-Thue system \(\mathbf{G}\) over \(\texttt{A}\) and a pair of strings \(t,t^{\prime}\in\texttt{A}^{*}\), we write \(t\longrightarrow_{\mathbf{G}}t^{\prime}\) _iff_ there exists a rule \(s\longrightarrow s^{\prime}\in\mathbf{G}\) such that \(s\) is a sub-string of \(t\), and \(t^{\prime}\) can be obtained from \(t\) by replacing some occurrence of \(s\) in \(t\) by \(s^{\prime}\). A _\(\mathbf{G}\)-derivation_ of a string \(t\in\texttt{A}^{*}\) from a string \(s\in\texttt{A}^{*}\), denoted \(s\longrightarrow^{*}_{\mathbf{G}}t\), is defined inductively: (1) \(s\longrightarrow^{*}_{\mathbf{G}}s\), and (2) if \(s\longrightarrow^{*}_{\mathbf{G}}r\) and \(r\longrightarrow_{\mathbf{G}}t\), then \(s\longrightarrow^{*}_{\mathbf{G}}t\). We define the _length_ of a \(\mathbf{G}\)-derivation of \(t\) from \(s\) to be the minimal number of rule applications used to derive \(t\) from \(s\). The _language_ of a string \(s\in\texttt{A}^{*}\) relative to a semi-Thue system \(\mathbf{G}\) is defined as \(\mathbf{G}(s)=\{t\mid s\longrightarrow^{*}_{\mathbf{G}}t\}\).
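To illustrate the rewriting machinery, the sketch below checks \(s\longrightarrow^{*}_{\mathbf{G}}t\) by breadth-first search over rewrites; since \(\mathbf{G}(s)\) is infinite in general, the search is truncated by the (assumed) bounds `max_len` and `max_steps`, so the sketch only approximates the formal relation.

```python
from collections import deque

def derives(rules, s, t, max_len=8, max_steps=100_000):
    """Bounded check of s -->*_G t for a semi-Thue system G.

    rules: iterable of (lhs, rhs) production rules; strings are tuples
    of symbols. Bounded search, since G(s) is infinite in general.
    """
    seen, queue = {s}, deque([s])
    steps = 0
    while queue and steps < max_steps:
        cur = queue.popleft()
        steps += 1
        for lhs, rhs in rules:
            n = len(lhs)
            for i in range(len(cur) - n + 1):
                if cur[i:i + n] == lhs:                # lhs occurs in cur
                    new = cur[:i] + rhs + cur[i + n:]  # rewrite one occurrence
                    if len(new) <= max_len and new not in seen:
                        seen.add(new)
                        queue.append(new)
    return t in seen

# The E-system {E -> eps, E -> EE} generates the language {eps, E, EE, ...}
G = [(("E",), ()), (("E",), ("E", "E"))]
assert derives(G, ("E",), ("E", "E", "E"))  # E -->* EEE
assert derives(G, ("E",), ())               # E -->* eps
```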
Let \(\overline{\texttt{E}}\) be the set \(\{\overline{a}\mid a\in\texttt{E}\}\). For a production rule of the form \(p=x\longrightarrow y_{1}\cdots y_{n}\) with \(x,y_{1},\ldots,y_{n}\in\texttt{E}\cup\overline{\texttt{E}}\), we define \(\overline{p}=\overline{x}\longrightarrow\overline{y}_{n}\cdots\overline{y}_{1}\), where \(\overline{\overline{z}}=z\) for \(z\in\texttt{E}\cup\overline{\texttt{E}}\). We define an _E-system_ to be a semi-Thue system \(\mathbf{G}\) over \(\texttt{E}\cup\overline{\texttt{E}}\) satisfying: (1) for every rule \(s\longrightarrow t\in\mathbf{G}\) we have \(|s|=1\), and (2) \(s\longrightarrow t\in\mathbf{G}\) _iff_ \(\overline{s}\longrightarrow\overline{t}\in\mathbf{G}\). A _production pair_ from \(\mathbf{G}\) is defined to be a pair \((p,\overline{p})\) such that \(p,\overline{p}\in\mathbf{G}\). We define \(P(\mathbf{G})\) to be the set of all production pairs in \(\mathbf{G}\). For a set \(P\) of production pairs, we let \(\mathbf{G}(P)\) be the set of all rules found in a production pair of \(P\).
Given a g-sequent \(\mathcal{G}=(\mathcal{V},\mathcal{E},\mathcal{L})\), two vertices \(u,w\in\mathcal{V}\), and an element \(a\in\texttt{E}\), we write \(\mathcal{G}\models u\stackrel{a}{\leadsto}w\) _iff_ \((u,w)\in\mathcal{E}_{a}\), and \(\mathcal{G}\models u\stackrel{\overline{a}}{\leadsto}w\) _iff_ \((w,u)\in\mathcal{E}_{a}\). Moreover, we let \(\mathcal{G}\models u\stackrel{\varepsilon}{\leadsto}w\) _iff_ \(u=w\), and given a string \(xs\in(\texttt{E}\cup\overline{\texttt{E}})^{*}\) with \(x\in\texttt{E}\cup\overline{\texttt{E}}\), we inductively define \(\mathcal{G}\models u\stackrel{xs}{\leadsto}w\) as '\(\exists_{v\in\mathcal{V}}\,\mathcal{G}\models u\stackrel{x}{\leadsto}v\) and \(\mathcal{G}\models v\stackrel{s}{\leadsto}w\)'. Additionally, when \(\mathcal{G}\) is clear from the context we may simply write \(u\stackrel{s}{\leadsto}w\) to express \(\mathcal{G}\models u\stackrel{s}{\leadsto}w\). Finally, given a language \(\mathscr{L}\) (of some E-system) we write \(u\stackrel{\mathscr{L}}{\leadsto}w\) _iff_ there is a string \(s\in\mathscr{L}\) such that \(u\stackrel{s}{\leadsto}w\).
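The propagation relation admits a direct recursive implementation over the edge sets of a g-sequent; the sketch below is our own illustration, writing the converse \(\overline{a}\) of an edge type \(a\) as the string `'~a'`.

```python
def leads_to(edges, u, s, w):
    """Check G |= u ~s~> w: edges maps each edge type a to its set E_a of
    vertex pairs; s is a tuple over E and converses ('~a' for bar a)."""
    if not s:                       # u ~eps~> w iff u = w
        return u == w
    x, rest = s[0], s[1:]
    if x.startswith("~"):           # u ~(bar a)~> v iff (v, u) in E_a
        step = {y for (y, z) in edges.get(x[1:], set()) if z == u}
    else:                           # u ~a~> v iff (u, v) in E_a
        step = {z for (y, z) in edges.get(x, set()) if y == u}
    return any(leads_to(edges, v, rest, w) for v in step)

# Edges of the Section II example: R = wEw, wEz, wEv, wEu, uEv
edges = {"E": {("w", "w"), ("w", "z"), ("w", "v"), ("w", "u"), ("u", "v")}}
assert leads_to(edges, "w", ("E", "E"), "v")    # w ~EE~> v (via u)
assert leads_to(edges, "z", ("~E", "E"), "u")   # z ~(bar E)E~> u (via w)
```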
### _Rules and Abstract Systems_
Let us move on to discuss operations over g-sequents. We will define such operations in the format of inference rules as this will let us view our work more clearly in the context of structural proof theory. First, we define _structural constraints_, which are used to specify classes of g-sequents that are permitted to appear in specific inference rules.
**Definition 2** (Structural Constraint).: Let E be a set of edge types. We define a _structural constraint_\(C\) to be a finite labeled tree \((V,E,L)\) such that \(V\subseteq\mathcal{U}\), \(E\subseteq V\times V\), and if \((w,u)\in E\), then \(L(w,u)=\mathbf{G}(a)\) for \(a\in\texttt{E}\) and \(\mathbf{G}\) an E-system.
We define a _constraint family_ to be a finite sequence \(\mathcal{C}=(C_{1},\ldots,C_{n})\) of constraints, and we say that an E-system \(\mathbf{G}\) _participates_ in a constraint \(C=(V,E,L)\) _iff_ there exists an \((w,u)\in E\) and \(a\in\texttt{E}\) such that \(L(w,u)=\mathbf{G}(a)\). Likewise, we say that an E-system \(\mathbf{G}\) _participates_ in a constraint family \(\mathcal{C}\) _iff_ there exists a constraint \(C\) in \(\mathcal{C}\) such that \(\mathbf{G}\) participates in \(C\). We let \(\mathbf{G}(C)=\mathbf{G}_{1}\cup\cdots\cup\mathbf{G}_{n}\) such that \(\mathbf{G}_{1},\ldots,\mathbf{G}_{n}\) are all E-systems participating in \(C\), and define the _size_ of a constraint \(C\) as \(|C|=|\mathbf{G}(C)|\).
**Definition 3** (Constraint Satisfaction).: Let \(C=(V,E,L)\) be a constraint, \(\mathcal{G}=(\mathcal{V},\mathcal{E},\mathcal{L})\) a g-sequent, and suppose \(V\subseteq\mathcal{V}\). We say that \(\mathcal{G}\)_satisfies_\(C\)_iff_ the following condition holds: if \(L(w,u)=\mathbf{G}(a)\), then \(\mathcal{G}\models w\stackrel{{\mathbf{G}(a)}}{{\leadsto}}u\).
**Definition 4** (Sequent Constraint).: We define a _sequent constraint_\(R\) to be an \((n+1)\)-ary relation such that:
\[R\subseteq\underbrace{\texttt{S}\times\cdots\times\texttt{S}}_{n}\times 2^{ \texttt{PS}}.\]
We say that \(S_{1},\ldots,S_{n}\in\texttt{S}\) and \(\Delta\subseteq\texttt{PS}\) _satisfy_ \(R\) _iff_ there exists a tuple \((S_{k_{1}},\ldots,S_{k_{n}},\Delta)\in R\) with \(k_{i},k_{j}\in\{1,\ldots,n\}\) and \(k_{i}\neq k_{j}\) for each \(1\leq i<j\leq n\).
As certain inference rules in the literature are _context dependent_, e.g. the \(L\exists\) rule of Fitting [41], sequent constraints must take the entire succedent \(\Delta\) of a g-sequent into account in inference rule applications. This explains the presence of \(\Delta\) in sequent constraints. Note that we will hitherto refer to structural constraints as _constraints_ more simply, while referring to sequent constraints as _sequent constraints_.
We now specify certain classes of inference rules, which will be collected together in finite sets to define our abstract calculi later on. Note that in the formulation of the inference rules below, for any constraint \(C=(V,E,L)\) parameterizing an inference rule, the vertices in \(V\) are always assumed to occur in the g-sequents of the rule, as specified by Definition 3. For inference rules with multiple premises, we use \(i\in[n]\) to mean \(1\leq i\leq n\).
**Initial Rule.** We define an _initial rule_ to be an operation of the following form:
\[\overline{\Gamma\;\vdash\;\Delta}\;\;i(C,R)\]
satisfying two conditions: (1) the g-sequent \(\Gamma\;\vdash\;\Delta\) satisfies the constraint \(C=(V,E,L)\), and (2) if \(\Gamma\;\vdash\;\Delta:=(\mathcal{V},\mathcal{E},\mathcal{L})\), then \(\mathcal{L}(w_{1}),\ldots,\mathcal{L}(w_{n})\), and \(\Delta^{\prime}=(\Delta\setminus\{w_{i}:\mathcal{L}(w_{i})\;|\;i\in[n]\})\) satisfy \(R\), where \(V=\{w_{1},\ldots,w_{n}\}\). Examples of initial rules include \(\mathrm{init}_{2}\) in [24] and \((\bot L)\) in [19].
**Local Rule.** We define a _local rule_ to have the following form:
\[\frac{\{\;\Gamma\;\vdash\;\Delta,\;w:S_{i}\;\}_{i\in[n]}}{\;\;\Gamma\;\vdash\; \Delta,\;w:S}\;l(R)\]
such that (1) \(S_{1},\ldots,S_{n},S\) and \(\Delta\) satisfy \(R\). Examples of local rules include \((\neg\rightarrow)\) in [17] and CUT in [7].
**Expansion Rule.** We define an _expansion rule_ as:
\[\frac{\;\Gamma,\Sigma\;\vdash\;\Delta,w:S_{1},u:S_{2}}{\;\Gamma\;\vdash\; \Delta,\;w:S}\;e(R)\]
where (1) \(S_{1}\), \(S_{2}\), \(S\), \(\Delta\) satisfy the sequent constraint \(R\), and (2) \(\mathcal{U}(\Gamma\;\vdash\;\Delta)\cap\mathcal{U}(\Sigma)=\{w\}\) with \(\Sigma\in\{w\mathcal{E}_{a}u,u\mathcal{E}_{a}w\;|\;a\in\mathtt{E}\}\). Examples of such rules can be found in [20] (e.g. \(\Box R\)) and in [28].
**Forward Horn Rule.** If \(s=a_{1}\cdots a_{n}\in(\mathtt{E}\cup\overline{\mathtt{E}})^{*}\), then we define \(w\mathcal{E}_{s}u=w\mathcal{E}_{a_{1}}v_{1},\ldots,v_{n-1}\mathcal{E}_{a_{n}}u\), where \(w\mathcal{E}_{\overline{a}}u:=u\mathcal{E}_{a}w\) and \(w\mathcal{E}_{\varepsilon}u=(w=u)\). We define a _forward Horn rule_ to be an operation of the form shown below left, which takes the form shown below right when \(s=\varepsilon\).
\[\frac{\;\Gamma,\,w\mathcal{E}_{s}u,\,w\mathcal{E}_{a}u\;\vdash\;\Delta}{\; \Gamma,\,w\mathcal{E}_{s}u\;\vdash\;\Delta}\;h_{f}\quad\frac{\;\Gamma,\,w \mathcal{E}_{a}w\;\vdash\;\Delta}{\;\Gamma\;\vdash\;\Delta}\;h_{f}\]
For a production rule \(p=a\longrightarrow s\), we define the singleton set \(\mathrm{H}(p,\overline{p})\) to be the set containing the forward Horn rule shown above left, which takes the form shown above right when \(s=\varepsilon\).
**Backward Horn Rule.** We define a _backward Horn rule_ to be an operation of the form shown below left, which takes the form shown below right when \(s=\varepsilon\).
\[\frac{\;\Gamma,\,w\mathcal{E}_{s}u,\,u\mathcal{E}_{a}w\;\vdash\;\Delta}{\; \Gamma,\,w\mathcal{E}_{s}u\;\vdash\;\Delta}\;h_{b}\quad\frac{\;\Gamma,\,w \mathcal{E}_{a}w\;\vdash\;\Delta}{\;\Gamma\;\vdash\;\Delta}\;h_{b}\]
For a production rule \(p=\overline{a}\longrightarrow s\), we define the singleton set \(\mathrm{H}(p,\overline{p})\) to be the set containing the backward Horn rule shown above left, which takes the form shown above right when \(s=\varepsilon\). We define a _Horn rule_ to be either a forward or backward Horn rule, and for a set \(P\) of production pairs, we let \(\mathrm{H}(P)=\bigcup_{(p,\overline{p})\in P}\mathrm{H}(p,\overline{p})\). Examples of Horn rules include \(\chi_{B}\) in [19] and \(\mathrm{(Path)}\) in [21].
We remark that Horn rules encode (universally closed) relational properties of the form \(w\mathcal{E}_{s}u\to w\mathcal{E}_{x}u\) with \(x\in\mathtt{E}\cup\overline{\mathtt{E}}\), covering standard frame conditions for tense logics [46], first-order intuitionistic logics [50], and even agency logics [27].
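For instance, reading \(E\) as the edge type \(a\), the \((ref)\) and \((tra)\) rules of Section II are the forward Horn rules determined by the production rules \(a\longrightarrow\varepsilon\) and \(a\longrightarrow aa\), respectively: taking \(s=\varepsilon\) yields the loop-introducing form of the rule (i.e. \((ref)\)), while taking \(s=aa\) yields a rule whose conclusion contains the path \(w\mathcal{E}_{a}v_{1},v_{1}\mathcal{E}_{a}u\) and whose premise additionally contains the edge \(w\mathcal{E}_{a}u\) (i.e. \((tra)\)). The E-system containing these production rules (and their converses) generates from \(a\) precisely the language \(\{\varepsilon,a,aa,\ldots\}\) appearing in the constraint of \((id)^{\prime}\) in Section II.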
**Reachability and Transmission Rules.** We define a _reachability rule_ to be an operation of the following form:
\[\frac{\{\Gamma\ \vdash\ \Delta,\,w:S_{i},\,u:S_{i}^{\prime}\}_{i\in[n]}}{\Gamma\ \vdash\ \Delta,\,w:S,\,u:S^{\prime}}\ r(\mathcal{C},R)\]
such that (1) \(\mathcal{C}=(C_{1},\ldots,C_{n})\) and each \(C_{i}\) is of the form \((\{w,u\},\{(w,u)\},L_{i})\) with \(L_{i}(w,u)=\mathbf{G}_{i}(a_{i})\) for some \(a_{i}\in\mathtt{E}\) and \(\mathtt{E}\)-system \(\mathbf{G}_{i}\), (2) the \(i^{th}\) premise satisfies \(C_{i}\), and (3) \(S_{1},S_{1}^{\prime},\ldots,S_{n},S_{n}^{\prime},S,S^{\prime}\) and \(\Delta\) satisfy \(R\). Examples of reachability rules include \(\mathrm{Prop}(\mathbf{P})\) in [46] and \((\forall_{l}^{n})\) in [40].
We define a _transmission rule_\(t(\mathcal{C},R)\) (as discussed in the previous section) to be a special instance of a reachability rule satisfying conditions (1) and (2) above, but where for every constraint \(C_{i}\), \(L_{i}(w,u)=\{a\}\) for some \(a\in\mathtt{E}\). Examples of transmission rules include \(\Diamond^{\circ}\) in [43] and Lift in [24].
We refer to any inference rule of the above forms as an _inference rule_ or _rule_, more generally, and use \(\rho\), \(\sigma\), \(\tau\), \(\ldots\) (potentially annotated) to denote them. For those inference rules parameterized by a constraint \(C\) or constraint family \(\mathcal{C}\), we say that an \(\mathtt{E}\)-system _participates in the rule_ _iff_ the \(\mathtt{E}\)-system participates in the constraint \(C\) or constraint family \(\mathcal{C}\). Let us now define the notion of an _abstract calculus_.
**Definition 5** (Abstract Calculus).: Let \(\mathtt{E}\) be a set of edge types. We define an _abstract (sequent) calculus_ (over \(\mathtt{E}\)) to be an ordered pair \(\mathfrak{A}=(\mathfrak{G}(\mathtt{E}),\mathfrak{R})\) with \(\mathfrak{G}(\mathtt{E})\) the set of g-sequents defined relative to \(\mathtt{E}\) and \(\mathfrak{R}\) a finite collection of inference rules. We use \(\mathfrak{A}\), \(\mathfrak{B}\), \(\mathfrak{C}\), \(\ldots\) (occasionally annotated) to denote abstract calculi and define \(\mathbb{S}(\mathtt{E})\) to be the collection of all abstract calculi over \(\mathtt{E}\). Furthermore, for an abstract calculus \(\mathfrak{A}=(\mathfrak{G}(\mathtt{E}_{1}),\mathfrak{R}_{1})\) and \(\mathfrak{B}=(\mathfrak{G}(\mathtt{E}_{2}),\mathfrak{R}_{2})\), we say that \(\mathfrak{B}\) is an _extension_ of \(\mathfrak{A}\), and write \(\mathfrak{A}\subseteq\mathfrak{B}\), _iff_ \(\mathtt{E}_{1}\subseteq\mathtt{E}_{2}\) and \(\mathfrak{R}_{1}\subseteq\mathfrak{R}_{2}\).
Given a set \(\mathrm{R}\) of rules, we define a _derivation_ \(\mathcal{D}\) to be any sequence of applications of rules in \(\mathrm{R}\) to g-sequents in \(\mathfrak{G}(\mathtt{E})\). If a g-sequent \(\mathcal{G}\) occurs in a derivation \(\mathcal{D}\), then we write \(\mathcal{G}\in\mathcal{D}\) to indicate this. The _quantity_ of a derivation \(\mathcal{D}\) is defined as \(q(\mathcal{D})=|\{\mathcal{G}\in\mathfrak{G}(\mathtt{E})\;|\;\mathcal{G}\in\mathcal{D}\}|\) and the _size_ of a derivation \(\mathcal{D}\) is defined to be \(s(\mathcal{D})=\max\{s(\mathcal{G})\;|\;\mathcal{G}\in\mathcal{D}\}\times q(\mathcal{D})\).
A _proof_\(\mathcal{P}\) is defined to be a derivation beginning with applications of initial rules, and a _complete proof_ is any proof ending with a \(\mathtt{g}\)-sequent of the form \(w:S\). Finally, a _polytree proof_ is defined to be a proof such that every \(\mathtt{g}\)-sequent occurring in the proof is a polytree \(\mathtt{g}\)-sequent.
Two abstract calculi \(\mathfrak{A},\mathfrak{B}\in\mathbb{S}(\mathtt{E})\) are defined to be _polynomially equivalent_, written \(\mathfrak{A}\dashv\vdash_{p}\mathfrak{B}\), when a proof \(\mathcal{P}\) of a g-sequent \(\mathcal{G}\) exists in \(\mathfrak{A}\) _iff_ a proof \(\mathcal{P}^{\prime}\) of \(\mathcal{G}\) exists in \(\mathfrak{B}\), and there exist \(\mathrm{PTIME}\) functions \(f\) and \(g\) such that \(f(\mathcal{P})=\mathcal{P}^{\prime}\) and \(g(\mathcal{P}^{\prime})=\mathcal{P}\). We also lift specific set-theoretic operations to abstract calculi: for a set \(\mathrm{R}\) of rules and an abstract calculus \(\mathfrak{A}=(\mathfrak{G}(\mathtt{E}),\mathfrak{R})\), we let \(\mathfrak{A}\cup\mathrm{R}=(\mathfrak{G}(\mathtt{E}),\mathfrak{R}\cup\mathrm{R})\) and \(\mathfrak{A}\setminus\mathrm{R}=(\mathfrak{G}(\mathtt{E}),\mathfrak{R}\setminus\mathrm{R})\). For a set of rules \(\mathrm{R}=\{\rho_{1},\ldots,\rho_{n}\}\), we let \(\mathbf{G}(\mathrm{R})=\mathbf{G}(\rho_{1})\cup\cdots\cup\mathbf{G}(\rho_{n})\), where \(\mathbf{G}(\rho)\) denotes the union of all E-systems participating in the rule \(\rho\). For an abstract calculus \(\mathfrak{A}=(\mathfrak{G}(\mathtt{E}),\mathfrak{R})\), we let \(\mathbf{G}(\mathfrak{A})=\mathbf{G}(\mathfrak{R})\). Similarly, for a set \(\mathrm{R}\) of rules, we define the set of production pairs of \(\mathrm{R}\) as \(P(\mathrm{R})=P(\mathbf{G}(\mathrm{R}))\), and for an abstract calculus \(\mathfrak{A}=(\mathfrak{G}(\mathtt{E}),\mathfrak{R})\), we let \(P(\mathfrak{A})=P(\mathbf{G}(\mathfrak{R}))\).
## IV Permutations and Simulations
We now put our framework to use, studying the relationships between inference rules. A novel feature of our approach concerns the manipulation of constraints with E-systems via two operations, referred to as _absorb_ and _fracture_. The absorb operation increases the expressiveness of constraints, whereas the fracture operation decreases it. Increasing the expressiveness of a constraint in a rule secures new permutability relationships, while decreasing the expressiveness of a constraint yields a weaker version that requires additional rules to recover its original functionality.
### _Permutation and Absorption_
We begin by defining the absorb operation, which 'adds' an E-system to the constraint of a rule. We note that this operation only affects rules parameterized with constraints that associate E-systems with the edges of a constraint, namely, the \(i(C,R)\) and \(r(\mathcal{C},R)\) rules (see Section III). As local, expansion, and Horn rules omit the use of constraints entirely, such inference rules would be unaffected by the absorb operation, and thus, we disregard the absorb operation in these cases.
**Definition 7** (Absorb).: We define the _absorb_ operation between a constraint \(C=(V,E,L)\) and an E-system \(\mathbf{G}\), denoted \(C\oplus\mathbf{G}\), as the constraint \((V,E,L^{\prime})\) such that for each \((w,u)\in E\), \(L^{\prime}(w,u)=(\mathbf{G}^{\prime}\cup\mathbf{G})(a)\) _iff_ \(L(w,u)=\mathbf{G}^{\prime}(a)\). For a constraint family \(\mathcal{C}=(C_{1},\ldots,C_{n})\), \(\mathcal{C}\oplus\mathbf{G}=(C_{1}\oplus\mathbf{G},\ldots,C_{n}\oplus\mathbf{G})\). We lift the absorb operation from constraints to initial and reachability rules as follows: \(i(C,R)\oplus\mathbf{G}=i(C\oplus\mathbf{G},R)\) and \(r(\mathcal{C},R)\oplus\mathbf{G}=r(\mathcal{C}\oplus\mathbf{G},R)\).
We now define the notion of _permutation_ in our setting, clarifying what it means for two rule sets to be permutable with one another.
**Definition 8** (Permutation).: Let \(\mathsf{E}\) be a set of edge types, and \(\mathrm{R}_{1}\) and \(\mathrm{R}_{2}\) be two sets of rules. We say that \(\mathrm{R}_{1}\)_permutes above_\(\mathrm{R}_{2}\), written \(\mathrm{R}_{1}\nearrow\mathrm{R}_{2}\), _iff_ for any \(\mathrm{g}\)-sequents \(\mathcal{G},\mathcal{G}_{1},\ldots,\mathcal{G}_{n}\in\mathfrak{G}(\mathsf{E})\), if \(\mathcal{G}\) can be derived via an application of a rule \(\sigma\in\mathrm{R}_{2}\) followed by an application of a rule \(\rho\in\mathrm{R}_{1}\) from \(\mathcal{G}_{1},\ldots,\mathcal{G}_{n}\), then \(\mathcal{G}\) can be derived via an application of \(\rho\) followed by an application of \(\sigma\) from \(\mathcal{G}_{1},\ldots,\mathcal{G}_{n}\).
If \(\mathrm{R}_{1}\nearrow\mathrm{R}_{2}\) and \(\mathrm{R}_{2}\nearrow\mathrm{R}_{1}\), then we say that \(\mathrm{R}_{1}\) and \(\mathrm{R}_{2}\) are _permutable_ with one another, and write \(\mathrm{R}_{1}\rightleftharpoons\mathrm{R}_{2}\). We note that when \(\mathrm{R}_{1}\) or \(\mathrm{R}_{2}\) is a singleton (i.e. a single rule \(\rho\)), we simply write the rule name \(\rho\) in the notation defined above.
We now present a sequence of permutation results. In particular, we find that Horn rules are permutable with local rules (Theorem 9), Horn rules can always be permuted above expansion rules (Theorem 10), and Horn rules are always permutable with reachability rules, given that such Horn rules have been absorbed into their constraints (Theorem 11). For the remainder of the section, we fix a set \(\mathsf{E}\) of edge types, and consider relationships between rules that participate in abstract calculi of \(\mathbb{S}(\mathsf{E})\), unless specified otherwise.
**Theorem 9**.: _If \(l(R)\) is a local rule and \(\mathrm{H}\) is a set of Horn rules, then \(l(R)\rightleftharpoons\mathrm{H}\)._
**Theorem 10**.: _If \(e(R)\) is an expansion rule and \(\mathrm{H}\) is a set of Horn rules, then \(\mathrm{H}\nearrow e(R)\)._
**Theorem 11**.: _If \(r(\mathcal{C},R)\) is a reachability rule, and \(\mathrm{H}\) is a set of Horn rules, then \(r(\mathcal{C},R)\oplus\mathbf{G}(\mathrm{H})\rightleftharpoons\mathrm{H}\)._
### _Simulation and Fracturing_
We now introduce the fracture operation, which under certain conditions, functions as the inverse of the absorb operation, thus weakening constraints on initial and reachability rules. Subsequently, we define the notion of simulation, and show how weakened variants of initial and reachability rules can be simulated with the help of Horn rules.
**Definition 12** (Fracture).: We define the _fracture_ operation between a constraint \(C=(V,E,L)\) and an E-system \(\mathbf{G}\), denoted \(C\ominus\mathbf{G}\), to be the constraint \((V,E,L^{\prime})\) such that for each \((w,u)\in E\), \((\mathbf{G}^{\prime}\setminus\mathbf{G})(a)=L^{\prime}(w,u)\) _iff_ \(\mathbf{G}^{\prime}(a)=L(w,u)\). For a constraint family \(\mathcal{C}=(C_{1},\ldots,C_{n})\), we let \(\mathcal{C}\ominus\mathbf{G}=(C_{1}\ominus\mathbf{G},\ldots,C_{n}\ominus\mathbf{G})\). We lift the fracture operation from constraints to initial rules and reachability rules as follows: \(i(C,R)\ominus\mathbf{G}=i(C\ominus\mathbf{G},R)\) and \(r(\mathcal{C},R)\ominus\mathbf{G}=r(\mathcal{C}\ominus\mathbf{G},R)\).
It is straightforward to verify that absorbing a grammar into a transmission rule (discussed in Sections II and III) yields a reachability rule, and that 'fracturing' the grammar \(\mathbf{G}(r(\mathcal{C},R))\) from a reachability rule \(r(\mathcal{C},R)\) gives a transmission rule.
**Proposition 13**.: _Let \(t(\mathcal{C},R)\) be a transmission rule, \(r(\mathcal{C},R)\) be a reachability rule, and \(\mathrm{H}\) be a non-empty set of Horn rules._
1. \(t(\mathcal{C},R)\oplus\mathbf{G}(\mathrm{H})\) _is a reachability rule;_
2. \(r(\mathcal{C},R)\ominus\mathbf{G}(r(\mathcal{C},R))\) _is a transmission rule._
Moreover, one can confirm that under certain conditions, the absorb and fracture operations are inverses of one another, and exhibit the following properties.
**Lemma 14**.: _Let \(\rho\in\{i(C,R),r(\mathcal{C},R)\}\) and \(\mathrm{H}\) be a set of Horn rules. Then,_
1. \((\rho\oplus\mathbf{G}(\mathrm{H}))\ominus\mathbf{G}(\mathrm{H})=\rho\ominus \mathbf{G}(\mathrm{H})\)_;_
2. _if_ \(\mathbf{G}(\mathrm{H})\cap\mathbf{G}(\rho)=\emptyset\)_, then_ \((\rho\oplus\mathbf{G}(\mathrm{H}))\ominus\mathbf{G}(\mathrm{H})=\rho\)_;_
3. \((\rho\ominus\mathbf{G}(\mathrm{H}))\oplus\mathbf{G}(\mathrm{H})=\rho\oplus \mathbf{G}(\mathrm{H})\)_;_
4. _if_ \(\mathbf{G}(\mathrm{H})\subseteq\mathbf{G}(\rho)\)_, then_ \((\rho\ominus\mathbf{G}(\mathrm{H}))\oplus\mathbf{G}(\mathrm{H})=\rho\)_._
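Viewing each constraint label as a set of production rules, absorb and fracture act edge-wise as set union and set difference; the toy Python sketch below (an illustration of ours, modeling E-systems simply as sets of production rules) checks the first three identities of Lemma 14 on a small example.

```python
# A constraint label is modeled as a set of production rules (an E-system);
# absorb (+) is union and fracture (-) is set difference, applied edge-wise.
def absorb(label, G):
    return label | G

def fracture(label, G):
    return label - G

rho = {("E", ("E", "E"))}   # E-system participating in a rule's constraint
H = {("E", ())}             # E-system of a set of Horn rules

# Lemma 14(1): (rho + G(H)) - G(H) = rho - G(H)
assert fracture(absorb(rho, H), H) == fracture(rho, H)
# Lemma 14(2): if G(H) and G(rho) are disjoint, (rho + G(H)) - G(H) = rho
assert not (H & rho) and fracture(absorb(rho, H), H) == rho
# Lemma 14(3): (rho - G(H)) + G(H) = rho + G(H)
assert absorb(fracture(rho, H), H) == absorb(rho, H)
```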
We now define the simulation and bi-simulation relation between rule sets and abstract calculi. In the sequel, we state a variety of useful properties concerning such relations.
**Definition 15** (Simulation).: Let \(\mathsf{E}\) be a set of edge types, and \(\mathrm{R}_{1}\) and \(\mathrm{R}_{2}\) two sets of rules. We say that \(\mathrm{R}_{2}\)_simulates_\(\mathrm{R}_{1}\), written \(\mathrm{R}_{1}\preceq\mathrm{R}_{2}\), _iff_ for any \(\mathrm{g}\)-sequents \(\mathcal{G},\mathcal{G}_{1},\ldots,\mathcal{G}_{n}\in\mathfrak{G}(\mathsf{E})\)
if \(\mathcal{G}\) is derivable from \(\mathcal{G}_{1},\ldots,\mathcal{G}_{n}\) with \(\mathrm{R}_{1}\), then \(\mathcal{G}\) is derivable from \(\mathcal{G}_{1},\ldots,\mathcal{G}_{n}\) with \(\mathrm{R}_{2}\). If \(\mathrm{R}_{1}\preceq\mathrm{R}_{2}\) and \(\mathrm{R}_{2}\preceq\mathrm{R}_{1}\), then we say that \(\mathrm{R}_{1}\) and \(\mathrm{R}_{2}\) _bi-simulate_ each other and write \(\mathrm{R}_{1}\simeq\mathrm{R}_{2}\).
Let \(\mathfrak{A}_{1}=(\mathfrak{G}(\mathtt{E}),\mathfrak{R}_{1})\) and \(\mathfrak{A}_{2}=(\mathfrak{G}(\mathtt{E}),\mathfrak{R}_{2})\) be two abstract calculi. We say that \(\mathfrak{A}_{2}\) _simulates_ \(\mathfrak{A}_{1}\), and write \(\mathfrak{A}_{1}\preceq\mathfrak{A}_{2}\), _iff_ \(\mathfrak{R}_{1}\preceq\mathfrak{R}_{2}\). We say that \(\mathfrak{A}_{1}\) _bi-simulates_ \(\mathfrak{A}_{2}\), and write \(\mathfrak{A}_{1}\simeq\mathfrak{A}_{2}\), _iff_ \(\mathfrak{A}_{1}\preceq\mathfrak{A}_{2}\) and \(\mathfrak{A}_{2}\preceq\mathfrak{A}_{1}\).
It is a basic exercise to establish the following properties:
**Lemma 16**.: _Let \(\mathsf{E}\) and \(\mathsf{E}^{\prime}\) be sets of edge types, \(\mathfrak{A}\in\mathbb{S}(\mathsf{E})\) with \(\mathfrak{A}=(\mathfrak{G}(\mathsf{E}),\mathfrak{R})\), and \(\mathfrak{B}\in\mathbb{S}(\mathsf{E}^{\prime})\). Then,_
1. \(\preceq\) _is a pre-order over_ \(\mathbb{S}(\mathsf{E})\)_;_
2. \(\simeq\) _is an equivalence relation over_ \(\mathbb{S}(\mathsf{E})\)_;_
3. _if_ \(\mathrm{R}_{1}\subseteq\mathfrak{R}\) _and_ \(\mathrm{R}_{1}\preceq\mathrm{R}_{2}\) _with_ \(\mathrm{R}_{2}\) _a set of rules, then_ \(\mathfrak{A}\preceq(\mathfrak{G}(\mathsf{E}),(\mathfrak{R}\setminus\mathrm{R} _{1})\cup\mathrm{R}_{2})\)_;_
4. _if_ \(\mathfrak{A}\subseteq\mathfrak{B}\)_, then_ \(\mathfrak{A}\preceq\mathfrak{B}\)_._
To discuss simulations and properties thereof, we require the use of _ordered rule sets_. In essence, an ordered rule set is a set \(\mathrm{R}=\mathrm{R}_{1}\cup\cdots\cup\mathrm{R}_{n}\) of rules such that any derivation constructed with \(\mathrm{R}\) must proceed in a certain order: first at least \(i_{1}\in\{0,1\}\) rule applications from \(\mathrm{R}_{1}\), followed by at least \(i_{2}\) rule applications from \(\mathrm{R}_{2}\), etc.
**Definition 17** (Ordered Rule Sets).: Let \(\mathrm{R}_{1},\ldots,\mathrm{R}_{n}\) be sets of rules. We define \(\triangleright^{i_{1}}\mathrm{R}_{1}\cdots\triangleright^{i_{n}}\mathrm{R}_{n}\) with \(i_{j}\in\{0,1\}\) to be an _ordered rule set_ such that any derivation constructed with the rules in \(\mathrm{R}_{1}\cup\cdots\cup\mathrm{R}_{n}\) must proceed by first applying \(i_{1}\) or more applications of rules from \(\mathrm{R}_{1}\), followed by \(i_{2}\) or more applications of rules from \(\mathrm{R}_{2}\), etc. When a set of rules in an ordered rule set is a singleton \(\{\rho\}\), we will simply write \(\rho\).
The following lemma is straightforward to establish:
**Lemma 18**.: _Let \(\mathrm{R}_{1}\) and \(\mathrm{R}_{2}\) be two set of rules. Then,_
* _if_ \(\mathrm{R}_{1}\nearrow\mathrm{R}_{2}\)_, then_ \((\triangleright^{1}\mathrm{R}_{2}\triangleright^{1}\mathrm{R}_{1})\preceq( \triangleright^{1}\mathrm{R}_{1}\triangleright^{1}\mathrm{R}_{2})\)_;_
* _for_ \(i\in\{0,1\}\)_,_ \(\triangleright^{1}\mathrm{R}_{1}\triangleright^{i}\mathrm{R}_{2}\preceq\triangleright^{0}\mathrm{R}_{1}\triangleright^{i}\mathrm{R}_{2}\)_;_
* _for_ \(i\in\{0,1\}\)_,_ \(\triangleright^{i}\mathrm{R}_{1}\triangleright^{1}\mathrm{R}_{2}\preceq\triangleright^ {i}\mathrm{R}_{1}\triangleright^{0}\mathrm{R}_{2}\)_._
As stated in the lemma below, we find that applying the absorb operation to an initial or reachability rule \(\rho\)_strengthens_ the rule in the sense that \(\rho\oplus\mathbf{G}(\mathrm{H})\) can simulate \(\rho\) for a set \(\mathrm{H}\) of Horn rules, and conversely, we find that the fracture operation _weakens_ an initial or reachability rule. Moreover, the absorb operation satisfies a monotonicity property relative to the subset relation over Horn rules, while the fracture operation satisfies an antitonicity property, as expressed by the fourth claim of the following lemma.
**Lemma 19**.: _Let \(\mathrm{H}\) and \(\mathrm{H}^{\prime}\) be two sets of Horn rules, \(\mathrm{H}\subseteq\mathrm{H}^{\prime}\), \(\rho\in\{i(C,R),r(\mathcal{C},R)\}\), and \(i,j\in\{0,1\}\). Then,_
1. \(\rho\preceq\rho\oplus\mathbf{G}(\mathrm{H})\) _and_ \(\rho\ominus\mathbf{G}(\mathrm{H})\preceq\rho\)_;_
2. \((\triangleright^{i}\rho\triangleright^{j}\mathrm{H})\preceq(\triangleright^{i}\,\rho\oplus\mathbf{G}(\mathrm{H})\triangleright^{j}\mathrm{H})\)_;_
3. \((\triangleright^{i}\rho\ominus\mathbf{G}(\mathrm{H})\triangleright^{j}\mathrm{H}) \preceq(\triangleright^{i}\rho\triangleright^{j}\mathrm{H})\)_;_
4. \(\rho\oplus\mathbf{G}(\mathrm{H})\preceq\rho\oplus\mathbf{G}(\mathrm{H}^{ \prime})\) _and_ \(\rho\ominus\mathbf{G}(\mathrm{H}^{\prime})\preceq\rho\ominus\mathbf{G}( \mathrm{H})\)_._
The following theorem is crucial for our generic algorithms in Section V. The theorem states that any derivation consisting of initial rules followed by applications of Horn rules can be simulated by initial rules under absorption.
**Theorem 20**.: _If \(i(C,R)\) is an initial rule and \(\mathrm{H}\) is a set of Horn rules, then \((\triangleright^{1}i(C,R)\triangleright^{0}\mathrm{H})\preceq i(C,R)\oplus\mathbf{G} (\mathrm{H})\)._
Let us turn our attention toward investigating simulations in the presence of fracturing. When fracturing an initial or reachability rule \(\rho\) with an \(\mathsf{E}\)-system \(\mathbf{G}(\mathrm{H})\) for \(\mathrm{H}\) a set of Horn rules, we find that \(\rho\) can be simulated by \(\rho\ominus\mathbf{G}(\mathrm{H})\) along with applications of other inference rules. Yet, it so happens that _dependencies_ between Horn rules in \(\mathrm{H}\) are of importance when considering simulations in this context. Intuitively, one Horn rule \(h_{1}\) depends on another Horn rule \(h_{2}\) when an application of \(h_{2}\) produces a \(\mathsf{g}\)-sequent \(\mathcal{G}\) such that \(h_{1}\) becomes applicable to it. For the interested reader, we have provided an example of Horn rules and dependencies (which are captured by the following notion of a _dependency graph_) in Example 49 of Section V. We now define these dependencies:
**Definition 21** (Dependency Graph).: Let \(\mathbf{G}\) be an E-system with \((p,\overline{p})\) and \((p^{\prime},\overline{p}^{\prime})\) distinct production pairs in \(P(\mathbf{G})\) such that \(p=x\longrightarrow s\) and \(p^{\prime}=y\longrightarrow t\) with \(x,y\in\mathtt{E}\cup\overline{\mathtt{E}}\). We say that \((p^{\prime},\overline{p}^{\prime})\) _depends on_ \((p,\overline{p})\), written \((p,\overline{p})\sqsubset(p^{\prime},\overline{p}^{\prime})\), _iff_ \(s\) or \(\overline{s}\) is of the form \(s_{1}ys_{2}\). We define the _dependency graph_ of \(\mathbf{G}\) to be the pair \(\mathsf{DG}(\mathbf{G})=(P,\sqsubseteq)\) such that \(P=P(\mathbf{G})\) and \(\sqsubseteq\) is the reflexive-transitive closure of \(\sqsubset\).
For a set \(\mathrm{H}\) of Horn rules, we define \(\mathsf{DG}(\mathrm{H})=(\mathrm{H},\sqsubseteq^{\prime})\) such that for \(h,h^{\prime}\in\mathrm{H}\), \(h\sqsubseteq^{\prime}h^{\prime}\)_iff_ for \((p,\overline{p})\in P(h)\) and \((p^{\prime},\overline{p}^{\prime})\in P(h^{\prime})\), \((p,\overline{p})\sqsubseteq(p^{\prime},\overline{p}^{\prime})\) in \(\mathsf{DG}(\mathbf{G}(\mathrm{H}))=(P,\sqsubseteq)\). To capture dependency graphs over \(\mathsf{E}\)-systems and Horn rules in a uniform notation, we may denote them by \(\mathsf{DG}=(V,\sqsubseteq)\).
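The dependency graph of an E-system is directly computable; the sketch below (ours, representing each production pair by its forward rule \(p=(x,s)\)) builds the relation \(\sqsubset\) and closes it reflexively and transitively by a naive fixpoint.

```python
def bar(s):
    """Converse of a string over E and its converses ('~a' for bar a)."""
    flip = lambda x: x[1:] if x.startswith("~") else "~" + x
    return tuple(flip(x) for x in reversed(s))

def dependency_graph(pairs):
    """Reflexive-transitive closure of the dependency relation: (p, q) in
    the result means q depends on p, i.e. the right-hand side s of p (or
    its converse) contains the left-hand symbol of q."""
    rel = {(p, p) for p in pairs}                        # reflexivity
    rel |= {(p, q) for p in pairs for q in pairs
            if p != q and (q[0] in p[1] or q[0] in bar(p[1]))}
    changed = True
    while changed:                                       # transitive closure
        changed = False
        for (p, q) in list(rel):
            for (q2, r) in list(rel):
                if q == q2 and (p, r) not in rel:
                    rel.add((p, r))
                    changed = True
    return rel

ref, tra = ("E", ()), ("E", ("E", "E"))
dg = dependency_graph({ref, tra})
assert (tra, ref) in dg and (tra, tra) in dg  # ref depends on tra
```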
Of critical importance in dependency graphs is the notion of a _fracturable set_. In essence, for a dependency graph \(\mathsf{DG}=(V,\sqsubseteq)\), a fracturable set is a set \(V^{\prime}\subseteq V\) of vertices such that every vertex \(v\in V^{\prime}\) 'sees' only vertices in \(V^{\prime}\).
**Definition 22** (Fracturable Set).: Given a dependency graph \(\mathsf{DG}=(V,\sqsubseteq)\), we say that a subset \(V^{\prime}\) of \(V\) is _fracturable_ _iff_ every vertex \(v\in V^{\prime}\) 'sees' only vertices in \(V^{\prime}\).
**Definition 25** (Saturation).: Let \(\mathrm{H}\) be a set of Horn rules with \(h\in\mathrm{H}\). If \(h\) is of the form shown below left, we define the _inverse_\(\overline{h}\) of \(h\) to be the rule of the form shown below right:
\[\frac{\mathcal{G}}{\mathcal{G}^{\prime}}\ h\ \ \ \ \ \ \ \ \frac{\mathcal{G}^{\prime}}{ \mathcal{G}}\ \overline{h}\]
We write \(\overline{h}(\mathcal{G})=\mathcal{G}^{\prime}\) to mean that an application of \(\overline{h}\) to \(\mathcal{G}\) produces \(\mathcal{G}^{\prime}\). A g-sequent \(\mathcal{G}\) is defined to be _\(\overline{h}\)-saturated iff_ every application of \(\overline{h}\) to \(\mathcal{G}\) produces \(\mathcal{G}\). We say that \(\overline{h}(\mathcal{G})\) is _permissible iff_\(\overline{h}(\mathcal{G})\) produces a g-sequent \(\mathcal{G}^{\prime}\neq\mathcal{G}\). We define \(\overline{\mathrm{H}}=\{\overline{h}\ |\ h\in\mathrm{H}\}\) and define a g-sequent \(\mathcal{G}\) to be \(\overline{\mathrm{H}}\)-saturated _iff_ it is \(\overline{h}\)-saturated for every \(\overline{h}\in\overline{\mathrm{H}}\).
We now provide a sequence of results that will ultimately be used to show under what conditions initial and reachability rules can be simulated with 'weaker' variants along with applications of Horn rules. In what follows, we let \(\overline{\mathrm{H}}(\mathcal{G})\) denote the \(\overline{\mathrm{H}}\)-saturated g-sequent obtained by repeatedly applying all permissible applications of rules in \(\overline{\mathrm{H}}\) to \(\mathcal{G}\).
**Lemma 26**.: _If \(\mathcal{G}\) is a g-sequent and \(\mathrm{H}\) is a finite set of Horn rules, then \(\overline{\mathrm{H}}(\mathcal{G})\) is computable in \(\mathrm{PTIME}\) and is \(\overline{\mathrm{H}}\)-saturated._
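To make the saturation procedure of Definition 25, and the fixpoint computation behind Lemma 26, concrete, here is a minimal sketch in which the relational part of a g-sequent is modelled as a set of atoms and the inverse Horn rules mirror those of Example 49; this set-based encoding is our own simplification, not the paper's formalism.

```python
# Atoms are triples (w, e, u), read "w E_e u".

def h1_bar(atoms, labels):
    # from w E_c u and u E_c v, add w E_a v
    return {(w, "a", v)
            for (w, x, u) in atoms if x == "c"
            for (u2, y, v) in atoms if y == "c" and u2 == u}

def h2_bar(atoms, labels):
    # add w E_a w for every label w
    return {(w, "a", w) for w in labels}

def h3_bar(atoms, labels):
    # from w E_a u, add u E_b w
    return {(u, "b", w) for (w, x, u) in atoms if x == "a"}

def saturate(atoms, labels, rules=(h1_bar, h2_bar, h3_bar)):
    """Repeat all permissible inverse-rule applications until no rule
    changes the g-sequent, i.e. until it is H̄-saturated."""
    atoms = set(atoms)
    while True:
        new = set().union(*(r(atoms, labels) for r in rules)) - atoms
        if not new:
            return atoms
        atoms |= new

print(sorted(saturate({("w", "c", "u"), ("u", "c", "v")}, {"w", "u", "v"})))
```

Since each round adds at least one atom from a polynomially bounded universe, the loop runs in polynomial time, in line with Lemma 26.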
**Observation 27**.: _Let \(\mathrm{H}\) be a set of Horn rules, \(\mathcal{G}=\Gamma\ \vdash\ \Delta\) be a g-sequent, and \(\overline{\mathrm{H}}(\mathcal{G})=\Gamma^{\prime}\ \vdash\ \Delta^{\prime}\). Then, \(\mathcal{U}(\mathcal{G})=\mathcal{U}(\overline{\mathrm{H}}(\mathcal{G}))\), \(\Gamma\subseteq\Gamma^{\prime}\), and \(\Delta=\Delta^{\prime}\)._
**Lemma 28**.: _Let \(\mathrm{H}\) be a set of Horn rules, \(\mathcal{G}\) be a g-sequent, and \(\mathbf{G}=\mathbf{G}(\mathrm{H})\). For any \(s\in(\mathsf{E}\cup\overline{\mathsf{E}})^{*}\) and \(u,w\in\mathcal{U}(\mathcal{G})\), if \(\mathcal{G}\models w\prec_{s}u\), then \(\overline{\mathrm{H}}(\mathcal{G})\models w\prec_{s}u\)._
Transforming proofs 'down' the partial order of an upward space \(\mathbb{U}(\mathfrak{A})\) into a proof in \(\mathfrak{A}\) requires an additional condition, namely, the initial and reachability rules of \(\mathfrak{A}\) must satisfy a certain set of equations. This gives rise to the notion of an _explicit calculus_. Intuitively, an explicit calculus is one where all initial and reachability rules are parameterized with minimal constraints, i.e. constraints \(C\) such that \(|C|=0\). This has the effect that if a proof utilizes Horn rules, then such rules cannot be eliminated from the proof, as they cannot be 'mimicked' by other rules of the calculus.
**Definition 38** (Explicit Calculus).: Let \(\mathfrak{A}=(\mathfrak{G}(\mathsf{E}),\mathfrak{R})\) be an abstract calculus. We define \(\mathfrak{A}\) to be _explicit iff_ for every initial \(i(C,R)\) and reachability rule \(r(\mathcal{C},R)\) in \(\mathfrak{R}\), the following hold:
\[i(C,R)\ominus\mathbf{G}(\mathfrak{A})=i(C,R)\text{ and }r(\mathcal{C},R) \ominus\mathbf{G}(\mathfrak{A})=r(\mathcal{C},R).\]
**Theorem 39** (Down the Upward Space).: _Let \(\mathfrak{A}=(\mathfrak{G}(\mathsf{E}),\mathfrak{R})\) be an explicit calculus with \(\mathbb{U}(\mathfrak{A})=(\mathbf{S},\leqslant)\) its upward space. For any abstract calculus \(\mathfrak{B}\in\mathbf{S}\) and proof \(\mathcal{P}\) of a g-sequent \(\mathcal{G}\) in \(\mathfrak{B}\), there exists a proof \(\mathcal{P}^{\prime}\) of \(\mathcal{G}\) in \(\mathfrak{A}\) such that \(\mathcal{P}^{\prime}\) is computable from \(\mathcal{P}\) in \(\mathrm{PTIME}\) with \(s(\mathcal{P}^{\prime})=\mathcal{O}(s(\mathcal{P})^{2})\)._
### _Downward Spaces/Lattices and Implicit Calculi_
Above, we investigated the upward spaces of calculi obtained by strengthening initial and reachability rules via absorption. Conversely, we obtain _downward spaces_ by weakening initial and reachability rules via fracturing.
**Definition 40** (Downward Space).: Let \(\mathfrak{A}=(\mathfrak{G}(\mathsf{E}),\mathfrak{R})\) be an abstract calculus. We define the _downward space_\(\mathbb{D}(\mathfrak{A})=(\mathbf{S},\leqslant)\) inductively as follows: (1) \(\mathfrak{A}\in\mathbf{S}\), (2) if \(\mathfrak{B}\in\mathbf{S}\) with \(\mathrm{H}:=\mathrm{H}(\mathfrak{B})\) and \(\mathsf{DG}(\mathbf{G})=(P,\sqsubseteq)\) the dependency graph of \(\mathbf{G}:=\mathbf{G}(\mathfrak{B}\setminus\mathrm{H})\), then for a fracturable subset \(P^{\prime}\subseteq P\), \(\mathfrak{C}=\big{(}(\mathfrak{B}\ominus\mathbf{G}(P^{\prime}))\cup\mathrm{H }(\mathbf{G}(P^{\prime}))\big{)}\in\mathbf{S}\) and \(\mathfrak{C}\leqslant\mathfrak{B}\).
As with upward spaces, we obtain that downward spaces are partially ordered sets, which can be viewed as lattices.
**Theorem 41**.: _Let \(\mathfrak{A}=(\mathfrak{G}(\mathsf{E}),\mathfrak{R})\) be an abstract calculus with \(\mathbb{D}(\mathfrak{A})=(\mathbf{S},\leqslant)\) its downward space. Then, the \(\leqslant\) relation is a connected partial order. Moreover, for \(\mathfrak{B},\mathfrak{C}\in\mathbf{S}\), if we take \(\mathfrak{B}\wedge\mathfrak{C}=\inf\{\mathfrak{B},\mathfrak{C}\}\) and \(\mathfrak{B}\vee\mathfrak{C}=\sup\{\mathfrak{B},\mathfrak{C}\}\) under \(\leqslant\), then \((\mathbf{S},\wedge,\vee)\) is a complete lattice with \(\mathfrak{A}=\top\)._
Given an abstract calculus \(\mathfrak{A}\), we can translate proofs 'down' the partial order of \(\mathbb{D}(\mathfrak{A})\) in \(\mathrm{PTIME}\), similar to Theorem 39.
**Theorem 42** (Down the Downward Space).: _Let \(\mathfrak{A}=(\mathfrak{G}(\mathsf{E}),\mathfrak{R})\) be a calculus with \(\mathbb{D}(\mathfrak{A})=(\mathbf{S},\leqslant)\) its downward space. For any abstract calculus \(\mathfrak{B}\in\mathbf{S}\) and proof \(\mathcal{P}\) of a g-sequent \(\mathcal{G}\) in \(\mathfrak{A}\), there exists a proof \(\mathcal{P}^{\prime}\) of \(\mathcal{G}\) in \(\mathfrak{B}\) such that \(\mathcal{P}^{\prime}\) is computable from \(\mathcal{P}\) in \(\mathrm{PTIME}\) with \(s(\mathcal{P}^{\prime})=\mathcal{O}(s(\mathcal{P})^{2})\)._
We find that transforming proofs 'up' the partial order of a downward space \(\mathbb{D}(\mathfrak{A})\) requires that \(\mathfrak{A}\) is of a specific form. Namely, we find that such proofs can be transformed when \(\mathfrak{A}\) is an _implicit calculus_, defined below.
**Definition 43** (Implicit Calculus).: Let \(\mathfrak{A}=(\mathfrak{G}(\mathsf{E}),\mathfrak{R})\) be an abstract calculus. We define \(\mathfrak{A}\) to be _implicit iff_\(\mathfrak{A}\) does not contain any Horn rules, and every initial rule \(i(C,R)\) and reachability rule \(r(\mathcal{C},R)\) in \(\mathfrak{R}\) satisfies the following equations:
\[i(C,R)\oplus\mathbf{G}(\mathfrak{A})=i(C,R)\text{ and }r(\mathcal{C},R)\oplus \mathbf{G}(\mathfrak{A})=r(\mathcal{C},R).\]
**Theorem 44** (Up the Downward Space).: _Let \(\mathfrak{A}=(\mathfrak{G}(\mathsf{E}),\mathfrak{R})\) be an implicit calculus with \(\mathbb{D}(\mathfrak{A})=(\mathbf{S},\leqslant)\) its downward space. For any abstract calculus \(\mathfrak{B}\in\mathbf{S}\) and proof \(\mathcal{P}\) of a g-sequent \(\mathcal{G}\) in \(\mathfrak{B}\), there exists a proof \(\mathcal{P}^{\prime}\) of \(\mathcal{G}\) in \(\mathfrak{A}\) such that \(\mathcal{P}^{\prime}\) is computable from \(\mathcal{P}\) in \(\mathrm{PTIME}\) with \(s(\mathcal{P}^{\prime})\leq s(\mathcal{P})\)._
Beyond their use in the theorem above, another interesting feature of implicit calculi is that every _complete proof_ employs only g-sequents of a polytree shape.3 Such calculi are reminiscent of nested sequent systems [17, 18], and later on, we will identify a sizable number of nested sequent systems appearing in the literature with implicit calculi.
Footnote 3: See Section III for the definition of complete proofs and polytree g-sequents.
**Theorem 45**.: _If \(\mathfrak{A}=(\mathfrak{G}(\mathsf{E}),\mathfrak{R})\) is an implicit calculus, then every complete proof is a polytree proof._
### _Generic Calculus Transformations_
An interesting and new discovery that arises from our framework is that certain abstract calculi participate in lattices of polynomially equivalent calculi. These lattices can be identified by transforming an explicit calculus into its upward space, or by transforming an implicit calculus into its downward space. We provide two calculus transformation algorithms Implicate and Explicate, which take an abstract calculus as input and compute its upward or downward space, effectively generating a lattice of polynomially equivalent calculi.

Fig. 4: The Implicate algorithm takes an explicit calculus as input and computes its upward space.

Fig. 5: The Explicate algorithm takes an implicit calculus as input and computes its downward space.

To state these algorithms we employ the following notation:
**Definition 46**.: Let \(\mathfrak{A}\) be an abstract calculus with \(\mathrm{H}\) a set of Horn rules and \(P\) a set of production pairs. We define:
\[f(\mathfrak{A},\mathrm{H}):=(\mathfrak{A}\oplus\mathbf{G}(\mathrm{H}))\setminus \mathrm{H}\quad g(\mathfrak{A},P):=(\mathfrak{A}\ominus\mathbf{G}(P))\cup \mathrm{H}(P)\]
Our first calculus transformation algorithm Implicate is presented in Figure 4. The algorithm is named 'Implicate' as it successively computes better approximations of the implicit calculus \(\mathfrak{B}\) that is polynomially equivalent to the input. It is straightforward to verify that Implicate terminates as every execution of the while-loop strictly reduces the finite set of Horn rules associated with all \(\leqslant\)-maximal calculi. The following relies on Theorems 37 and 39, and we remark that \(\top\) can be obtained from \(\mathfrak{A}\) in \(\mathrm{PTIME}\) by computing \(\top:=f(\mathfrak{A},\mathrm{H}(\mathfrak{A}))\).
**Theorem 47**.: _Let \(\mathfrak{A}=(\mathfrak{G}(\mathsf{E}),\mathfrak{R})\) be an explicit calculus with \(\mathbb{U}(\mathfrak{A})=(\mathbf{S},\leqslant)\) its upward space. Then,_
1. \(\mathbb{U}(\mathfrak{A})=\textsc{Implicate}(\mathfrak{A})\)_;_
2. \(\mathbb{U}(\mathfrak{A})\) _is computable from_ \(\mathfrak{A}\) _in_ \(\textsc{EXPTIME}\)_;_
3. _If_ \(\mathfrak{B},\mathfrak{C}\in\mathbf{S}\)_, then_ \(\mathfrak{B}\equiv_{p}\mathfrak{C}\)_;_
4. \(\top\) _is computable from_ \(\mathfrak{A}\) _in_ \(\mathrm{PTIME}\)_;_
5. \(\top\) _is the only implicit calculus in_ \(\mathbb{U}(\mathfrak{A})\)_._
Our second calculus transformation algorithm Explicate is displayed in Figure 5. The algorithm is named 'Explicate' since it successively computes better approximations of the explicit calculus polynomially equivalent to the input. We note that Explicate depends on sets of production pairs, and observe that Explicate terminates since each grammar \(\mathbf{G}(\mathfrak{B}\setminus\mathrm{H}(\mathfrak{B}))\) strictly decreases for each \(\leqslant\)-minimal calculus \(\mathfrak{B}\) after each execution of the while-loop.
The following theorem is similar to Theorem 47, but relies on Theorems 42 and 44 to establish the polynomial equivalence of all abstract calculi in the downward space. Moreover, we remark that \(\perp\) can be obtained from the input \(\mathfrak{A}\) in \(\mathrm{PTIME}\) by computing \(\perp:=g(\mathfrak{A},P(\mathfrak{A}))\).
**Theorem 48**.: _Let \(\mathfrak{A}=(\mathfrak{G}(\mathsf{E}),\mathfrak{R})\) be an implicit calculus with \(\mathbb{D}(\mathfrak{A})=(\mathbf{S},\leqslant)\) its downward space. Then,_
1. \(\mathbb{D}(\mathfrak{A})=\textsc{Explicate}(\mathfrak{A})\)_;_
2. \(\mathbb{D}(\mathfrak{A})\) _is computable from_ \(\mathfrak{A}\) _in_ \(\textsc{EXPTIME}\)_;_
3. _If_ \(\mathfrak{B},\mathfrak{C}\in\mathbf{S}\)_, then_ \(\mathfrak{B}\equiv_{p}\mathfrak{C}\)_;_
4. \(\perp\) _is computable from_ \(\mathfrak{A}\) _in_ \(\mathrm{PTIME}\)_;_
5. \(\perp\) _is the only explicit calculus in_ \(\mathbb{D}(\mathfrak{A})\)_._
**Example 49**.: To demonstrate the functionality of our two calculus transformation algorithms, we consider an example with an explicit calculus \(\mathfrak{A}\) consisting of the following rules, and where \(|C|=0\).
\[\begin{array}{cc}\dfrac{}{\Gamma\ \vdash\ \Delta}\ i(C,R)&\dfrac{\Gamma,\,w\mathcal{E}_{c}u,\,u\mathcal{E}_{c}v,\,w\mathcal{E}_{a}v\ \vdash\ \Delta}{\Gamma,\,w\mathcal{E}_{c}u,\,u\mathcal{E}_{c}v\ \vdash\ \Delta}\ h_{1}\\[2ex] \dfrac{\Gamma,\,w\mathcal{E}_{a}w\ \vdash\ \Delta}{\Gamma\ \vdash\ \Delta}\ h_{2}&\dfrac{\Gamma,\,w\mathcal{E}_{a}u,\,u\mathcal{E}_{b}w\ \vdash\ \Delta}{\Gamma,\,w\mathcal{E}_{a}u\ \vdash\ \Delta}\ h_{3}\end{array}\]
We let \(\mathrm{H}=\{h_{1},h_{2},h_{3}\}\) and observe that in the dependency graph \(\mathsf{DG}(\mathrm{H})=(\mathrm{H},\sqsubseteq)\), \(h_{3}\sqsubseteq h_{1}\) and \(h_{3}\sqsubseteq h_{2}\). In this setting, Implicate(\(\mathfrak{A}\)) constructs the lattice shown on the left of Figure 6, eventually yielding the implicit calculus \(f(\mathfrak{A},\{h_{1},h_{2},h_{3}\})=\mathfrak{B}\) at the top. Furthermore, if we let \(P(\mathrm{H})=P_{1}\cup P_{2}\cup P_{3}\) such that \(P_{i}=P(h_{i})=\{(p_{i},\overline{p}_{i})\}\), then in \(\mathsf{DG}(P)=(P,\sqsubseteq^{\prime})\), \((p_{3},\overline{p}_{3})\sqsubseteq(p_{1},\overline{p}_{1})\) and \((p_{3},\overline{p}_{3})\sqsubseteq(p_{2},\overline{p}_{2})\). If we run Explicate(\(\mathfrak{B}\)), we obtain the lattice shown on the right of Figure 6.
**Theorem 50**.: _Let \(\mathfrak{A}\) be an explicit calculus, \(\mathbb{U}(\mathfrak{A})=(\mathbf{S},\leqslant)\), \(\mathfrak{B}\) be an implicit calculus, and \(\mathbb{D}(\mathfrak{B})=(\mathbf{S}^{\prime},\leqslant^{\prime})\). If either \(\mathfrak{B}\in\mathbf{S}\) or \(\mathfrak{A}\in\mathbf{S}^{\prime}\), then \(\mathbf{S}=\mathbf{S}^{\prime}\), \(\mathbb{U}(\mathfrak{A})\cong\mathbb{D}(\mathfrak{B})\), and for any \(\mathfrak{C},\mathfrak{D}\in\mathbf{S}=\mathbf{S}^{\prime}\), \(\mathbf{G}(\mathfrak{C})=\mathbf{G}(\mathfrak{D})\)._
## VI Concluding Remarks
Our novel framework for the study of sequent-style systems yields rich insights into the nature of numerous concrete proof systems that appear in the literature. As shown in Figure 1, diverse classes of logics with applications in computer science [51, 52], mathematics [42], and philosophy [53, 50] admit sequent-style calculi that can be viewed as instances of abstract calculi. More specifically, if we observe the labeled sequent systems of the logics in Figure 1 (operating over graphs of Gentzen-style sequents [21, 22]), we discover that such systems are instances of explicit calculi. In addition, due to a unique property of implicit calculi, namely, that every complete proof is a polytree proof (Theorem 45), one can identify nested sequent systems (operating over (poly)trees of Gentzen-style sequents [17, 18, 21]) with implicit calculi.
When viewed through the lens of our theory, we thus find that labeled and nested sequent systems exist within a spectrum (or, lattice) of polynomially equivalent calculi. We therefore get the polynomial equivalence of nested and labeled calculi for free via our general results. Our theory also suggests that labeled and nested systems are _dual_ to one another with labeled systems serving as the bottom of a lattice and nested systems serving as the top. This reveals that labeled and nested systems tend to come in pairs. Our work also confirms that nested sequent systems generally admit shorter proofs with syntactically simpler sequents than labeled systems, as witnessed by Theorems 37 and 44.
There are various avenues for future research. First, we could provide a more fine-grained treatment, taking the internal structure of sequents into account; doing so, we could investigate the inner workings of cut-elimination in a broad setting. Second, we could generalize the types of structural rules considered in our framework, moving beyond Horn rules. In fact, as discovered in [27, 22], certain proof systems utilizing disjunctive properties admit transformations similar to those in Section V. Examining these cases and incorporating them into our framework seems promising. Third, we could extend our methods to pinpoint how other types of proof systems arise (e.g. linear nested sequents), identifying the spaces these calculi exist within and uncovering transformations that navigate them.
|
2308.03133 | A Duality-Based Proof of the Triangle Inequality for the Wasserstein
Distances | This short note gives a proof of the triangle inequality based on the
Kantorovich duality formula for the Wasserstein distances of exponent
$p\in[1,+\infty)$ in the case of a general Polish space. In particular it
avoids the "glueing of couplings" procedure used in most textbooks on optimal
transport. | François Golse | 2023-08-06T14:52:33Z | http://arxiv.org/abs/2308.03133v1 | # A duality-based proof of the triangle inequality for the Wasserstein distances
###### Abstract.
This short note gives a proof of the triangle inequality based on the Kantorovich duality formula for the Wasserstein distances of exponent \(p\in[1,+\infty)\) in the case of a general Polish space. In particular it avoids the "glueing of couplings" procedure used in most textbooks on optimal transport.
Key words and phrases: Wasserstein distance; Kantorovich duality; Triangle inequality; Optimal transport. 2020 Mathematics Subject Classification: 49Q22; 49N15 (60B10).

Footnote 1: Since \(\mathcal{E}\) is separable, the Borel \(\sigma\)-algebra of \(\mathcal{E}\times\mathcal{E}\) is the product of the Borel \(\sigma\)-algebra of \(\mathcal{E}\) with itself: see Proposition 2.4.2 in chapter I of [6].
The Wasserstein distance of exponent \(2\) has a quantum analogue defined for density operators, i.e. positive self-adjoint operators with trace equal to one defined on a separable Hilbert space, which are the quantum analogue of phase-space probability measures in classical mechanics. These analogues of the Wasserstein distance of exponent \(2\) satisfy some variant of the triangle inequality (see Theorem 5.1 in [5] and formula (51) in [7]) but, at the time of this writing, whether the genuine triangle inequality is satisfied by these quantum Wasserstein (pseudo)metrics remains an open question.
If one returns to the classical setting, most references on optimal transport prove the triangle inequality for the Wasserstein distances by means of a procedure known as "glueing couplings" between Borel probability measures. Specifically, given \(\lambda,\mu,\nu\in\mathcal{P}_{p}(\mathcal{E})\), pick \(\rho_{12}\in\mathcal{C}(\lambda,\mu)\) and \(\rho_{23}\in\mathcal{C}(\mu,\nu)\); the glueing procedure provides us with \(\sigma\in\mathcal{P}(\mathcal{E}\times\mathcal{E}\times\mathcal{E})\) such that
\[\iiint_{\mathcal{E}\times\mathcal{E}\times\mathcal{E}}(\Phi(x,y)+ \Psi(y,z))\sigma(dxdydz)\] \[=\iint_{\mathcal{E}\times\mathcal{E}}\Phi(x,y)\rho_{12}(dxdy)+ \iint_{\mathcal{E}\times\mathcal{E}}\Psi(y,z)\rho_{23}(dydz)\]
for all \(\Phi,\Psi\in C_{b}(\mathcal{E}\times\mathcal{E})\). This is the key step in one proof of the triangle inequality for \(\mathcal{W}_{p}\) (see for instance [12, 1, 13, 2]). Once the probability measure \(\sigma\) has been constructed, the remaining part of the proof of the triangle inequality is a routine computation involving the Minkowski inequality. One proof of the existence of \(\sigma\) provided by the glueing procedure is based on disintegration of probability measures (see for instance Lemma 7.6 in [12] or Remark 5.3.3 in [1]). There exists another argument avoiding disintegration of measures, which is based on the Hahn-Banach theorem in the special case where \(\mathcal{E}\) is compact: see Exercise 7.9 in [12]. Still another proof of the triangle inequality uses an optimal transport map, when it exists: see [10]. An optimal transport map between the probability measures \(\mu,\nu\in\mathcal{P}_{p}(\mathcal{E})\) is a Borel measurable map \(T:\,\mathcal{E}\to\mathcal{E}\) such that\({}^{2}\)
Footnote 2: We denote by \(T_{\#}\mu\) the image of the probability measure \(\mu\) by the map \(T\), defined by
\[\int_{\mathcal{E}}\phi(y)T_{\#}\mu(dy)=\int_{\mathcal{E}}\phi(T(x))\mu(dx)\,, \quad\text{ for all }\phi\in C_{b}(\mathcal{E})\,.\]
\[T_{\#}\mu=\nu\quad\text{ and }\quad\mathcal{W}_{p}(\mu,\nu)=\left(\int_{ \mathcal{E}}d(x,T(x))^{p}\mu(dx)\right)^{1/p}\]
The existence and uniqueness of an optimal transport map that is the gradient of a convex function is known in the case where \(\mathcal{E}\) is a (finite-dimensional) Euclidean space, with Euclidean metric \(d\), and \(\mu\) is absolutely continuous3 with respect to the Lebesgue measure on \(\mathcal{E}\): this is Brenier's theorem (see Theorem 2.12 (ii) in [12]).
Footnote 3: In fact, it is enough to assume that \(\mu(A)=0\) for all Borel measurable \(A\subset\mathcal{E}\) of Hausdorff codimension \(\geq 1\).
None of the ingredients mentioned above (existence of optimal transport maps, glueing of couplings) are known to have analogues in general in the quantum setting. In fact, a recent counterexample due to D. Serre [11] shows that the glueing procedure _cannot_ be extended to the quantum setting for _arbitrary_ couplings.
In view of all these obstructions, we propose still another proof of the triangle inequality for the Wasserstein distances, in the hope that a different approach could perhaps lead to a better understanding of the quantum case.
## 2. Kantorovich Duality
First, we recall the Kantorovich duality for the Wasserstein distance \(\mathcal{W}_{p}\).
**Kantorovich Duality Theorem.** Let \((\mathcal{E},d)\) be a Polish space, and let \(p\geq 1\). For all \(\mu,\nu\in\mathcal{P}_{p}(\mathcal{E})\)
\[\mathcal{W}_{p}(\mu,\nu)^{p}=\sup_{a(x)+b(y)\leq d(x,y)^{p}}\left(\int_{ \mathcal{E}}a(x)\mu(dx)+\int_{\mathcal{E}}b(y)\nu(dy)\right)\,.\]
In this formula, it is equivalent to assume that the functions \(a\) and \(b\) belong to \(C_{b}(\mathcal{E})\), or that \(a\in L^{1}(\mathcal{E},\mu)\) while \(b\in L^{1}(\mathcal{E},\nu)\), in which case \(a(x)+b(y)\leq d(x,y)^{p}\) holds \(\mu\otimes\nu\)-a.e.. (See for instance Theorem 1.3 in [12].)
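Although none of the results below require computation, the duality formula and the triangle inequality are easy to probe numerically on a finite metric space, where the primal Kantorovich problem is a finite linear program. The following sketch (assuming NumPy and SciPy are available) is purely illustrative and plays no role in the argument.

```python
import numpy as np
from scipy.optimize import linprog

def wasserstein_p(mu, nu, dist, p):
    """W_p between two discrete measures on a finite metric space,
    computed via the primal Kantorovich linear program."""
    n = len(mu)
    cost = (dist ** p).ravel()                # c_{ij} = d(x_i, x_j)**p
    A_eq = np.zeros((2 * n, n * n))
    for i in range(n):
        A_eq[i, i * n:(i + 1) * n] = 1.0      # sum_j pi_{ij} = mu_i
    for j in range(n):
        A_eq[n + j, j::n] = 1.0               # sum_i pi_{ij} = nu_j
    res = linprog(cost, A_eq=A_eq, b_eq=np.concatenate([mu, nu]),
                  bounds=(0, None))
    return res.fun ** (1.0 / p)

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0.0, 1.0, 8))         # 8 points on the line
dist = np.abs(x[:, None] - x[None, :])        # metric d(x_i, x_j)
lam, mu, nu = (rng.dirichlet(np.ones(8)) for _ in range(3))
W = lambda a, b: wasserstein_p(a, b, dist, p=3.0)
assert W(lam, nu) <= W(lam, mu) + W(mu, nu) + 1e-9
```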
Henceforth, in the latter case, we shall systematically normalize the pair \((a,b)\) as follows: pick \(a,b\) : \(\mathcal{E}\to\mathbf{R}\cup\{-\infty\}\) to be measurable representatives of the corresponding elements in \(L^{1}(\mathcal{E},\mu)\) and \(L^{1}(\mathcal{E},\nu)\) resp., together with a \(\mu\)-negligible set \(M\subset\mathcal{E}\) and a \(\nu\)-negligible set \(N\subset\mathcal{E}\) such that \(a(x)+b(y)\leq d(x,y)^{p}\) holds for all \(x\in M^{c}\) and all \(y\in N^{c}\). Modifying \(a\) and \(b\) so that \(a(x)=-\infty\) for all \(x\in M\) and \(b(y)=-\infty\) for all \(y\in N\), we obtain in this way new measurable representatives of the same elements \(L^{1}(\mathcal{E},\mu)\) and \(L^{1}(\mathcal{E},\nu)\) as before, such that the inequality \(a(x)+b(y)\leq d(x,y)^{p}\) holds for all \(x,y\in\mathcal{E}\).
It will be convenient to define a notion of \(p\)-Legendre transform, as follows: for \(f\) : \(\mathcal{E}\to\mathbf{R}\cup\{-\infty\}\), not identically equal to \(-\infty\), set
\[f^{[p*]}(y):=\inf_{x\in\mathcal{E}}\left(d(x,y)^{p}-f(x)\right)\,,\quad y\in \mathcal{E}\,.\]
A function \(g\) : \(\mathcal{E}\to\mathbf{R}\cup\{-\infty\}\) is said to be \(d^{p}\)-concave, if it is of the form \(g=f^{[p*]}\) for some \(f\) : \(\mathcal{E}\to\mathbf{R}\cup\{-\infty\}\) that is not identically equal to \(-\infty\).
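On a finite space, the \(p\)-Legendre transform is a minimum over finitely many points, and the standard fact that a triple transform coincides with a single transform (so that every \(f^{[p*]}\) is \(d^{p}\)-concave, consistently with the definition above) can be checked directly. The following NumPy sketch is illustrative only.

```python
import numpy as np

def p_legendre(f, dist, p):
    """Discrete p-Legendre transform: f^[p*](y) = min_x ( d(x, y)**p - f(x) )."""
    return np.min(dist ** p - f[:, None], axis=0)

rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0.0, 1.0, 16))
dist = np.abs(x[:, None] - x[None, :])
f = rng.normal(size=16)
g = p_legendre(f, dist, p=2)     # g = f^[p*], hence d^p-concave
h = p_legendre(g, dist, p=2)     # h = g^[p*]
assert np.allclose(p_legendre(h, dist, p=2), g)   # triple = single transform
```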
**Optimal Kantorovich Potentials.** Under the same assumptions as in the Kantorovich duality theorem for \(\mathcal{W}_{p}\), there exists a pair \((a,b)\) of \(p\)-Legendre conjugate functions -- i.e. \(b=a^{[p*]}\) and \(a=b^{[p*]}\) -- such that \(a\in L^{1}(\mathcal{E},\mu)\) and \(b\in L^{1}(\mathcal{E},\nu)\), and
\[\mathcal{W}_{p}(\mu,\nu)^{p}=\int_{\mathcal{E}}a(x)\mu(dx)+\int_{\mathcal{E}}b (y)\nu(dy)\,.\]
(See Remark 1.12, the double convexification trick (2.10), Remark 2.2 and Exercise 2.36 in [12].)
There are two particular cases of special interest, corresponding to \(p=1\) and \(p=2\).
**Kantorovich-Rubinstein Duality Theorem.** Let \((\mathcal{E},d)\) be a Polish space with metric \(d\). Then, for all \(\mu,\nu\in\mathcal{P}_{1}(\mathcal{E})\), it holds
\[\mathcal{W}_{1}(\mu,\nu)=\sup_{\operatorname{Lip}(\phi)\leq 1}\left|\int_{ \mathcal{E}}\phi(z)\mu(dz)-\int_{\mathcal{E}}\phi(z)\nu(dz)\right|\,.\]
(Indeed, one can check that \(\phi^{[1*]}\) is a contraction on \(\mathcal{E}\) and that \((\phi^{[1*]})^{[1*]}=-\phi^{[1*]}\): see Theorem 1.14 and its proof in [12].)
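For the reader's convenience, here is the short computation behind that parenthetical (a standard argument, recalled here for completeness). If \(\psi\) is a contraction on \(\mathcal{E}\), then \(\psi(y)\leq\psi(x)+d(x,y)\) for all \(x,y\in\mathcal{E}\), so that

\[\psi^{[1*]}(x)=\inf_{y\in\mathcal{E}}\big{(}d(x,y)-\psi(y)\big{)}\geq\inf_{y\in\mathcal{E}}\big{(}d(x,y)-\psi(x)-d(x,y)\big{)}=-\psi(x)\,,\]

while the choice \(y=x\) gives \(\psi^{[1*]}(x)\leq-\psi(x)\). Hence \(\psi^{[1*]}=-\psi\), and applying this to \(\psi=\phi^{[1*]}\) yields \((\phi^{[1*]})^{[1*]}=-\phi^{[1*]}\).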
In the case \(p=2\), assuming that \(\mathcal{E}\) is a Euclidean space and \(d(x,y)=|x-y|\) is its Euclidean metric, the notion of \(2\)-Legendre duality is easily reduced to the classical notion of Legendre transform (as in SS26 of [9], for instance). Indeed
\[g(y)=\inf_{x\in\mathcal{E}}(|x-y|^{2}-f(x))=|y|^{2}+\inf_{x\in\mathcal{E}}(|x| ^{2}-2x\cdot y-f(x))\]
if and only if
\[\tfrac{1}{2}(|y|^{2}-g(y))=\sup_{x\in\mathcal{E}}(x\cdot y-\tfrac{1}{2}(|x|^{2}- f(x)))\,.\]
In other words, defining \(F(x):=\tfrac{1}{2}(|x|^{2}-f(x))\) and \(G(y):=\tfrac{1}{2}(|y|^{2}-g(y))\), it holds
\[g=f^{[2*]}\iff G=F^{*}\]
(the Legendre transform of \(F\)).
## 3. A Duality-Based Proof of the Triangle Inequality for \(\mathcal{W}_{2}\)
In this section, we explain how to use the Kantorovich duality to prove that
\[\mathcal{W}_{2}(\lambda,\nu)\leq\mathcal{W}_{2}(\lambda,\mu)+\mathcal{W}_{2}( \mu,\nu)\]
for all \(\lambda,\mu,\nu\in\mathcal{P}_{2}(\mathcal{E})\). This first result is a warm-up for the duality-based proof of the triangle inequality for \(\mathcal{W}_{p}\) in the case of an arbitrary exponent \(p\in[1,+\infty)\). We recall that \((\mathcal{E},d)\) is a Polish space with metric denoted by \(d\).
Pick \(2\)-Legendre conjugate, optimal Kantorovich potentials for \(\mathcal{W}_{2}(\lambda,\nu)\), denoted by \(\alpha\) and \(\gamma\). Thus
\[\alpha(x)=\inf_{z\in\mathcal{E}}(d(x,z)^{2}-\gamma(z))\,,\qquad\gamma(z)=\inf _{x\in\mathcal{E}}(d(x,z)^{2}-\alpha(x))\,;\]
besides \(\alpha\in L^{1}(\mathcal{E},\lambda)\) while \(\gamma\in L^{1}(\mathcal{E},\nu)\), and
\[\mathcal{W}_{2}(\lambda,\nu)^{2}=\int_{\mathcal{E}}\alpha(x)\lambda(dx)+\int _{\mathcal{E}}\gamma(z)\nu(dz)\,.\]
For each \(\eta>0\), set
\[\beta_{\eta}(y):=(1+\tfrac{1}{\eta})\inf_{z\in\mathcal{E}}\left(d(y,z)^{2}- \tfrac{\gamma(z)}{1+\frac{1}{\eta}}\right)\,. \tag{1}\]
Obviously, the function
\[y\mapsto\beta_{\eta}(y)/(1+\tfrac{1}{\eta})\]
is \(d^{2}\)-concave, since \(\gamma\in L^{1}(\mathcal{E},\nu)\) is \(\nu\)-a.e. finite.
**Lemma 1**.: Under the assumptions above, it holds
\[\alpha(x)-\beta_{\eta}(y)\leq(1+\eta)d(x,y)^{2}\,,\qquad\text{ for all }x,y\in\mathcal{E}\,.\]
Taking Lemma 1 for granted, we conclude the proof of the triangle inequality for \(\mathcal{W}_{2}\).
Proof of the triangle inequality for \(\mathcal{W}_{2}\).: First observe that \(\beta_{\eta}\in L^{1}(\mathcal{E},\mu)\). Indeed,
\[|\beta_{\eta}(y)|\leq (1+\tfrac{1}{\eta})d(y,z)^{2}+|\gamma(z)|\] \[\leq (1+\tfrac{1}{\eta})(d(x_{0},y)+d(x_{0},z))^{2}+|\gamma(z)|\] \[\leq 2(1+\tfrac{1}{\eta})(d(x_{0},y)^{2}+d(x_{0},z)^{2})+|\gamma(z)|\,,\]
where \(x_{0}\) is any point chosen in \(\mathcal{E}\). Integrating both sides of this inequality with the measure \(\mu\otimes\nu\), we find that
\[\int_{\mathcal{E}}|\beta_{\eta}(y)|\mu(dy)\leq 2(1+\tfrac{1}{\eta})\left(\int_{ \mathcal{E}}d(x_{0},y)^{2}\mu(dy)+\int_{\mathcal{E}}d(x_{0},z)^{2}\nu(dz) \right)+\int_{\mathcal{E}}|\gamma(z)|\nu(dz)\]
so that \(\beta_{\eta}\in L^{1}(\mathcal{E},\mu)\) since \(\mu,\nu\in\mathcal{P}_{2}(\mathcal{E})\) and \(\gamma\in L^{1}(\mathcal{E},\nu)\).
Since \(\beta_{\eta}\in L^{1}(\mathcal{E},\mu)\) and \(\gamma\in L^{1}(\mathcal{E},\nu)\) satisfy
\[\tfrac{\beta_{\eta}(y)}{1+\frac{1}{\eta}}+\tfrac{\gamma(z)}{1+\frac{1}{\eta}} \leq d(y,z)^{2}\,,\qquad y,z\in\mathcal{E}\]
according to the definition (1) of \(\beta_{\eta}\), the Kantorovich Duality Theorem implies that
\[\tfrac{1}{1+\frac{1}{\eta}}\int_{\mathcal{E}}\beta_{\eta}(y)\mu(dy)+\tfrac{1}{1+ \frac{1}{\eta}}\int_{\mathcal{E}}\gamma(z)\nu(dz)\leq\mathcal{W}_{2}(\mu,\nu)^{ 2}\,. \tag{2}\]
On the other hand, the fact that \(\alpha\in L^{1}(\mathcal{E},\lambda)\) and \(\beta_{\eta}\in L^{1}(\mathcal{E},\mu)\), together with the inequality in Lemma 1 and the Kantorovich Duality Theorem, imply that
\[\tfrac{1}{1+\eta}\int_{\mathcal{E}}\alpha(x)\lambda(dx)-\tfrac{1}{1+\eta}\int_ {\mathcal{E}}\beta_{\eta}(y)\mu(dy)\leq\mathcal{W}_{2}(\lambda,\mu)^{2}\,. \tag{3}\]
Multiplying both sides of (2) by \(1+\frac{1}{\eta}\) and both sides of (3) by \(1+\eta\), and adding each side of the resulting inequalities, we see that
\[\mathcal{W}_{2}(\lambda,\nu)^{2}=\int_{\mathcal{E}}\alpha(x)\lambda(dx)+\int_{\mathcal{E}}\gamma(z)\nu(dz)\] \[=\int_{\mathcal{E}}\alpha(x)\lambda(dx)-\int_{\mathcal{E}}\beta_{\eta}(y)\mu(dy)+\int_{\mathcal{E}}\beta_{\eta}(y)\mu(dy)+\int_{\mathcal{E}}\gamma(z)\nu(dz)\] \[\leq(1+\eta)\mathcal{W}_{2}(\lambda,\mu)^{2}+(1+\tfrac{1}{\eta})\mathcal{W}_{2}(\mu,\nu)^{2}\,.\]
Assuming that \(\lambda\neq\mu\), so that \(\mathcal{W}_{2}(\lambda,\mu)>0\), we pick \(\eta:=\mathcal{W}_{2}(\mu,\nu)/\mathcal{W}_{2}(\lambda,\mu)\) to find that
\[\mathcal{W}_{2}(\lambda,\nu)^{2}\leq \mathcal{W}_{2}(\lambda,\mu)^{2}+\mathcal{W}_{2}(\mu,\nu)^{2}+2 \mathcal{W}_{2}(\lambda,\mu)\mathcal{W}_{2}(\mu,\nu)\] \[= (\mathcal{W}_{2}(\lambda,\mu)+\mathcal{W}_{2}(\mu,\nu))^{2}\,.\]
Otherwise, \(\lambda=\mu\) and the triangle inequality is trivial.
It only remains to prove Lemma 1.
Proof of Lemma 1.: We seek to bound
\[\alpha(x)-\beta_{\eta}(y) =\inf_{z\in\mathcal{E}}(d(x,z)^{2}-\gamma(z))-(1+\tfrac{1}{\eta} )\inf_{z\in\mathcal{E}}\left(d(y,z)^{2}-\tfrac{\gamma(z)}{1+\frac{1}{\eta}}\right)\] \[=\inf_{z\in\mathcal{E}}(d(x,z)^{2}-\gamma(z))-\inf_{z\in \mathcal{E}}\left((1+\tfrac{1}{\eta})d(y,z)^{2}-\gamma(z)\right)\]
Let \(\epsilon>0\); there exists \(z_{\epsilon}\in\mathcal{E}\) such that
\[\inf_{z\in\mathcal{E}}\left((1+\tfrac{1}{\eta})d(y,z)^{2}-\gamma (z)\right)\leq \left((1+\tfrac{1}{\eta})d(y,z_{\epsilon})^{2}-\gamma(z_{\epsilon })\right)\] \[< \inf_{z\in\mathcal{E}}\left((1+\tfrac{1}{\eta})d(y,z)^{2}-\gamma (z)\right)+\epsilon\,.\]
Then
\[\alpha(x)-\beta_{\eta}(y)= \inf_{z\in\mathcal{E}}(d(x,z)^{2}-\gamma(z))-\inf_{z\in\mathcal{ E}}\left((1+\tfrac{1}{\eta})d(y,z)^{2}-\gamma(z)\right)\] \[\leq (d(x,z_{\epsilon})^{2}-\gamma(z_{\epsilon}))-\inf_{z\in\mathcal{ E}}\left((1+\tfrac{1}{\eta})d(y,z)^{2}-\gamma(z)\right)\] \[< (d(x,z_{\epsilon})^{2}-\gamma(z_{\epsilon}))-\left((1+\tfrac{1}{ \eta})d(y,z_{\epsilon})^{2}-\gamma(z_{\epsilon})\right)+\epsilon\] \[= d(x,z_{\epsilon})^{2}-(1+\tfrac{1}{\eta})d(y,z_{\epsilon})^{2}+ \epsilon\,.\]
But, for each \(\eta>0\), it holds
\[d(x,z_{\epsilon})^{2}\leq (d(x,y)+d(y,z_{\epsilon}))^{2}\] \[= d(x,y)^{2}+d(y,z_{\epsilon})^{2}+2d(x,y)d(y,z_{\epsilon})\] \[\leq d(x,y)^{2}+d(y,z_{\epsilon})^{2}+\eta d(x,y)^{2}+\tfrac{1}{\eta}d (y,z_{\epsilon})^{2}\] \[= (1+\eta)d(x,y)^{2}+(1+\tfrac{1}{\eta})d(y,z_{\epsilon})^{2}\,.\]
With the preceding inequality, we conclude that
\[\alpha(x)-\beta_{\eta}(y)\leq(1+\eta)d(x,y)^{2}+\epsilon\,,\]
and the desired inequality follows from letting \(\epsilon\to 0^{+}\).
## 4. A Duality-Based Proof of the Triangle Inequality
for \(\mathcal{W}_{p}\) with \(1\leq p<\infty\) and \(p\neq 2\)
In this section, we use the Kantorovich duality to prove that
\[\mathcal{W}_{p}(\lambda,\nu)\leq\mathcal{W}_{p}(\lambda,\mu)+\mathcal{W}_{p}( \mu,\nu) \tag{4}\]
for all \(p\geq 1\) and all \(\lambda,\mu,\nu\in\mathcal{P}_{p}(\mathcal{E})\). We recall that \((\mathcal{E},d)\) is a Polish space with metric denoted by \(d\).
The case \(p=1\) follows immediately from the formula for \(\mathcal{W}_{1}\) in the Kantorovich-Rubinstein Duality Theorem. Henceforth, we assume therefore that
\[p>1\quad\text{ and }\quad p\neq 2\,.\]
A careful inspection of the duality-based proof of the triangle inequality for \(\mathcal{W}_{2}\) shows the importance of the inequality
\[(X+Y)^{2}\leq(1+\eta)X^{2}+(1+\tfrac{1}{\eta})Y^{2} \tag{5}\]
for all \(X,Y\geq 0\) and all \(\eta>0\).
Our first task is therefore to seek a function \((0,+\infty)\ni\eta\mapsto f(\eta)\in(0,+\infty)\) such that
\[(X+Y)^{p}\leq(1+\eta)X^{p}+(1+f(\eta))Y^{p}\,,\qquad X,Y\geq 0\,,\quad\eta>0\,.\]
Obviously, only the case \(X,Y>0\) is of interest, so that, by homogeneity, this boils down to finding \(f\) such that
\[(Z^{1/p}+1)^{p}\leq(1+\eta)Z+1+f(\eta)\,,\qquad Z,\eta>0\,,\]
where \(Z=X^{p}/Y^{p}\).
Equivalently, the optimal \(f(\eta)\) is found to be given by the formula
\[f(\eta):=\sup_{Z>0}(-\eta Z-1-Z+(1+Z^{1/p})^{p})\,.\]
One easily checks that the function \((0,+\infty)\ni Z\mapsto(1+Z^{1/p})^{p}\in(0,+\infty)\) is concave for \(p>1\), since
\[\tfrac{d}{dZ}(1+Z^{1/p})^{p}=p(1+Z^{1/p})^{p-1}\tfrac{1}{p}Z^{\frac{1}{p}-1}= (Z^{-1/p}+1)^{p-1}\]
defines a decreasing bijection from \((0,+\infty)\) to itself. Hence there exists a unique critical value of \(Z>0\) such that
\[\tfrac{d}{dZ}(-\eta Z-1-Z+(1+Z^{1/p})^{p})=-(\eta+1)+(Z^{-1/p}+1)^{p-1}=0\,,\]
which is
\[Z^{1/p}:=\frac{1}{(\eta+1)^{1/(p-1)}-1}\,,\]
and \(f\) is given by the formula
\[f(\eta):= \left(1+\frac{1}{(\eta+1)^{1/(p-1)}-1}\right)^{p}-1-\frac{\eta+1}{(( \eta+1)^{1/(p-1)}-1)^{p}}\] \[= \left(\frac{(\eta+1)^{1/(p-1)}}{(\eta+1)^{1/(p-1)}-1}\right)^{p}-1 -\frac{\eta+1}{((\eta+1)^{1/(p-1)}-1)^{p}}\] \[= \frac{(\eta+1)^{p/(p-1)}-(\eta+1)}{((\eta+1)^{1/(p-1)}-1)^{p}}-1\,.\]
Summarizing, we have proved the following lemma.
**Lemma 2**.: For all \(X,Y\geq 0\) and all \(\eta>0\), it holds
\[(X+Y)^{p}\leq(1+\eta)X^{p}+\frac{(\eta+1)^{p/(p-1)}-(\eta+1)}{((\eta+1)^{1/(p -1)}-1)^{p}}Y^{p}\,.\]
(One easily checks that, in the case \(p=2\),
\[f(\eta)=\frac{(\eta+1)^{2}-(\eta+1)}{((\eta+1)-1)^{2}}-1=\frac{\eta^{2}+\eta} {\eta^{2}}-1=\frac{1}{\eta}\]
so that the inequality in Lemma 2 coincides with (5).)
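Both the inequality of Lemma 2 and the equality at the optimal choice of \(\eta\) are easy to verify numerically; the following sketch is a sanity check of ours, not part of the proof.

```python
import numpy as np

def coef(eta, p):
    """Coefficient of Y**p in Lemma 2."""
    a = eta + 1.0
    return (a ** (p / (p - 1)) - a) / (a ** (1.0 / (p - 1)) - 1.0) ** p

rng = np.random.default_rng(0)
for _ in range(1000):
    p = rng.uniform(1.1, 5.0)
    X, Y, eta = rng.uniform(0.1, 10.0, size=3)
    lhs = (X + Y) ** p
    assert lhs <= (1 + eta) * X ** p + coef(eta, p) * Y ** p + 1e-9 * lhs
    # equality at eta + 1 = (Z**(-1/p) + 1)**(p - 1) with Z = X**p / Y**p
    Z = (X / Y) ** p
    eta_star = (Z ** (-1.0 / p) + 1.0) ** (p - 1.0) - 1.0
    assert np.isclose(lhs, (1 + eta_star) * X ** p + coef(eta_star, p) * Y ** p)
```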
Next we prove (4). Pick \(p\)-Legendre conjugate, optimal Kantorovich potentials for \(\mathcal{W}_{p}(\lambda,\nu)\), denoted by \(\alpha\) and \(\gamma\) as in the preceding section. Thus
\[\alpha(x)=\inf_{z\in\mathcal{E}}(d(x,z)^{p}-\gamma(z))\,,\qquad\gamma(z)=\inf _{x\in\mathcal{E}}(d(x,z)^{p}-\alpha(x))\,;\]
besides \(\alpha\in L^{1}(\mathcal{E},\lambda)\) while \(\gamma\in L^{1}(\mathcal{E},\nu)\), and
\[\mathcal{W}_{p}(\lambda,\nu)^{p}=\int_{\mathcal{E}}\alpha(x)\lambda(dx)+\int_ {\mathcal{E}}\gamma(z)\nu(dz)\,.\]
For each \(\eta>0\), set
\[\beta_{\eta}(y):=\frac{(\eta+1)^{p/(p-1)}-(\eta+1)}{((\eta+1)^{1/(p-1)}-1)^{p }}\inf_{z\in\mathcal{E}}\left(d(y,z)^{p}-\frac{((\eta+1)^{1/(p-1)}-1)^{p}}{( \eta+1)^{p/(p-1)}-(\eta+1)}\gamma(z)\right)\,. \tag{6}\]
Obviously, the function
\[y\mapsto\frac{((\eta+1)^{1/(p-1)}-1)^{p}}{(\eta+1)^{p/(p-1)}-(\eta+1)}\beta_{ \eta}(y)\]
is \(d^{p}\)-concave, since \(\gamma\in L^{1}(\mathcal{E},\nu)\) is \(\nu\)-a.e. finite.
**Lemma 3**.: Under the assumptions above, it holds
\[\alpha(x)-\beta_{\eta}(y)\leq(1+\eta)d(x,y)^{p}\,,\qquad x,y\in\mathcal{E}\,.\]
Proof of Lemma 3.: For each \(\epsilon>0\), there exists \(z_{\epsilon}\in\mathcal{E}\) such that
\[\inf_{z\in\mathcal{E}} \left(\frac{(\eta+1)^{p/(p-1)}-(\eta+1)}{((\eta+1)^{1/(p-1)}-1)^{p }}d(y,z)^{p}-\gamma(z)\right)\] \[\leq\left(\frac{(\eta+1)^{p/(p-1)}-(\eta+1)}{((\eta+1)^{1/(p-1)}- 1)^{p}}d(y,z_{\epsilon})^{p}-\gamma(z_{\epsilon})\right)\] \[<\inf_{z\in\mathcal{E}}\left(\frac{(\eta+1)^{p/(p-1)}-(\eta+1)}{( (\eta+1)^{1/(p-1)}-1)^{p}}d(y,z)^{p}-\gamma(z)\right)+\epsilon\]
Thus
\[\alpha(x)-\beta_{\eta}(y)= \inf_{z\in\mathcal{E}}(d(x,z)^{p}-\gamma(z))-\inf_{z\in\mathcal{E}} \left(\frac{(\eta+1)^{p/(p-1)}-(\eta+1)}{((\eta+1)^{1/(p-1)}-1)^{p}}d(y,z)^{p}- \gamma(z)\right)\] \[< (d(x,z_{\epsilon})^{p}-\gamma(z_{\epsilon}))-\left(\frac{(\eta+1)^ {p/(p-1)}-(\eta+1)}{((\eta+1)^{1/(p-1)}-1)^{p}}d(y,z_{\epsilon})^{p}-\gamma(z_ {\epsilon})\right)+\epsilon\] \[\leq (d(x,y)+d(y,z_{\epsilon}))^{p}-\frac{(\eta+1)^{p/(p-1)}-(\eta+1)} {((\eta+1)^{1/(p-1)}-1)^{p}}d(y,z_{\epsilon})^{p}+\epsilon\] \[\leq (1+\eta)d(x,y)^{p}+\epsilon\]
by Lemma 2. We conclude by letting \(\epsilon\to 0^{+}\).
Assume that \(\mathcal{W}_{p}(\lambda,\mu)\mathcal{W}_{p}(\mu,\nu)\neq 0\). First observe that
\[|\beta_{\eta}(y)|\leq\frac{(\eta+1)^{p/(p-1)}-(\eta+1)}{((\eta+1)^{1/(p-1)}-1 )^{p}}(d(x_{0},y)+d(x_{0},z))^{p}+|\gamma(z)|\,,\]
and integrating both sides of this inequality with the measure \(\mu\otimes\nu\) shows that
\[\int_{\mathcal{E}}|\beta_{\eta}(y)|\mu(dy)\leq 2^{p-1}\frac{(\eta+1)^{p/(p-1)}-(\eta+1)}{((\eta+1)^{1/(p-1)}-1)^{p}}\left(\int_{\mathcal{E}}d(x_{0},y)^{p}\mu(dy)+\int_{\mathcal{E}}d(x_{0},z)^{p}\nu(dz)\right)+\int_{\mathcal{E}}|\gamma(z)|\nu(dz)<+\infty\,,\]
since \(\mu,\nu\in\mathcal{P}_{p}(\mathcal{E})\) and \(\gamma\in L^{1}(\mathcal{E},\nu)\). Hence \(\beta_{\eta}\in L^{1}(\mathcal{E},\mu)\) and the inequality in Lemma 3 implies that
\[\int_{\mathcal{E}}\alpha(x)\lambda(dx)-\int_{\mathcal{E}}\beta_{\eta}(y)\mu(dy )\leq(1+\eta)\mathcal{W}_{p}(\lambda,\mu)^{p}\]
by Kantorovich duality. On the other hand, the definition (6) implies that
\[\beta_{\eta}(y)+\gamma(z)\leq\frac{(\eta+1)^{p/(p-1)}-(\eta+1)}{((\eta+1)^{1/ (p-1)}-1)^{p}}d(y,z)^{p}\,,\qquad y,z\in\mathcal{E}\,,\]
so that
\[\int_{\mathcal{E}}\beta_{\eta}(y)\mu(dy)+\int_{\mathcal{E}}\gamma(z)\nu(dz) \leq\frac{(\eta+1)^{p/(p-1)}-(\eta+1)}{((\eta+1)^{1/(p-1)}-1)^{p}}\mathcal{W} _{p}(\mu,\nu)^{p}\]
again by Kantorovich duality. Hence
\[\begin{split}\mathcal{W}_{p}(\lambda,\nu)^{p}=&\int _{\mathcal{E}}\alpha(x)\lambda(dx)+\int_{\mathcal{E}}\gamma(z)\nu(dz)\\ \leq&(1+\eta)\mathcal{W}_{p}(\lambda,\mu)^{p}+ \frac{(\eta+1)^{p/(p-1)}-(\eta+1)}{((\eta+1)^{1/(p-1)}-1)^{p}}\mathcal{W}_{p} (\mu,\nu)^{p}\,,\end{split} \tag{7}\]
and this inequality holds for each \(\eta>0\). Choose
\[\eta+1:=(Z^{-1/p}+1)^{p-1}\,,\quad\text{ with }\quad Z:=\mathcal{W}_{p}( \lambda,\mu)^{p}/\mathcal{W}_{p}(\mu,\nu)^{p}\,.\]
Then
\[\begin{split}(1+\eta)Z+\frac{(\eta+1)^{p/(p-1)}-(\eta+1)}{((\eta+1)^{1/(p-1)}-1)^{p}}=&\,(Z^{-1/p}+1)^{p-1}Z+\frac{(Z^{-1/p}+1)^{p}-(Z^{-1/p}+1)^{p-1}}{(Z^{-1/p}+1-1)^{p}}\\ =&\,(1+Z^{1/p})^{p-1}Z^{1/p}+\frac{(Z^{-1/p}+1-1)(Z^{-1/p}+1)^{p-1}}{(Z^{-1/p}+1-1)^{p}}\\ =&\,(1+Z^{1/p})^{p-1}Z^{1/p}+\frac{Z^{-1/p}(Z^{-1/p}+1)^{p-1}}{Z^{-1}}\\ =&\,(1+Z^{1/p})^{p-1}Z^{1/p}+Z^{(p-1)/p}(Z^{-1/p}+1)^{p-1}\\ =&\,(1+Z^{1/p})^{p-1}Z^{1/p}+(1+Z^{1/p})^{p-1}\\ =&\,(1+Z^{1/p})^{p}\,.\end{split}\]
In other words, with this choice of \(\eta\) and \(Z\), one finds that
\[(1+\eta)\frac{\mathcal{W}_{p}(\lambda,\mu)^{p}}{\mathcal{W}_{p}(\mu,\nu)^{p}}+ \frac{(\eta+1)^{p/(p-1)}-(\eta+1)}{((\eta+1)^{1/(p-1)}-1)^{p}}=\left(1+\frac{ \mathcal{W}_{p}(\lambda,\mu)}{\mathcal{W}_{p}(\mu,\nu)}\right)^{p}\,.\]
Multiplying both sides of this identity by \(\mathcal{W}_{p}(\mu,\nu)^{p}\), one arrives at the identity
\[(1+\eta)\mathcal{W}_{p}(\lambda,\mu)^{p}+\frac{(\eta+1)^{p/(p-1)}-(\eta+1)}{( (\eta+1)^{1/(p-1)}-1)^{p}}\mathcal{W}_{p}(\mu,\nu)^{p}=(\mathcal{W}_{p}(\lambda,\mu)+\mathcal{W}_{p}(\mu,\nu))^{p}\]
where \(\eta\) is chosen as follows:

\[\eta:=\left(1+\frac{\mathcal{W}_{p}(\mu,\nu)}{\mathcal{W}_{p}(\lambda,\mu)}\right)^{p-1}-1\,.\]
Inserting this value of \(\eta\) in the right-hand side of (7) and using the identity above shows that
\[\mathcal{W}_{p}(\lambda,\nu)^{p}\leq(\mathcal{W}_{p}(\lambda,\mu)+\mathcal{W }_{p}(\mu,\nu))^{p}\]
from which the triangle inequality immediately follows in the special case where
\[\mathcal{W}_{p}(\lambda,\mu)\mathcal{W}_{p}(\mu,\nu)\neq 0\,.\]
Otherwise, if one of the distances \(\mathcal{W}_{p}(\lambda,\mu)\) or \(\mathcal{W}_{p}(\mu,\nu)\) is equal to \(0\), the triangle inequality is trivial.
## 5. Conclusion
The proof of the triangle inequality for the Wasserstein distances presented in this short note is (perhaps) the most "elementary" proof -- which does not mean that it is the shortest -- in the sense that it does not use any information about optimal couplings. (For instance, it does not use the existence of an optimal transport map, as in the Brenier theorem, i.e. Theorem 2.12 (ii) in [12].) Also, it does not involve any nontrivial manipulation on optimal couplings (as in the "glueing" procedure described in Lemma 7.6 of [12]). The only ingredients in this proof are (a) Kantorovich duality, and (b) the inequality of Lemma 2, which boils down to computing the Legendre transform of a real-valued function on the half-line. The proof of Lemma 3 is based on (b) -- just as Lemma 1 is based on the elementary inequality \(2ab\leq\eta a^{2}+\frac{1}{\eta}b^{2}\) for all \(a,b,\eta>0\) -- and on the characterization of the infimum of a function as its largest lower bound. Since there is a Kantorovich-type duality for the quantum analogue of \(\mathcal{W}_{2}\) defined in [4] (see [3]), it seems that the validity of the triangle inequality for this quantum Wasserstein (pseudo)metric boils down to the existence of a quantum analogue of Lemma 1. Since D. Serre's example mentioned in the introduction rules out the possibility of proving the triangle inequality for the quantum analogue of \(\mathcal{W}_{2}\) defined in [4] by glueing quantum couplings as explained on p. 28 of [5], we hope that the approach to the triangle inequality presented here can shed light on the quantum case.
|
2306.10385 | Study of In-plane and Interlayer Interactions During Aluminum Fluoride
Intercalation in Graphite: Implications for the Development of Rechargeable
Batteries | The electrolyte intercalation mechanism facilitates the insertion/extraction
of charge into the electrode material in rechargeable batteries. Aluminum
fluoride (AlF$_{3}$) has been used as an electrolyte in rechargeable aluminum
batteries with graphite electrodes, demonstrating improved reversibility of
battery charging and discharging processes; however, the intercalation
mechanism of this neutral molecule in graphite is so far unknown. In this work,
we combine scanning tunneling microscopy (STM) in ultra-high vacuum conditions,
calculations based on density functional theory, and large-scale molecular
dynamics simulations to reveal the mechanism of AlF$_{3}$ intercalation in
highly oriented pyrolytic graphite (HOPG). We report the formation of AlF$_{3}$
molecules clusters between graphite layers, their self-assembly by graphene
buckling-mediated interactions, and explain the origin and distribution of
superficial {\it blisters} in the material. Our findings have implications for
understanding the relationship between the mobility and clustering of molecules
and the expansion of the anode material. This, in turn, paves the way for
future enhancements in the performance of energy storage systems. | Sindy J. Rodríguez, Adriana E. Candia, Igor Stanković, Mario C. G. Passeggi, Gustavo D. Ruano | 2023-06-17T16:09:23Z | http://arxiv.org/abs/2306.10385v2 | Study of In-plane and Interlayer Interactions During Aluminum Fluoride Intercalation in Graphite: Implications for the Development of Rechargeable Batteries
###### Abstract
The electrolyte intercalation mechanism facilitates the insertion/extraction of charge into the electrode material in rechargeable batteries. Aluminum fluoride (\(\mathrm{AlF_{3}}\)) has been used as an electrolyte in rechargeable aluminum batteries with graphite electrodes, demonstrating improved reversibility of battery charging and discharging processes; however, the intercalation mechanism of this neutral molecule in graphite is so far unknown. In this work, we combine scanning tunneling microscopy (STM) in ultra-high vacuum conditions, calculations based on density functional theory, and large-scale molecular dynamics simulations to reveal the mechanism of \(\mathrm{AlF_{3}}\) intercalation in highly oriented pyrolytic graphite (HOPG). We report the formation of \(\mathrm{AlF_{3}}\) molecule clusters between graphite layers, their self-assembly by graphene buckling-mediated interactions, and explain the origin and distribution of superficial _blisters_ in the material. Our findings have implications for understanding the relationship between the mobility and clustering of molecules and the expansion of the anode material. This, in turn, paves the way for future enhancements in the performance of energy storage systems.
Intercalation, Graphite, \(\mathrm{AlF_{3}}\), STM, DFT, Molecular dynamics, Battery
## I Introduction
Renewable and sustainable energy storage technologies are nowadays a strategy to mitigate climate change, environmental pollution, and fossil fuel scarcity.[1; 2] Electrochemical energy storage, particularly in rechargeable batteries, is considered one of the solutions to supply or back up clean electricity in portable devices. While lithium-based rechargeable batteries have been developed for various applications and successfully commercialized, much research has focused on exploring alternative materials that are abundant in nature and less reactive, which reduces the risk of self-ignition.[3; 4] In this sense, research on alternatives to lithium (Li) in rechargeable batteries is dominated mainly by systems based on sodium (Na),[5; 6] magnesium (Mg)[7; 8] or aluminum (Al)[9; 10; 11]. The anode material is just as important as the charge-carrier ions. Graphite shows potential as an anode material for rechargeable metal-ion batteries because of its high abundance and low cost[12; 13]. The systems with superior rate performance and cycling stability are obtained through a unique potassium-solvent co-intercalation mechanism in natural graphite[13]. Recently, graphite intercalation compounds (GICs) involving aluminum ions (Al-GICs) have emerged as a promising basis for rechargeable batteries due to their high gravimetric density, lower reactivity, and easy handling.[14; 15] Aluminum-ion rechargeable batteries (AIBs) have the redox property of involving three electrons during the electrochemical processes, resulting in a higher volumetric energy density than Li batteries, which is attracting attention from researchers.[9; 16; 17; 18; 19] While the most investigated compounds in this class are based on \(\mathrm{AlCl_{3}}\)[20; 21; 22; 23; 24], aluminum fluoride (\(\mathrm{AlF_{3}}\)) has recently been proposed as a potential candidate for electrolytes in batteries composed of graphite cathodes and aluminum anodes.[25; 26] However, one of the main challenges lies in understanding the mechanisms governing the electrochemical performance of graphene anodes for rechargeable batteries [12; 13]. Infrared spectroscopy and X-ray diffraction[13] measurements have demonstrated the intercalation and co-intercalation mechanism of large molecular complexes, yet a complete picture of the interplay of graphene interactions with molecules, and of the interactions between the molecules themselves, remains unclear. The reason for this is the inaccessibility of the system for in situ measurements under well-defined conditions.
In this paper, we focus on graphite as an anode material and on the \(\mathrm{AlF_{3}}\) molecule as an electrolyte, applying a two-fold multi-scale approach based on a systematic nondestructive experimental investigation of the intercalation process using scanning tunneling microscopy (STM) under ultra-high vacuum (UHV) conditions, and on large-scale molecular dynamics simulations of a multilayer graphite surface, complemented with density functional theory (DFT) calculations. While a complete outline of the intricate interactions of the various ions and molecular complexes remains elusive, the comprehensive findings concerning AlF\({}_{3}\) could provide valuable insights that can be extended to understand the intercalation process for a broader class of materials. On its own, AlF\({}_{3}\) has been proposed as an electrolyte in Al batteries with graphite cathodes, since cyclic voltammetry (CV) showed that it protects the electrode during the cycling process and improves the reversibility, durability, and charge transfer of the device.[25; 26; 27] The computational results suggest that using the AlF\({}_{4}^{-}\) anion instead of AlCl\({}_{4}^{-}\) could increase the specific capacity and operating voltage of an eventual rechargeable battery.[27] A notable experimental study of AlF\({}_{3}\) by Wang _et al.[25]_ documented a compelling phenomenon: the introduction of a small quantity of AlF\({}_{3}\) into the liquid electrolyte resulted in improved battery reversibility. The potential utilization of AlF\({}_{3}\) molecules as guest species within the graphite matrix not only provides a platform for understanding the solvent attributes of AlF\({}_{3}\) but also represents a route toward comprehending its incorporation dynamics. Recently, we found that AlF\({}_{3}\) molecules are incorporated through the step edges present on the HOPG surface, locally separating the carbon layers from the substrate without forming chemical bonds, and altering its local density of states (LDOS).[28] Our theoretical study on the bulk[29] showed that the AlF\({}_{3}\) molecule is energetically unstable on HOPG surfaces and instead intercalates in a stable non-planar configuration due to attractive van der Waals forces between the HOPG layers. Despite the progress in the study of AlF\({}_{3}\), the mechanism of intercalation of this molecule as a function of the concentration of intercalating molecules remains to be understood.
The results presented in this study support the concept of mixed staging for the intercalation of AlF\({}_{3}\) in graphite. We explore the topographical and electronic properties of AlF\({}_{3}\) molecules intercalating between HOPG layers. The STM images are acquired at room temperature (RT) for different exposure doses. The experimental results are compared with DFT calculations and molecular dynamics simulations, allowing us to draw conclusions about the interactions and structures within the material, which influence the mobility and clustering of the molecules, the expansion of the anode material, and the resulting performance of the system for energy storage, in particular in rechargeable batteries.
## II Experimental and theoretical setup
In Figure 1, we describe the methodologies employed to obtain and characterize the samples, based on our previous investigations [28]. The roughness of the substrate plays a determinant role in restricting the thickness of the AlF\({}_{3}\) interlayer aggregates, a phenomenon that occurs because the molecules diffuse from the stepped edges into the interlaminar space of the host material. Figure 1(a) shows a schematic lateral representation of the two HOPG qualities used in the experiments, specifically HOPG with a low and with a high density of stepped edges, identified as 'HOPG A' and 'HOPG B' respectively (Bruker grades I and II). Figure 1(b) illustrates the deposition, intercalation, and characterization process performed by STM under ultra-high vacuum (UHV) conditions. Figures 1(c) and (d) show the experimental data obtained by RBS (squares), accompanied by simulated spectra generated from fitting models (lines), together with a descriptive scheme of the various components used in the data fitting. The intercalation process originates predominantly from these stepped edges. As a result, in an ideal defect-free graphite (not depicted here), intercalation would not be feasible. In a more practical scenario, an 'A'-type HOPG sample results in a thin intercalated layer, while a 'B'-type leads to a thicker one. For our experiments, we selected substrates of the 'HOPG A' type, resulting in a 24 nm penetration depth of AlF\({}_{3}\). A summary of the relevant experimental results obtained in previously published RBS studies is available in the Supporting Information; for a more comprehensive perspective, refer to Candia et al.[28]
### STM-UHV
Highly ordered pyrolytic graphite (HOPG from Bruker, UK, 12 mm \(\times\) 12 mm \(\times\) 1 mm) substrates were used for all the experiments. Clean HOPG surfaces were obtained by tape cleavage in air and immediately loaded into the secondary reaction chamber of the STM-UHV system, to be transferred after the depositions to the main chamber. AlF\({}_{3}\) molecules (CERAC INC., Milwaukee, Wisconsin, USA, 99.5%) were thermally deposited along the HOPG surface normal from a Knudsen cell charged with the anhydrous salt heated at 900 K, placed 200 mm from the substrate. The deposition was achieved under UHV conditions (at a pressure in the high \(10^{-10}\) mbar range during evaporation) at a rate between \(6\times 10^{-3}\) and \(2\times 10^{-2}\) ML s\({}^{-1}\), keeping the substrate at room temperature (RT). STM imaging was performed using a homemade Beetle scanning tunneling microscope in a UHV chamber with a base pressure in the low \(10^{-10}\) mbar range. All STM measurements were acquired at room temperature in constant current mode using electrochemically etched tungsten (W) tips, with bias voltages between 200 and 800 mV and tunneling currents between 0.001 and 0.02 nA. These polycrystalline W tips were routinely cleaned by Ar\({}^{+}\) ion bombardment in UHV. Acquisition and image processing were performed using the WS\(\times\)M free software.[30]
### DFT
_Ab initio_ calculations in the framework of DFT were performed using the OpenMX3.9 package (Open source package for Material eXplorer),[31; 32] which incorporates norm-conserving pseudopotentials and pseudo-atomic localized orbitals (PAOs). The electronic exchange-correlation effects were treated within the generalized gradient approximation (GGA) using the functional proposed by Perdew, Burke, and
Ernzerhof (PBE) [33]. We used the DFT-D3 approach for the correction of van der Waals interactions [34]. Basis functions were created using a confinement scheme and labeled as Al7.0-s2p2d2, C6.0-s2p2d1, and F6.0-s3p3d2f1, where Al, C, and F denote the chemical element, followed by the cutoff radius (in Bohr radii), while the last set of symbols represents the primitive orbitals, e.g. p2 indicates the use of two orbitals for the p component.
A cut-off energy of 300 Ry in the numerical integration and solution of the Poisson equation and a **k**-point mesh of 3\(\times\)4\(\times\)1 were used for the self-consistency calculation. To study the changes in the graphite surface under AlF\({}_{3}\) intercalation, we employed three graphite layers --denoted L\({}_{1}\), L\({}_{2}\), and L\({}_{3}\)-- that adopt the Bernal AB stacking structure of HOPG, to model six systems denoted \(\mathrm{AlF_{3}}{}^{n}_{m}\), where the superscript \(n\) indicates the number of molecules intercalated between layers L\({}_{1}\) and L\({}_{2}\) and the subscript \(m\) the number of molecules intercalated between layers L\({}_{2}\) and L\({}_{3}\) (see Figure **S2** in the Supporting Information). The orthorhombic supercells have dimensions of a= 1.704 nm (**x**-axis), b= 1.721 nm (**y**-axis), and c= 3.00 nm (**z**-axis), with 336 carbon atoms belonging to the graphite layers. The initial interlayer spacing was set to 0.4 nm. Full structural relaxation of atomic positions was performed up to a convergence force below 0.02 eV/Å.
The redistribution of charge density induced by the interaction between graphite and AlF\({}_{3}\) molecules was defined as \(\Delta\mathrm{D}=\mathrm{D}_{\mathrm{AlF}_{3}|_{m}^{n}}-\mathrm{D}_{\mathrm{AlF}_{3}}-\mathrm{D}_{\mathrm{Gr}}\), where \(\Delta\mathrm{D}\) is the difference charge density, \(\mathrm{D}_{\mathrm{AlF}_{3}|_{m}^{n}}\) is the charge density of each of the aforementioned systems, and \(\mathrm{D}_{\mathrm{AlF}_{3}}\) and \(\mathrm{D}_{\mathrm{Gr}}\) are the charge densities of the isolated AlF\({}_{3}\) molecules and of pristine graphite, respectively. The charge transfer between graphite and the AlF\({}_{3}\) molecules was calculated via a Mulliken population analysis.
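As an illustration of how such a difference density is assembled in practice, the following Python sketch subtracts volumetric densities exported as Gaussian cube files; the file names are hypothetical, and all three densities are assumed to be computed on the same grid in the same supercell.

```python
import numpy as np

# Sketch of dD = D_sys - D_AlF3 - D_Gr from Gaussian .cube files
# (most DFT codes can export this format).
def read_cube(path):
    """Return the volumetric data of a Gaussian cube file as a 3-D array."""
    with open(path) as f:
        f.readline(); f.readline()                    # two comment lines
        natoms = abs(int(f.readline().split()[0]))    # atom count (+ origin)
        npts = [int(f.readline().split()[0]) for _ in range(3)]  # grid dims
        for _ in range(natoms):                       # skip atom records
            f.readline()
        data = np.array(f.read().split(), dtype=float)
    return data.reshape(npts)

d_sys = read_cube("AlF3_on_graphite.cube")    # intercalated system
d_mol = read_cube("AlF3_isolated.cube")       # isolated AlF3 molecules
d_gr = read_cube("graphite_pristine.cube")    # pristine trilayer

dD = d_sys - d_mol - d_gr
# Green/blue regions in Figs. 4(a,b) correspond to dD > 0 / dD < 0; a quick
# sanity check is that dD integrates to ~0 (no net charge is created).
print(dD.sum())
```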
Figure 1: (a) Schematic side view of the different HOPG roughness qualities used in the RBS experiments to determine the dependence of the vertical penetration depth of AlF\({}_{3}\) on the topology. (b) STM-UHV preparation and characterization of AlF\({}_{3}\) intercalated in the HOPG. (c) and (d) RBS experimental spectra (black squares) and fits (full lines). Insets: the model obtained from each fit, indicating the atomic percentage and depth of the intercalations. Schemes (a), (c), and (d) adapted from Candia et al. [28].
### MD
In our atomistic model, a graphite region of 25 nm \(\times\) 25 nm comprising seven layers of carbon atoms was filled with AlF\({}_{3}\) molecules. Periodic boundary conditions were imposed in the plane, leaving the system free along the axis orthogonal to the carbon layers. The bottom-most layer of carbon atoms is rigid and plays the role of the bulk, while all others are allowed to move thermally at 300 K; the bottom-most layer thus supports and mechanically stabilizes the other layers. Equal numbers of AlF\({}_{3}\) molecules are intercalated between each pair of the top six layers; no AlF\({}_{3}\) molecules are present between the bottom-most carbon layer and the adjacent mobile one, a choice set by where the model is cut. The density of AlF\({}_{3}\) molecules per layer varies between 0.015 and 0.03 AlF\({}_{3}\)/nm\({}^{2}\). We chose these densities to match the overall apparent area of the blisters observed in the experiments, namely Figures 2(a) and 2(b).
The interatomic forces within graphite were described using the AIREBO potential [35]. Interactions between AlF\({}_{3}\) molecules were modeled using Born-Mayer potential parameters [36]. The adhesion forces between the carbon atoms in graphite and the AlF\({}_{3}\) molecules were modeled as a pure van der Waals interaction with parameters \(\epsilon=6.68\) meV and \(\sigma=3.166\) Å, chosen in accordance with the energies and separations obtained from DFT.
The molecular dynamics (MD) simulations were performed using a time step of 0.5 fs and an NVT thermostat in LAMMPS, a widely used distributed classical MD code [37]. The results are shown after 1 ns, when the energy and structure of the system had stopped evolving.
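For concreteness, the cross-interaction quoted above corresponds to the standard 12-6 Lennard-Jones form; the short Python sketch below (our illustration, not part of the simulation input) evaluates the pair curve and recovers the equilibrium separation and well depth implied by \(\epsilon\) and \(\sigma\).

```python
import numpy as np

# Sketch of the C-AlF3 cross interaction used in the MD model: a pure
# van der Waals (12-6 Lennard-Jones) term with the parameters quoted in
# the text. This reproduces only the pair curve, not the full
# AIREBO + Born-Mayer force field.
EPS = 6.68e-3      # eV (6.68 meV)
SIGMA = 3.166      # Angstrom

def lj(r):
    """12-6 Lennard-Jones energy in eV for separation r in Angstrom."""
    s6 = (SIGMA / r) ** 6
    return 4.0 * EPS * (s6 * s6 - s6)

r_min = 2.0 ** (1.0 / 6.0) * SIGMA     # analytic minimum, ~3.55 Angstrom
print(f"r_min = {r_min:.3f} A, well depth = {-lj(r_min) * 1e3:.2f} meV")
# The equilibrium separation and well depth are the two numbers matched
# to the energies and separations obtained from DFT.
```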
Figure 2: Topography STM images (30 nm \(\times\) 30 nm) of HOPG after the deposition of (a) 1.2 and (b) 3 ML of AlF\({}_{3}\) at 300 K. The images were acquired with sample bias voltages of V\({}_{\mathrm{g}}\) = 500 and 375 mV and tunneling currents I\({}_{\mathrm{T}}\) = 0.001 and 0.002 nA, respectively. (c) and (d) Height thresholds obtained by applying the “flooding” procedure to images (a) and (b).
## III Results and discussion
In GICs, it is known that the guest species do not intercalate simultaneously in all the interlayer spaces of the host material. Instead, they do so according to a modulated pattern called _staging_.[38; 39] The stage number \(n\) is defined as the number of pristine layers of the material separating two consecutive intercalated interlayer spaces. Therefore, in _stage I_ graphite layers and guest layers alternate one by one; in _stage II_, two pristine graphite layers separate two guest layers; in _stage III_, three pristine graphite layers separate two guest layers, and so on. In general, for any GIC, the arrangement of the guest in the layers is not perfect since, depending on the interaction between the host material and the guest, different configurations can be observed, such as the formation of small domains of guest material between the layers. The most studied models to explain the intercalation mechanisms in GICs are those of Rüdorff-Hofmann (RH)[40] and Daumas-Hérold (DH).[41] The RH model proposes a sequential filling in which empty and guest-bearing layers alternate, without structural distortion of the graphite sheets. In contrast, the DH model proposes that the guest can intercalate between all layers of the host by deforming the material. Some authors have proposed the coexistence of different stages at the same time, i.e., a _mixed-stage_ model.[42] This could explain some anomalies observed in studies of lithium-ion intercalation in graphite from X-ray diffraction and entropy measurements for stage I and II experiments.[43; 44]
So far, it has been demonstrated experimentally and theoretically that the intercalation of AlF\({}_{3}\) in graphene generates deformations or "blisters" on the surface.[28] In this work, we focus on the intercalation mechanism of AlF\({}_{3}\) in HOPG and, to facilitate the discussion of the results, we divide this section into two parts: (i) _Morphology of the AlF\({}_{3}\) blisters in HOPG_, where we present the experimental arrangement of the blisters on the graphite surface using STM, and (ii) _Revealing the arrangement and size of the blisters in HOPG_, where we use theoretical DFT calculations and MD simulations to discuss the modeled topography of three and seven layers of graphite with intercalated molecules. We compare the experimental and theoretical results to determine the number of AlF\({}_{3}\) molecules that form the blisters. At the end of this section, we propose a potential mechanism for the intercalation of the AlF\({}_{3}\) molecules.
### Morphology of the AlF\({}_{3}\) blisters in HOPG
In order to experimentally explore the topography and electronic characteristics of the intercalation of AlF\({}_{3}\) molecules between the HOPG layers, we acquired STM images of the system at RT for different exposure doses. The deposition rate of AlF\({}_{3}\) molecules was varied between \(6\times 10^{-3}\) and \(2\times 10^{-2}\) ML s\({}^{-1}\), while the coverage ranged from 0.8 to 3.5 ML. Our analysis is based on data collected from four different experiments; however, in this section we only present the results at doses comparable with those modeled by molecular dynamics. In Figures 2(a) and 2(b), we show topography STM images of HOPG surfaces after the deposition of 1.2 and 3.0 ML of AlF\({}_{3}\), respectively. The results for the other two doses can be found in the Supporting Information. In both images, we observe bright regions on a dark brown background, corresponding to blisters generated by clusters of AlF\({}_{3}\) molecules intercalated with an inhomogeneous distribution beneath the topmost graphite layer.[28] The density of these clusters is higher for the more exposed sample, indicating a correlation between intercalation density and exposure dose. Comparing both images, it is apparent that the size of the blisters varies with the deposited dose. In addition, although the blister distribution across the graphite surface appears random for both doses, we found clusters of blisters characteristically located along a straight line, indicated by the black dashed-line rectangles in Figures 2(a) and 2(b). To clarify both the size and the positions of the blisters, a topographic flooding-type procedure was applied to the images of Figures 2(a) and 2(b) using the free WS\(\times\)M software;[30] the results are shown in Figures 2(c) and 2(d), respectively. This procedure allows us to identify, count, and measure the clusters of AlF\({}_{3}\) molecules, highlighting the areas that lie above a user-defined threshold. In our case, we set this threshold to 0.25 nm, sufficient to minimize the noise inherent to image acquisition while highlighting only the AlF\({}_{3}\) clusters without affecting their morphological characteristics. For ease of visualization, a color scheme was chosen to represent the cluster height. Focusing preferentially on clusters within the dashed-line rectangles, values between 2.5 - 3.0 nm and 0.3 - 0.5 nm are obtained for the diameters and apparent heights, respectively. At the same time, the average distance between the nearest blisters in each correlated set (compare the dashed-line rectangles in Figures 2(a) and 2(b)) clearly decreases from 5 to 1.5 nm when the AlF\({}_{3}\) dose is increased. Figure 3 shows the evolution of the density of blisters (number per area, nm\({}^{-2}\)) formed on the substrate and their average apparent area as a function of dose. As expected, the blister density on the surface increases monotonically with the dose. The trend for the average apparent area of a blister is different: there is a significant increase at low doses (from 0.8 to 1.2 ML), followed by a plateau in the size of a single blister.
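The flooding procedure can be emulated outside WS\(\times\)M; the following Python sketch (an approximation of the analysis, with a hypothetical image size and synthetic data standing in for a real scan) thresholds a height map at 0.25 nm, labels connected regions, and extracts the blister density, areas, apparent heights, and centroids used in Figure 3.

```python
import numpy as np
from scipy import ndimage

# Sketch of a "flooding"-type analysis: threshold the STM topograph,
# label connected regions, and measure each blister. `topo` is a 2-D
# height map in nm; for a 30 nm x 30 nm image of 512 x 512 pixels the
# pixel area is (30/512)**2 nm^2.
def blister_stats(topo, px_nm=30.0 / 512.0, threshold=0.25):
    mask = topo > threshold
    labels, n = ndimage.label(mask)                  # connected blisters
    idx = np.arange(1, n + 1)
    areas = ndimage.sum(mask, labels, idx) * px_nm**2         # nm^2
    heights = ndimage.maximum(topo, labels, idx)              # apparent h
    centers = np.array(ndimage.center_of_mass(mask, labels, idx)) * px_nm
    density = n / (topo.shape[0] * topo.shape[1] * px_nm**2)  # nm^-2
    return density, areas, heights, centers

# Example with synthetic data standing in for a real image:
rng = np.random.default_rng(0)
topo = rng.normal(0.0, 0.03, (512, 512))     # noise floor well below 0.25 nm
topo[100:110, 200:212] += 0.4                # one fake blister
density, areas, heights, centers = blister_stats(topo)
print(density, areas, heights)
```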
Given that STM probes only the topography of the outermost atomic layer of the system, achieving a definitive understanding of the intercalation dynamics is a substantial challenge. Our recent publications have confirmed that AlF\({}_{3}\) molecules enter the substrate via step edges and surface defects, inducing a local separation between the carbon layers of HOPG without forming chemical bonds, yet influencing the local density of states (LDOS); nevertheless, our understanding of the dynamics of this intercalation process remains partial. Therefore, in the following sections we address this topic in depth. Using density functional theory (DFT) and molecular dynamics (MD) calculations, we present a comprehensive interpretation of the intercalation mechanism that is consistent with the results obtained from scanning tunneling microscopy (STM) at the surface.
### Revealing the arrangement and size of the blisters in HOPG
To understand the dynamics inherent to the intercalation mechanism, and to determine whether the RH, DH, or mixed-stage models fit the system under study, DFT calculations and MD simulations were performed by modeling three and seven layers of graphite with intercalated AlF\({}_{3}\) molecules, respectively. Figures 4(a), 4(b), and 4(c) show the results obtained for the geometric optimization of two intercalated systems by DFT. In Figures 4(a) and 4(b), we depict side views of the AlF\({}_{3}\)\(|_{2}^{2}\) and AlF\({}_{3}\)\(|_{1}^{3}\) systems, respectively, along with a top view in Figure 4(c) showing the relaxed geometry of the clusters with two and three molecules involved in each of the aforementioned models. In addition, Figures 4(a) and 4(b) show, superimposed, the charge density differences induced by the interaction between the molecules and the graphite, represented by blue and green shaded regions. Figures 4(d) and 4(e) provide the MD results, where top and side views of the spatial arrangement of the clusters are shown for two densities of molecules per layer, 0.015 and 0.03 AlF\({}_{3}\)/nm\({}^{2}\), respectively. To facilitate visualization, we have color-coded the clusters according to their height along the **z**-axis, set to zero at the bottom-most graphite layer. Figures 4(f) and 4(g) show color maps of the topography of the topmost layer produced by the intercalated clusters shown in Figures 4(d) and 4(e). In this case, the height difference (\(\Delta\)z) is measured from the level of the topmost unperturbed graphite layer. Since intercalation mechanisms are complex processes, we discuss our results in terms of (1) _in-plane interactions_, defined as those produced between molecules located in the same layer of the host material, and (2) _interlayer interactions_, established between molecules located in different layers of the host material.
#### iii.2.1 **In-plane interactions**
From the DFT calculations, we determined that the intercalation process causes a charge transfer of up to 2.63 e\({}^{-}\) from the graphite to the molecules for three-molecule clusters (see more details in the Supporting Information, section 3A). The electronic transfer redistributes the charges by polarizing the graphite sheets, which leads to the formation of local "transverse dipoles" between the molecules and the graphite, where the molecules gain electrons while the graphite loses them. The transverse dipoles calculated by DFT are visualized in the shaded regions of Figures 4(a) and 4(b), where the green regions correspond to difference charge density \(\Delta\)D\(>\)0, i.e., higher electron density after the charge transfer due to intercalation, and the blue regions correspond to difference charge density \(\Delta\)D\(<\)0, i.e., lower charge density upon intercalation [45]. These dipoles, which have components mainly along the **z**-axis, locally separate the graphite layers and elastically deform the material, playing a relevant role in the interactions between molecules in the same **xy**-plane, as described below (for more details on the theoretically calculated heights, see Table S1 of the Supporting Information).
According to the DFT and MD results, the AlF\({}_{3}\) molecules approach each other, forming small clusters parallel to the graphite **xy**-plane. Each intercalated molecule stretches the C-C \(\sigma\) bonds of graphene by up to 2%. One way for the system to reduce its elastic energy is to push individual molecules into clusters. In turn, the elastic deformation of graphite results in an effective attractive force between molecules in the same plane. The pressure generated by the deformation of the layers is transferred into the internal pressure of the blisters.[46; 47] In this sense, the deformation of graphite drives the kinetics of the clustering and coalescence of AlF\({}_{3}\) molecules at a given graphite plane. Since the blisters obtained theoretically and experimentally have heights and radii of less than 1.0 and 1.5 nm, respectively, we estimate the pressure inside the blisters from the linear plate model [48], using the expression of eq. S2 (see more details in the Supporting Information). According to this model, the pressure, expressed in terms of the adhesion energy, is inversely proportional to the square of the radius of the blister.[47] Consequently, the pressure is higher in small blisters, and the pressure difference drives the molecules to diffuse from the smaller blisters to the larger ones. In this work, using the linear plate model and DFT, we estimate pressures of 6.40, 4.06, and 3.01 GPa for blisters generated by the intercalation of groups of one, two, and three AlF\({}_{3}\) molecules, respectively --these values are on the order of the pressures reported by Wang _et al._[46] and Villareal _et al._[49] for blisters with radii smaller than 1 nm. It is important to note, both from the experimental (Figure 3) and theoretical results (Figures 4(f) and 4(g)), that no **xy**-plane contains blisters with diameters larger than 3 nm. This appears to indicate the existence of a critical coalescence value. This outcome is mainly due to the elastic deformation and the generation of transverse dipoles by the
Figure 3: Variation of the blister density and average apparent blister area as a function of the dose of AlF\({}_{3}\) molecules deposited on a HOPG substrate at 300 K.
molecules between adjacent layers of the graphite. This critical coalescence number could explain why the average area of the blisters remains almost constant for doses higher than 1.2 ML, as shown in Figure 3.
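To make the plate-model scaling explicit, the sketch below encodes only the inverse-square dependence of the pressure on the blister radius quoted above, calibrated to the DFT value for the one-molecule blister; the radii are illustrative placeholders, since the actual prefactor follows from eq. S2 of the Supporting Information.

```python
# Sketch of the linear-plate scaling: the internal pressure of a blister
# scales with the adhesion energy Gamma and inversely with the square of
# its radius, p ~ k * Gamma / R**2. Here we simply pin k * Gamma to the
# DFT value for the single-molecule blister (6.40 GPa) and illustrate how
# the pressure drops for larger blisters. The radii below are
# hypothetical placeholders, not fitted values from the paper.
P1_GPA = 6.40     # DFT estimate for a one-molecule blister
R1_NM = 0.50      # assumed radius of that blister (placeholder)

def pressure(radius_nm):
    """Inverse-square plate-model scaling, pinned to the 1-molecule case."""
    return P1_GPA * (R1_NM / radius_nm) ** 2

for r in (0.50, 0.63, 0.73):    # placeholder radii for 1-3 molecules
    print(f"R = {r:.2f} nm -> p = {pressure(r):.2f} GPa")
```

With these placeholder radii the scaling reproduces pressures close to the 6.40, 4.06, and 3.01 GPa values quoted above, illustrating why smaller blisters are at higher pressure and feed molecules into larger ones.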
By analyzing the STM images together with our DFT and MD results, we estimate that blisters with diameters in the range of 1.3 to 2.0 nm are made up of clusters of two to six AlF\({}_{3}\) molecules. The size of the blisters formed between the L\({}_{1}\) and L\({}_{2}\) layers depends on the concentration of intercalated molecules. According to the MD results, when the density of molecules per layer is 0.015 AlF\({}_{3}\)/nm\({}^{2}\), the blisters on the surface have smaller diameters; in contrast, for a density of 0.03 AlF\({}_{3}\)/nm\({}^{2}\), the diameters increase. This is consistent with the experimental data presented in Figure 3, which indicate that the most significant increase in the average area of the blisters occurs for doses below 1.2 ML.
Regarding the height of the blisters, according to the topography modeled with three graphite layers by DFT and seven layers by MD, the clusters deform the graphite layer such that the blister reaches an apparent height of 0.22 nm (see Figure 4). It should be noted that, experimentally, the apparent height could be larger, since the innermost clusters could push everything upwards and, in addition, the height could be overestimated during acquisition due to unknown tip conditions.[46]
#### iii.2.2 Interlayer interactions
Figures 4(d) and 4(e) depict the spatial arrangement of the clusters along the **z**-axis. The atoms of the molecules are colored according to their vertical position, from blue for the molecules in the lowest layer (\(z=0\) nm) to red for those in the uppermost layer (\(z=2\) nm). For neither of the two AlF\({}_{3}\) densities is the average size of the clusters observed to vary with depth. This is evident in Figure 4(d), where the clusters in the lower layers (blue) exhibit sizes similar to those in the upper layers (red). The most striking feature, however, is the alignment of the clusters obtained in the MD simulations. The topography images of the MD simulations in Figures 4(f) and 4(g) clearly show the formation of blisters of different diameters in the topmost layer, which are aligned and separated by about 4 nm from each other in the **xy**-plane. The self-organization of the clusters along a line is observed for both modeled densities. The alignment direction of the blisters is random and does not correlate with any crystallographic orientation of the HOPG, as indicated by the dashed boxes in the molecular dynamics figures. Such random alignment is also
Figure 4: (a)-(c) Theoretical results from DFT. (a) and (b) Geometric optimizations and difference charge densities for the AlF\({}_{3}|_{2}^{2}\) and AlF\({}_{3}|_{1}^{3}\) systems, respectively, where the superscript indicates the number of molecules intercalated between layers L\({}_{1}\) and L\({}_{2}\) and the subscript the number of molecules intercalated between layers L\({}_{2}\) and L\({}_{3}\). Green regions correspond to \(\Delta\)D\(>\)0, i.e., higher electron charge density due to intercalation, and blue regions correspond to \(\Delta\)D\(<\)0, i.e., lower charge density due to intercalation. (c) Top view of the AlF\({}_{3}|_{2}^{2}\) and AlF\({}_{3}|_{1}^{3}\) systems. (d)-(g) Theoretical results from MD. (d) and (e) Top and side views of the spatial arrangement of the clusters for two densities of AlF\({}_{3}\) molecules per layer, 0.015 AlF\({}_{3}\)/nm\({}^{2}\) and 0.03 AlF\({}_{3}\)/nm\({}^{2}\), respectively --to facilitate visualization, the clusters are colored according to their height along the z-axis. (f) and (g) Topography of the topmost layer after the formation of the intercalated clusters, for the same densities per layer.
evident in the experimental STM image; see the dashed-line box in Figure 2(b). A relevant aspect of these MD results is the role played by the innermost clusters, since the alignment of the blisters on the surface depends on them; see the insets of Figures 4(d) and 4(e). Below the surface, the formation of clusters between the different layers prevents larger structures from forming within a given layer. As a result of this interlayer self-assembly, blisters in the topmost layer remain approximately 4 nm apart from each other, despite the elastic forces that attempt to coalesce them; see Figure 4(f). Since the MD simulations reproduce the experimental characteristics of the blisters in terms of their alignment and size distribution, they can be employed to gain a deeper understanding of the processes taking place within the clusters below the surface. According to MD, clusters between neighboring layers do not stack vertically, owing to the elastic deformation of graphite and the repulsive interaction of the local transverse dipoles. The insets in Figures 4(d) and 4(e), which show side views of the highlighted regions, reveal the depth distribution of the clusters that form the blisters observed in the topmost layer (shown in Figures 4(f) and 4(g), respectively). Each region framed by the dashed-line boxes represents a superstructure formed by clusters intercalated between different pairs of layers and displaced laterally from each other. A "local staging" is observed in each superstructure, indicating the coexistence of "mixed stages". For instance, for the dilute system of 0.015 AlF\({}_{3}\)/nm\({}^{2}\), the organization of the molecules combines stages III and IV, while for the densest system of 0.03 AlF\({}_{3}\)/nm\({}^{2}\), it combines stages IV and V. This arrangement gives rise to an aligned distribution of the clusters and explains the average distance of 4 nm between blisters in the topmost layer in Figures 4(f) and 4(g), as well as in the corresponding experimental STM image of Figure 2(b).
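The degree of alignment can be quantified directly from the blister centroids (e.g., the `centers` array of the flooding sketch above); the Python sketch below (our illustration, with placeholder coordinates) fits a line to a candidate chain by principal component analysis and reports the perpendicular scatter and mean nearest-neighbor spacing, which for the superstructures discussed here is about 4 nm.

```python
import numpy as np

# Sketch: quantify the local alignment of blisters from their centroids.
# For a candidate chain we fit a line by PCA and report the RMS
# perpendicular scatter and the mean spacing along the line; aligned
# superstructures (dashed boxes in Figs. 2 and 4) show small scatter.
def chain_metrics(centers):
    c = centers - centers.mean(axis=0)
    # principal axis = eigenvector of the 2x2 covariance, largest eigenvalue
    w, v = np.linalg.eigh(np.cov(c.T))
    axis, normal = v[:, 1], v[:, 0]
    rms_perp = np.sqrt(np.mean((c @ normal) ** 2))   # scatter off the line
    t = np.sort(c @ axis)                            # order along the line
    spacing = np.mean(np.diff(t))                    # mean NN spacing
    return rms_perp, spacing

pts = np.array([[2.0, 3.1], [5.9, 4.9], [10.1, 7.2], [13.8, 9.0]])  # nm
print(chain_metrics(pts))   # small rms_perp -> blisters lie along a line
```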
## IV Conclusions
In this work, we employed a combination of scanning tunneling microscopy (STM), density functional theory (DFT) calculations, and molecular dynamics (MD) simulations to gain insight into the process of AlF\({}_{3}\) intercalation in highly oriented pyrolytic graphite (HOPG). Experimentally, we investigated the intercalation mechanism of AlF\({}_{3}\) molecules thermally dosed perpendicular to the HOPG surface at room temperature under ultra-high vacuum conditions (in the high range of 10\({}^{-10}\) mbar). STM images obtained for varying molecule doses revealed that the blisters are not uniformly distributed over the graphite surface. In addition, some blisters were observed to align locally, with random orientations that do not follow any preferential crystallographic direction; see the line and dashed rectangle in Figure 2(b). Since STM only probes the surface of the system, we supplemented our experimental results with theoretical DFT calculations and MD simulations, which allowed us to propose a plausible explanation for the local alignment of the blisters observed on the surface.
We used DFT and MD simulations to study the in-plane and interlayer interactions between AlF\({}_{3}\) molecules and graphite. The in-plane interactions concern the arrangement of AlF\({}_{3}\) molecules between two graphite layers. The findings include the formation of AlF\({}_{3}\) clusters, resulting in localized elastic deformation of the graphite layers. Charge transfer between graphite and the AlF\({}_{3}\) molecules induces the formation of transverse dipoles. The molecules tend to form clusters between two graphite layers, driven by pressure differences. Pressures of 6.40, 4.06, and 3.01 GPa, stemming from the adhesive van der Waals forces between the layers, are estimated from DFT calculations for blisters containing one, two, and three intercalated molecules, respectively. The interlayer ordering of the clusters reveals that clusters formed in deeper layers of the material direct the alignment of the surface blisters. These three-dimensional superstructures extend several layers into the depth of the material and connect, through the bulk, surface blisters arranged along a local line. The movement of the observed surface clusters is therefore constrained by neighboring deeper clusters. Our model-based findings provide insight into the arrangement and behavior of blisters on the material's surface.
An important finding is the existence of superstructures, visible in the experiment as surface blisters arranged along a local line and separated by an average of 4 nm. Every cluster affects the layers around it. Because of neighboring clusters in the graphite layers below or above, AlF\({}_{3}\) clusters are pinned together and can neither approach each other nor grow. Nor can they stack on top of one another, since stacking would sharply increase the elastic energy required to deform the graphite layers. As a result, the clusters group laterally next to each other, sharing the deformation of the graphene to reduce the elastic energy. Daumas and Hérold postulated a model in which the graphene layers are flexible and deform around clusters of the intercalated species.[41] In the Rüdorff-Hofmann model, the layers are not elastic, and molecules intercalate by separating them.[40] Our results underline the effect of the elasticity (deformability) of the host material (graphite) on the evolution of intercalation. Based on our results, clusters arrange via elastic interactions in graphene. The local deformations of the graphene sheets and the resulting local strains lead to mixed stages, since neighboring clusters are separated by one layer of graphene while, simultaneously, superstructures are separated by several nanometers. We can deduce the outcome of this process at higher densities: one expects the clusters to grow while avoiding vertical stacking. The process will continue until the distances between the AlF\({}_{3}\) clusters decrease sufficiently and the density of AlF\({}_{3}\) within one layer becomes sufficiently high. At this point, the clusters will coalesce into full AlF\({}_{3}\) layers, removing the elastic strain on the host (graphite) layers.
## V Supporting Information
The supporting information is divided into three sections. The first summarizes previous RBS results on the system under study. The second section shows the experimental STM images of the HOPG before and after the evaporation of 0.8 and 3.2 ML of AlF\({}_{3}\). Finally, in the third section, we present the computational details of the six models studied by DFT, reporting the formation energies, charge transfer, and some distances of interest between the molecules. In addition, we include a detailed calculation of the pressures of the blisters formed by one, two, and three intercalated molecules.
## VI Acknowledgement
The authors acknowledge the financial support of the Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET) through grant PIP 2021-0384, the Agencia Nacional de Promoción Científica y Tecnológica (ANPCyT) through grant PICT-2019-04545, and the Universidad Nacional del Litoral through grant CAI+D 2020-50620190100016Li. The present work used computational resources of the Pirayu cluster, funded by the Agencia Santafesina de Ciencia, Tecnología e Innovación (ASACTEI), Province of Santa Fe, Argentina, through grant AC-00010-18, resolution N\({}^{\circ}\)117/14. This equipment is part of the National System of High-Performance Computing of the Ministry of Science and Technology, Argentina. I.S. acknowledges the support of the Ministry of Education, Science, and Technological Development of the Republic of Serbia through the Institute of Physics Belgrade. Molecular dynamics calculations were run on the PARADOX supercomputing facility at the Scientific Computing Laboratory, Center for the Study of Complex Systems of the Institute of Physics Belgrade. Last but not least, the authors acknowledge the financial support of the European Commission through the project ULTIMATE-I, grant ID 101007825.
|
2303.15423 | Comment on "Comment on "Traversable wormhole dynamics on a quantum
processor" " | We observe that the comment of [1, arXiv:2302.07897] is consistent with [2]
on key points: i) the microscopic mechanism of the experimentally observed
teleportation is size winding and ii) the system thermalizes and scrambles at
the time of teleportation. These properties are consistent with a gravitational
interpretation of the teleportation dynamics, as opposed to the late-time
dynamics. The objections of [1] concern counterfactual scenarios outside of the
experimentally implemented protocol. | Daniel Jafferis, Alexander Zlokapa, Joseph D. Lykken, David K. Kolchmeyer, Samantha I. Davis, Nikolai Lauk, Hartmut Neven, Maria Spiropulu | 2023-03-27T17:45:47Z | http://arxiv.org/abs/2303.15423v1 | # Comment on "Comment on 'Traversable wormhole dynamics
###### Abstract
We observe that the comment of Kobrin _et al._[1] is consistent with Jafferis _et al._[2] on key points: i) the microscopic mechanism of the experimentally observed teleportation is size winding and ii) the system thermalizes and scrambles at the time of teleportation. These properties are consistent with a gravitational interpretation of the teleportation dynamics, as opposed to the late-time dynamics. The objections of Kobrin _et al._[1] concern counterfactual scenarios outside of the experimentally implemented protocol.
1. The first scenario of [1] asks about times after the teleportation: they extend the dynamics via single-sided evolution to conclude that the learned Hamiltonian does not thermalize. We find that wormhole teleportation persists despite the addition of a non-commuting perturbation that reduces single-sided revivals. Moreover, we show thermalization under the learned eternal traversable wormhole Hamiltonian, which, like the experimentally implemented protocol, couples the left and right systems.
2. The second scenario of [1] asks about teleporting different fermions: they claim no size winding occurs on untrained fermions. We find all fermions are teleported by size winding albeit at different times. A strongly gravitational signature emerges: fermions that thermalize more slowly exhibit size winding at later times, consistent with a holographic interpretation of less massive fermions having a more delocalized wavepacket, taking them longer to traverse the wormhole.
3. The third scenario of [1] asks about different Hamiltonians: they find similar behavior in random commuting Hamiltonians. Besides identifying technical issues in their analysis, we note that an ensemble of similar Hamiltonians to the learned Hamiltonian should exist and exhibit similar properties. We also show the commuting structure is unrelated to the presence of gravitational physics by adding a large non-commuting term and finding that size winding is preserved.
Footnote †: preprint: CaltechAUTHORS:20230316-190343094; FERMILAB-PUB-23-120-ETD
## I Introduction
The purpose of Jafferis _et al._[2] was to perform quantum many-body teleportation with a traversable wormhole interpretation. This requires satisfying key properties _at the time of the teleportation_. Before the interaction between the left and right systems is applied, the transmitted fermions must thermalize and scramble in the left system, as defined by the decay of two-point and out-of-time-order correlators. The fermions must spread through the left system via operator growth with the particular phase coherence described by size winding. Applying the interaction must then reverse the direction of the size winding, ensuring that time evolution causes the fermions to unthermalize and unscramble on the right system. We observe that the numerical results of Kobrin _et al._[1] are fully consistent with this picture.
Kobrin _et al._[1] raise objections over a counterfactual scenario regarding the dynamics of a single-sided system with no interaction to perform the traversable wormhole protocol. We show that introducing a large non-commuting perturbation that damps revivals of the single-sided system at late times has minimal impact on transmission dynamics at the time of teleportation; this shows a decoupling between the issue of late-time dynamics and a gravitational interpretation of the teleportation. Moreover, we consider a gravitationally interpretable counterfactual scenario: the evolution of the coupled left-right system, corresponding to an eternal traversable wormhole [3]. The wormhole teleportation protocol proposed in [4] and implemented in our experiment is strictly equivalent to a single Trotter step of time evolution under the eternal traversable wormhole Hamiltonian. Due to the left-right interaction, we find that this coupled system thermalizes at high temperature as expected from the gravitational interpretation [3].
A second counterfactual discussed by [1] is the teleportation of different fermions: they claim that the learned Hamiltonian is biased to have good size winding only on the two fermions implemented in the experiment. This objection is not directly relevant to general holographic models, which can have very different properties across different fermions; not all fermions must have size winding.
Nevertheless, in the learned Hamiltonian, we find that all fermions show size winding at times \(2\lesssim t\lesssim 5\); their claim is an artifact of only analyzing size winding at \(t=2.8\). We find that fermions that thermalize more slowly in the eternal traversable wormhole Hamiltonian achieve good size winding at later times. This is a gravitationally meaningful feature associated with different masses across fermions: a lighter fermion has a delocalized wave packet that causes it to thermalize more slowly and take longer to traverse the wormhole, hence exhibiting size winding later. A late time corresponds to a boost in the near-horizon region, which makes the wavepacket more localized via length contraction. We view this time dependence as evidence for a holographic interpretation of the learned Hamiltonian. We note that the size winding behavior (\(2\lesssim t\lesssim 5\)) occurs before the model experiences revivals from its commuting structure. To ensure gravitational dynamics at later times (\(t\gtrsim 10\)), one needs to apply a perturbation such as a left-right coupling or non-commuting Floquet evolution, as described in the previous paragraph.
Finally, Kobrin _et al._[1] comment on the properties of random Hamiltonians of the same structure as the learned Hamiltonian [2]. Below, we note technical issues with this analysis and find that a strict minority of random Hamiltonians have as good size winding as the learned Hamiltonian at the time of teleportation. Crucially, however, we see no contradiction in non-bulk explanations of perfect size winding. We expect similar Hamiltonians to exhibit similar properties. Moreover, the commuting structure of the Hamiltonian is not central to achieving size winding properties. We provide a large perturbation of the learned Hamiltonian that thermalizes but has similar size winding and teleportation behavior, suggesting that the commuting structure is irrelevant to the presence of gravitational physics.
## II Gravitational dynamics after teleportation
In the setting of a traversable wormhole, the teleportation time of the coupled system defines the relevant timescale of the dynamics. On this timescale, the two-point and out-of-time-order correlators decay for individual fermions and for the average over all fermions. Our usage of "thermalization" and "scrambling" in the main text of [2] reflects the properties of the learned Hamiltonian on the timescale of the teleportation (\(t\approx 2.8\)). At \(t=2.8\), we found that the two Majorana fermions being teleported each spread onto 8 operators, with the largest operator of size 5. (In the absence of the commuting structure, each fermion would spread onto 36 operators in an \(N=7\) SYK model.)
To build a minimal example of a system that is interpretable as a traversable wormhole, the learning procedure coarse-grained out details concerning behavior after the teleportation time. Consequently, different counterfactual scenarios that occur after the teleportation can be subject to different interpretations. Kobrin _et al._[1] considered a direct extension of the system dynamics: evolving the single-sided Hamiltonians independently. Due to the commuting structure of the Hamiltonian, the two-point function exhibits revivals that prevent thermalization and hence prevent interpreting each single-sided system as a black hole. Since these revivals occur by time \(t\approx 10\), dynamics such as the mutual information can show non-gravitational behavior at times \(t_{0}+t_{1}\gtrsim 10\).
The revivals of the single-sided system after the time of teleportation can be straightforwardly suppressed by introducing a periodic non-commuting perturbation, producing behavior more similar to the SYK model as we shall demonstrate in Sec. IV. We will find that although the late-time dynamics are now governed by a non-commuting Hamiltonian, the behavior of the system at the time of teleportation (\(t\approx 2.8\)) is unchanged. In particular, the teleportation dynamics at \(t\approx 2.8\) remain consistent with the expected gravitational signature. This shows that the late-time thermalization of the model -- satisfying the single-sided counterfactual proposed by [1] -- is a feature that is decoupled from the existence of gravitational dynamics at early times. For a full discussion of this decoupling, see Sec. IV.
Here, we examine a different counterfactual scenario that leads to meaningful gravitational behavior. We observe that the entire coupled system
\[H_{\rm tot}=H_{L}+H_{R}+\mu_{\rm MQ}H_{\rm int},\qquad H_{\rm int}=i\sum_{j}\psi_{L}^{j}\psi_{R}^{j}\]
behaves as expected from the eternal traversable wormhole interpretation [3]. Note that the protocol implemented in the traversable wormhole experiment is equivalent, at the circuit level, to a first-order Trotterization of
Figure 1: **Thermalization and operator growth.****(a)**, The left-left two-point function of the coupled eternal traversable wormhole Hamiltonian (\(\beta=0.001\)). The two fermions teleported in the traversable wormhole experiment are shown in black (\(\psi^{1},\psi^{2}\)); the two fermions identified by [1] to have poor size winding are shown in red (\(\psi^{4},\psi^{7}\)). Fermions \(\psi^{4},\psi^{7}\) are the slowest to thermalize. **(b)**, The single-sided operator growth of individual fermions. Solid lines show support \(p(l)\) (Eq. 1) over operators of size \(l=3\), and dashed lines show support over operators of size \(l=5\). Fermions \(\psi^{4}\) and \(\psi^{7}\) grow the slowest, indicating that they traverse the wormhole later.
\(e^{-iH_{\rm tot}t}\) with a single Trotter step. The Hamiltonian \(H_{\rm tot}\) exhibits operator growth and thermalizes at high temperature (Fig. 1a), as is expected of the eternal traversable wormhole. No revivals occur in the two-point function, even at late times, due to the presence of \(H_{\rm int}\). At low temperature, since the ground state of \(H_{\rm tot}\) is the thermofield double state and has an \(O(1)\) gap to the first excited state, the two-point function appropriately exhibits larger oscillations.
In Fig. 1a, a diversity of decay rates of the two-point function is seen across different fermions; highlighted in red are the two fermions that decay the slowest. The gravitational interpretation of thermalization rates is that each fermion corresponds to a different mass. Since the wave packet of a lighter fermion is more spread out, its two-point function decays more slowly.
We check this behavior by examining the single-sided system \(H_{L}\). Inserting a fermion onto the TFD and time evolving under \(H_{L}\) should result in operator growth, with lighter fermions growing more slowly. In Fig. 1b, we see that the fermions that exhibit the slowest operator growth in \(H_{L}\) are precisely the same fermions that thermalize the slowest in \(H_{\rm tot}\). When examining size winding in the following section, we shall define the precise measure of operator growth (Eq. 1) and observe further behavior consistent with the gravitational interpretation of fermions with different masses.
## III Size winding
The size winding analysis included in the main text of [2] depicts the first fermion of the learned Hamiltonian. Here, we also show the size winding of all fermions at different times.
A holographic dual need not have size winding across all fermions; if a subset of fermions has additional non-gravitational interactions, they would not necessarily exhibit size winding. Consequently, the absence of size winding on fermions other than those teleported has no known implication for the holographic dual of the teleported fermions.
Attaching different masses to the fermions has an implication for size winding. Since lighter fermions traverse the wormhole more slowly, the event of traversing the wormhole occurs later. Consequently, the microscopic mechanism of size winding should occur at later times. In Fig. 2a, we see this expectation holds for the learned Hamiltonian.
Size winding decomposes a time-evolved fermion \(\psi_{L}^{j}(t)\) over Majorana strings of size \(|P|\)
\[\rho_{\beta}^{1/2}\psi_{L}^{j}(t)=\sum_{P}c_{P}(t)\psi_{L}^{P},\]
where the support of the fermion on operators of size \(l\) is given by
\[p(l,t)=\sum_{|P|=l}|c_{P}(t)|^{2} \tag{1}\]
and the winding size distribution is given by
\[q(l,t)=\sum_{|P|=l}c_{P}(t)^{2}. \tag{2}\]
We measure the quality of size winding by evaluating the linearity of the size winding phases \(\arg q(l,t)\); in perfect size winding at time \(t\), \(\arg q(l,t)\) is linear in \(l\). In Fig. 2b, we show the phases of each of the fermions at the time they are most linear. This demonstrates that all fermions eventually achieve near-perfect linearity [5].
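For concreteness, the sketch below (illustrative Python, with placeholder coefficients rather than data from the learned Hamiltonian) computes \(p(l)\), \(q(l)\), and a simple linearity score for \(\arg q(l)\) from a table of Majorana-string coefficients.

```python
import numpy as np

# Sketch of Eqs. (1)-(2): given the Majorana-string coefficients c_P of
# rho^{1/2} psi_L(t), compute the size distribution p(l), the winding
# distribution q(l), and an RMS residual of a linear fit to arg q(l).
# Strings are represented as tuples of Majorana indices, so |P| = len(P).
def winding_metrics(coeffs):
    sizes = sorted({len(P) for P in coeffs})
    p = {l: sum(abs(c) ** 2 for P, c in coeffs.items() if len(P) == l)
         for l in sizes}                      # Eq. (1): sum of |c_P|^2
    q = {l: sum(c ** 2 for P, c in coeffs.items() if len(P) == l)
         for l in sizes}                      # Eq. (2): sum of c_P^2
    ls = np.array(sizes, dtype=float)
    ph = np.unwrap([np.angle(q[l]) for l in sizes])
    slope, intercept = np.polyfit(ls, ph, 1)  # perfect winding: linear phase
    resid = ph - (slope * ls + intercept)
    return p, q, np.sqrt(np.mean(resid ** 2))

# Placeholder coefficient table (not from the learned Hamiltonian):
coeffs = {(1,): 0.5 + 0.1j,
          (1, 2, 3): 0.3 * np.exp(1j * 0.8),
          (1, 2, 3, 4, 5): 0.2 * np.exp(1j * 1.6)}
p, q, rms = winding_metrics(coeffs)
print(rms)   # ~0 for a perfectly wound (linear-phase) distribution
```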
Kobrin _et al._[1] observe that fermions \(\psi^{4}\) and \(\psi^{7}\) (colored red) have poor size winding at \(t=2.8\). We see in Fig. 2b that those fermions achieve near-perfect linearity of size winding phases at slightly later times (\(t\approx 4\), shown in Fig. 2a). Those fermions also thermalize more slowly in the coupled system (Fig. 1a) and experience slower operator growth in the single-sided system (Fig. 1b). This is consistent with interpreting them as taking longer to traverse the wormhole. While the
Figure 2: **Size winding of individual fermions.****(a)**, The time of best size winding vs. the value of the two-point function shown in Fig. 1(a) at \(t=2.8\). Fermions that thermalize more slowly have good size winding at later times. As in Fig. 1, fermions \(\psi^{4}\) and \(\psi^{7}\) are shown in red, and fermions \(\psi^{1}\) and \(\psi^{2}\) are shown in black. **(b)**, The size winding phase \(\arg q(l)\) (Eq. 2) for each of the fermions at the time of their best size winding. All fermions achieve near-linear phase dependence. **(c)**, The ratio \(|q(l)|/p(l)\) for all fermions. **(d)**, The wormhole teleportation protocol using fermions \(\psi^{4}\) and \(\psi^{7}\) and applying the interaction at \(t=4\). The fermions with the worst size winding (previously colored red) show a clear mutual information peak asymmetry at the time of teleportation, showing that they teleport by size winding.
ratio \(|q(l)|/p(l)\) is not unity for all fermions (Fig. 2c), it is sufficiently large to achieve a teleportation signal with sign dependence on the coupling for fermions \(\psi^{4}\) and \(\psi^{7}\) (Fig. 2d).
## IV Commuting Hamiltonian structure
In [2], we used a learning procedure to identify a minimal example of a Hamiltonian with traversable wormhole dynamics. The learning procedure involves a free parameter that controls the sparsity of the SYK model, generating an ensemble that includes non-commuting Hamiltonians but preserves gravitationally relevant properties.
A priori, it was not known that a small commuting Hamiltonian (Eq. 3) would exhibit behavior such as size winding. After this finding, it may be expected that additional Hamiltonians of similar form have similar properties. Indeed, while the learning procedure may aid in the discovery of Hamiltonians with gravitational behavior, there always exists a non-bulk explanation of its behavior. For this reason, the presence of size winding in Hamiltonians explicitly constructed to be similar to the learned Hamiltonian is unsurprising; it has no relevance in determining if the learned Hamiltonian behaves gravitationally.
Here, we examine if the commuting structure of the Hamiltonian is intrinsically related to its size winding properties and dynamics. We take the learned Hamiltonian
\[\begin{split} H_{0}=&-0.36\,\psi^{1}\psi^{2}\psi^{4}\psi^{5}+0.19\,\psi^{1}\psi^{3}\psi^{4}\psi^{7}\\ &-0.71\,\psi^{1}\psi^{3}\psi^{5}\psi^{6}+0.22\,\psi^{2}\psi^{3}\psi^{4}\psi^{6}\\ &+0.49\,\psi^{2}\psi^{3}\psi^{5}\psi^{7},\end{split}\tag{3}\]
and add a perturbation with coefficient roughly equal to the median coefficient in \(H_{0}\),
\[H_{1}=0.3\psi^{1}\psi^{2}\psi^{3}\psi^{5}, \tag{4}\]
which forces the system to thermalize at long timescales but does not significantly modify the two-point function or mutual information dynamics at the timescale \(t=2.8\) of the wormhole teleportation (Fig. 3ab). In particular, the two fermions \(\psi^{1},\psi^{2}\) that traverse the wormhole do not exhibit revivals in the perturbed system \(H_{0}+H_{1}\), unlike the revivals seen in just \(H_{0}\). The perturbed system \(H_{0}+H_{1}\) shows size winding (Fig. 3c) sufficient to produce an asymmetric teleportation signal.
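For readers who wish to reproduce these checks, the sketch below (not the authors' code) builds \(H_{0}\) and \(H_{1}\) as explicit matrices from a Jordan-Wigner representation of the seven Majorana fermions on four qubits; we assume the normalization \(\psi^{2}=1/2\), so coefficients rescale trivially in other conventions, and the code verifies that \(H_{1}\) does not commute with \(H_{0}\).

```python
import numpy as np

# Sketch: H0 (Eq. 3) and H1 (Eq. 4) as explicit matrices, with
# psi_i = (Pauli string)/sqrt(2), i.e. {psi_i, psi_j} = delta_ij.
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

NQ = 4  # qubits (enough for up to 8 Majoranas; we use 7)

def kron_all(ops):
    m = np.eye(1, dtype=complex)
    for o in ops:
        m = np.kron(m, o)
    return m

def majorana(i):
    """psi^i via Jordan-Wigner: Z...Z (X or Y) I...I, divided by sqrt(2)."""
    k, parity = divmod(i - 1, 2)
    ops = [Z] * k + [X if parity == 0 else Y] + [I2] * (NQ - k - 1)
    return kron_all(ops) / np.sqrt(2.0)

psi = {i: majorana(i) for i in range(1, 8)}

def term(c, idx):
    m = c * np.eye(2 ** NQ, dtype=complex)
    for i in idx:
        m = m @ psi[i]
    return m

H0 = sum(term(c, idx) for c, idx in [
    (-0.36, (1, 2, 4, 5)), (0.19, (1, 3, 4, 7)), (-0.71, (1, 3, 5, 6)),
    (0.22, (2, 3, 4, 6)), (0.49, (2, 3, 5, 7))])
H1 = term(0.3, (1, 2, 3, 5))

assert np.allclose(H0, H0.conj().T)       # 4-Majorana products are Hermitian
print(np.linalg.norm(H0 @ H1 - H1 @ H0))  # nonzero: H1 breaks commutativity
```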
To emphasize the independence of 1) exhibiting gravitational physics at the time of teleportation and 2) thermalizing at late times, we show that applying the non-commuting perturbation in Trotterized time evolution still produces the mutual information dynamics characteristic of a gravitational signal. We prepare the thermofield double state using only the learned Hamiltonian, but evolve under a time-dependent Floquet Hamiltonian that alternates between the perturbation (\(H_{1}\)) and the
Figure 3: **Behavior of perturbed Hamiltonian.****(a)**, Two-point function (top) and mutual information dynamics (bottom) of individual fermions in the learned Hamiltonian (\(\psi^{1},\psi^{2}\) in black; other fermions in gray). **(b)**, Two-point function (top) and mutual information dynamics (bottom) of individual fermions in the perturbed Hamiltonian (same colors). Compared to (a), the two-point function of \(H_{0}+H_{1}\) does not experience as large revivals due to a non-commuting term and shows slightly damped oscillatory behavior. The mutual information dynamics preserve the gravitational signature of asymmetry in the interaction sign. **(c)**, The perturbed Hamiltonian’s size winding phases \(\arg q(l)\) (top) and ratios \(|q(l)|/p(l)\) (inset) shown at the time of best size winding (bottom). All fermions show size winding at different times, which is seen in (b) to be sufficient to generate a gravitational teleportation signature. Lighter fermions thermalize more slowly and traverse the wormhole at later times.
commuting system (\(H_{0}\)), with the alternation occurring in intervals of \(\Delta t=2.8\). A fermion explores a space of 12 operators under the Floquet Hamiltonian instead of 8 operators under \(H_{0}\).
We compare the dynamics of the Floquet Hamiltonian to the learned Hamiltonian \(H_{0}\) and an \(N=7\) SYK model (35 terms) in Fig. 4a. In Fig. 4b we show the asymmetry in mutual information in the Floquet case, demonstrating decoupling between the gravitational behavior at the time of teleportation from late-time dynamics. The perturbation would satisfy the late-time single-sided counterfactual argued by Kobrin _et al._[1]; however, it has no significant effect on the physics at the time of the teleportation.
This example demonstrates that adding a large non-commuting term does not introduce any meaningful change to the gravitationally interesting regime (\(t\approx 2.8\), \(\mu=-12\)). It suggests that the commuting structure of the learned Hamiltonian is not intrinsic to the relevant behavior we observe at the time of teleportation.
Kobrin _et al._[1] comment on two methods of generating Hamiltonians similar to Eq. 3. First, they consider randomizing the coefficients by sampling from a normal distribution; second, they randomize both the coefficients and the choice of terms, such that they obtain a 7-fermion 5-term commuting Hamiltonian. We point out that these are strictly equivalent procedures: there exists a unique 7-fermion 5-term commuting Hamiltonian, up to relabeling of fermions, given by
\[\begin{split} H=&\,\alpha_{1}\psi^{1}\psi^{2}\psi^{3}\psi^{4}+\alpha_{2}\psi^{1}\psi^{2}\psi^{5}\psi^{6}\\ &+\alpha_{3}\psi^{3}\psi^{4}\psi^{5}\psi^{6}+\alpha_{4}\psi^{1}\psi^{3}\psi^{5}\psi^{7}\\ &+\alpha_{5}\psi^{2}\psi^{4}\psi^{5}\psi^{7}.\end{split}\]
Hence, "randomizing coefficients and terms" is precisely the same as simply randomizing coefficients.
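This commutativity bookkeeping is easy to verify mechanically: two degree-4 Majorana monomials \(\gamma_{A}\) and \(\gamma_{B}\) obey \(\gamma_{A}\gamma_{B}=(-1)^{|A||B|-|A\cap B|}\gamma_{B}\gamma_{A}\), so for \(|A|=|B|=4\) they commute iff \(|A\cap B|\) is even. The sketch below (our illustration) applies this parity test to the terms of the Hamiltonian above and of the learned \(H_{0}\), and shows that the \(H_{1}\) term of Eq. (4) breaks the commuting structure.

```python
from itertools import combinations

# Two degree-4 Majorana monomials commute iff their index sets share an
# even number of Majoranas (sign (-1)^(|A||B| - |A&B|) with |A| = |B| = 4).
def commute(a, b):
    return len(set(a) & set(b)) % 2 == 0

# Terms of the unique 7-fermion 5-term commuting Hamiltonian above:
H_terms = [(1, 2, 3, 4), (1, 2, 5, 6), (3, 4, 5, 6),
           (1, 3, 5, 7), (2, 4, 5, 7)]
assert all(commute(a, b) for a, b in combinations(H_terms, 2))

# Terms of the learned H0 (Eq. 3) are likewise mutually commuting:
H0_terms = [(1, 2, 4, 5), (1, 3, 4, 7), (1, 3, 5, 6),
            (2, 3, 4, 6), (2, 3, 5, 7)]
assert all(commute(a, b) for a, b in combinations(H0_terms, 2))

h1 = (1, 2, 3, 5)   # the perturbation of Eq. (4)
print([commute(h1, t) for t in H0_terms])   # mixed -> H1 breaks commutativity
```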
The authors of [1] claim that the quality of size winding "over all operators resembles that of generic random small-size fully commuting models." We reproduce the method of generating random Hamiltonians reported by [1] and observe that at \(t=2.8\), from 1000 random instances, 29% of random models have as good size winding on the best two fermions and only 3% have as good size winding on all fermions, as measured by linearity of \(\arg q(l)\). Measured by the variable \(\chi\) proposed by [1], these numbers are 25% and 2% respectively. This contradicts the claim of [1], which analyzed size winding at \(t=2.8\).
Setting aside these technical issues with the analysis of [1], we reiterate that the presence of similar Hamiltonians with similar properties does not have any bearing on a gravitational interpretation of the learned Hamiltonian \(H_{0}\).
This work is supported by the Department of Energy Office of High Energy Physics QuantISED program grant SC0019219 on Quantum Communication Channels for Fundamental Physics. This manuscript has been authored by Fermi Research Alliance, LLC under Contract No. DE-AC02-07CH11359 with the U.S. Department of Energy, Office of Science, Office of High Energy Physics.
|
2309.00713 | A preliminary study of photometric redshifts based on the Wide Field
Survey Telescope | The Wide Field Survey Telescope (WFST) is a dedicated time-domain multi-band
($u$, $g$, $r$, $i$, and $z$) photometric survey facility under construction.
In this paper, we present a preliminary study that assesses the quality of
photometric redshifts based on WFST by utilizing mock observations derived with
the galaxy catalog in the COSMOS/UltraVISTA field. We apply the template
fitting technique to estimate photometric redshifts by using the ZEBRA
photometric-redshift code and adopting a modified set of adaptive templates. We
evaluate the bias (median relative offset between the output photometric
redshifts and input redshifts), normalized median absolute deviation
($\sigma_{\rm NMAD}$) and outlier fraction ($f_{\rm outlier}$) of photometric
redshifts in two typical WFST observational cases, the single 30-second
exposure observations (hereafter shallow mode) and co-added 50-minute exposure
observations (hereafter deep mode). We find bias$\la0.006$, $\sigma_{\rm
NMAD}\la0.03$, and $f_{\rm outlier}\la5\%$ in the shallow mode and bias$\approx
0.005$, $\sigma_{\rm NMAD}\approx 0.06$, and $f_{\rm outlier}\approx
17\%$--$27\%$ in the deep mode, respectively, under various lunar phases.
Combining the WFST mock observational data with that from the upcoming CSST and
Euclid surveys, we demonstrate that the $z_{\rm phot}$ results can be
significantly improved, with $f_{\rm outlier}\approx 1\%$ and $\sigma_{\rm
NMAD}\approx 0.02$. | Yu Liu, Xiao-zhi Lin, Yong-quan Xue, Huynh Anh N. Le | 2023-09-01T19:44:12Z | http://arxiv.org/abs/2309.00713v1 | # A preliminary study of photometric redshifts based on the Wide Field Survey Telescope
###### Abstract
The Wide Field Survey Telescope (WFST) is a dedicated time-domain multi-band (\(u\), \(g\), \(r\), \(i\), and \(z\)) photometric survey facility under construction. In this paper, we present a preliminary study that assesses the quality of photometric redshifts based on WFST by utilizing mock observations derived with the galaxy catalog in the COSMOS/UltraVISTA field. We apply the template fitting technique to estimate photometric redshifts by using the ZEBRA photometric-redshift code and adopting a modified set of adaptive templates. We evaluate the bias (median relative offset between the output photometric redshifts and input redshifts), normalized median absolute deviation (\(\sigma_{\rm NMAD}\)) and outlier fraction (\(f_{\rm outlier}\)) of photometric redshifts in two typical WFST observational cases, the single 30-second exposure observations (hereafter shallow mode) and co-added 50-minute exposure observations (hereafter deep mode). We find bias\(\la 0.006\), \(\sigma_{\rm NMAD}\la 0.03\), and \(f_{\rm outlier}\la 5\%\) in the shallow mode and bias\(\approx 0.005\), \(\sigma_{\rm NMAD}\approx 0.06\), and \(f_{\rm outlier}\approx 17\%\)-\(27\%\) in the deep mode, respectively, under various lunar phases. Combining the WFST mock observational data with that from the upcoming CSST and Euclid surveys, we demonstrate that the \(z_{\rm phot}\) results can be significantly improved, with \(f_{\rm outlier}\approx 1\%\) and \(\sigma_{\rm NMAD}\approx 0.02\).
galaxies: distances and redshifts -- galaxies: high-redshift -- galaxies: photometry
## 1 Introduction
The development of modern astronomy has given rise to an increasing demand for powerful multi-band photometric sky surveys. Such surveys, e.g., the Sloan Digital Sky Survey (SDSS; e.g., Brescia et al.
DES-Collaboration et al. 2016; Ivezic et al. 2019), and the Hyper Suprime-Cam Subaru Strategic Program Survey (HSC-SSP; e.g., Aihara et al. 2018; Hikage et al. 2019), with well-designed equipment, reasonable observational strategies, and fruitful scientific results in stellar physics, galaxy physics, and cosmology, have demonstrated their strong impacts on modern astronomy.
The Wide Field Survey Telescope (WFST) is a dedicated time-domain multi-band (\(u\), \(g\), \(r\), \(i\), and \(z\)) photometric survey facility under construction jointly by the University of Science and Technology of China and Purple Mountain Observatory, which is expected to start commissioning observations around August 2023. WFST has a 2.5-meter primary mirror, an active optical system, and a 0.73-Gigapixel mosaic CCD camera on the main focus plane; moreover, WFST is located near the summit of the Saishiteng Mountain in the Lenghu area that is a world-class observational site (Deng et al. 2021), thereby achieving high-quality imaging over a field of view of 6.5 deg\({}^{2}\). The main science goals of WFST surveys are time-domain sciences including supernovae, tidal disruption events, multi-messenger events, and active galactic nuclei (AGNs), asteroids and the solar system, the Milky Way and its satellite dwarf galaxies, and galaxy formation and cosmology (WFST-Collaboration et al. 2023).
Robust determination of cosmological redshifts is one of the most crucial factors in fulfilling the above WFST science goals. However, high-precision galaxy redshift measurements require spectroscopic observations of each source (i.e., obtaining spectroscopic redshifts, \(z_{\rm spec}\)). This task is not only expensive but also time consuming. Alternatively, redshifts can be measured from photometric surveys (i.e., obtaining photometric redshifts, \(z_{\rm phot}\)), which is much more efficient than spectroscopic observation. This method, although not as precise as the \(z_{\rm spec}\) measurement, has been used extensively to determine \(z_{\rm phot}\) for huge numbers of survey targets at once (e.g., Benjamin et al. 2010; Brescia et al. 2014; Cavuoti et al. 2017; Sanchez & Bernstein 2019). The application of \(z_{\rm phot}\) has enabled a wide range of exciting extragalactic sciences as mentioned above.
To date, a series of methods have been developed to estimate \(z_{\rm phot}\). In general, they can be divided into two main categories. One is based on template fitting and works as follows: the observed photometry is compared to a given set of pre-assumed galaxy templates to determine the best-fit redshift corresponding to the maximum likelihood (e.g., Benitez 2000; Feldmann et al. 2006; Brammer et al. 2008; Luo et al. 2010; Rafferty et al. 2011; Yang et al. 2014; Cao et al. 2018). The other is the so-called training-set method, which constructs a neural network (e.g., Collister & Lahav 2004; Blake et al. 2007; Sanchez et al. 2014; Pasquet et al. 2019) and performs machine learning to obtain \(z_{\rm phot}\), focusing on finding empirical relations between the redshift and galaxy properties (e.g., magnitudes and colors). This method usually relies on a large sample of secure \(z_{\rm spec}\), which are mostly available in the lower-redshift universe. However, since the magnitude limits of all WFST bands are deeper than those of most current \(z_{\rm spec}\) surveys, it is difficult to find a sample of well-measured \(z_{\rm spec}\) that is representative of the full survey sample. Therefore, in this paper, we choose to measure \(z_{\rm phot}\) of mock WFST observations based on the former technique, i.e., template fitting.
The main goal of this paper is to preliminarily assess the \(z_{\rm phot}\) quality of the WFST photometry system. We utilize the COSMOS/UltraVISTA multiwavelength galaxy photometry catalog (Muzzin et al. 2013), whose depth is suitable for selecting a subsample of galaxies whose magnitudes meet the WFST detection limits. Utilizing this subsample, we generate the mock flux in each WFST filter passband based on WFST instrumental parameters with good data quality, and then estimate the corresponding observational error. We choose to use the ZEBRA code (Feldmann et al., 2006) for \(z_{\rm phot}\) estimation. The main advantage of this code is that it can generate a new set of templates adaptive to the observations, minimizing the mismatch between observed spectral energy distributions (SEDs) and the galaxy templates that come either from theoretical synthesis models or from observed SEDs of certain galaxy types in the local universe, thereby improving \(z_{\rm phot}\) quality.
This paper is organized as follows. In Section 2, we introduce the WFST photometry system, COSMOS/UltraVISTA galaxy catalog, and generation of mock WFST data; in Section 3, we introduce the process of \(z_{\rm phot}\) computation; in Section 4, we show \(z_{\rm phot}\) results and make comparisons with other works; and in Section 5, we summarize our results. All the magnitudes quoted are AB magnitudes.
## 2 Data
### Overview of the WFST photometry system
WFST has six filters, i.e., \(u\), \(g\), \(r\), \(i\), \(z\) (see Figure 1) and \(w\), with the white-light \(w\) band specifically designed for detecting asteroids in the solar system and thus being excluded from \(z_{\rm phot}\) computation in this paper. There are two planned key programs of the 6-year WFST survey: the wide-field survey (WFS) program and the deep high-cadence \(u\)-band survey (DHS) program. The WFS program aims to survey a total of \(\approx 8000\ {\rm deg}^{2}\) sky area in the \(u\), \(g\), \(r\), and \(i\) bands in the northern hemisphere, with about 90 visits in each band over 6 years given a single exposure of 30 seconds for each visit; while the DHS program plans to routinely monitor a total of \(\approx 720\ {\rm deg}^{2}\) sky area in the highly sensitive \(u\) band surrounding the equator every year, with a much higher observing cadence (down to hours) and being supplemented by a multi-band ancillary survey. The \(z\)-band imaging is excluded in the WFS program due to its relatively low efficiency and limited contribution to time-domain sciences; moreover, high-quality z-band imaging data will be achieved by other northern-hemisphere surveys such as Wide Imaging with Subaru HSC of the Euclid Sky (WISHES). However, WFST will allocate some additional observational time (about 1,300 hours over 6 years) for specific purposes or particular interests, e.g., capturing time-critical targets and mapping the Galactic plane, which require intensive scanning of certain sky areas using the \(z\)-band imaging.
In this paper, we compute \(z_{\rm phot}\) in two typical WFST observational cases, i.e., the single 30-second exposure observations (hereafter shallow mode) and co-added 50-minute exposure observations (hereafter deep mode). The deep mode can be realized by integrating all the observational time in each band mainly with the WFS program, thus achieving deeper detection limits than any existing single-telescope surveys with comparable survey areas in the northern hemisphere (Lei et al., 2023; WFST-Collaboration et al., 2023).
The average night sky background brightness at the WFST site (i.e., the Saishiteng Mountain, Lenghu Town, Qinghai Province) is approximately \(V=22.0\ {\rm mag\ arcsec}^{-2}\) when the moon is below the horizon; under new moon conditions, the best sky level can reach \(22.3\ {\rm mag\ arcsec}^{-2}\), which is measured in the extreme case when the bright part of the Galactic Disk is far away from the local zenith (Deng et al., 2013).
Under this circumstance and with no moon, the \(5\sigma\) limiting magnitudes can reach depths of \(ugriz=[22.31,23.42,22.95,22.43,21.50]\) in the shallow mode and \(ugriz=[24.86,25.95,25.48,24.96,24.03]\) in the deep mode, respectively (WFST-Collaboration et al. 2023). The modeling results of the \(5\sigma\) limiting magnitudes are shown in Figure 1 (see also Table 1 for the band depths).
### The COSMOS/UltraVISTA galaxy catalog
In this paper, we adopt the multiwavelength galaxy photometry catalog in the COSMOS/UltraVISTA field (Muzzin et al., 2013) to produce mock WFST data, given that it has deep optical coverage, broadband photometry, and high-quality \(z_{\rm phot}\) and corresponding best-fit galaxy SEDs.
This catalog covers a sky area of 1.62 \(\rm deg^{2}\) with point-spread function (PSF) matched photometry in 30 bands, with the wavelength range extending from \(0.15~{}\mu\rm m\) to \(24~{}\mu\rm m\), including 2 ultraviolet bands (FUV and NUV) from the \(GALEX\) satellite (Martin et al., 2005), 7 broadband (\(u^{*}\), \(g^{+}\), \(r^{+}\), \(i^{+}\), \(z^{+}\), \(B_{j}\), \(V_{j}\)) and 12 medium-band (IA427-IA827) optical data from the Subaru and Canada-France-Hawaii Telescope (Taniguchi et al., 2007; Capak et al., 2007), 4 near-infrared imaging bands (\(Y,J,H,K_{s}\)) from the UltraVISTA survey (McCracken et al., 2012), and the \(3.6~{}\mu\rm m\), \(4.5~{}\mu\rm m\), \(5.8~{}\mu\rm m\), \(8.0~{}\mu\rm m\), and \(24~{}\mu\rm m\) channels from \(Spitzer\)'s IRAC and MIPS cameras (Sanders et al., 2007). The \(5\sigma\) depths of the COSMOS/UltraVISTA survey in all bands are tabulated in Table 2, with typical depths in optical bands being deeper than those of WFST (see Table 1).
Photometric redshifts of galaxies in the COSMOS/UltraVISTA catalog are computed based on the template-fitting technique with the EAZY photometric-redshift code (Brammer et al., 2008). The default 7 EAZY templates comprise six templates derived from the PEGASE models (Fioc & Rocca-Volmerange, 1999) and a red galaxy template from the models of Maraston (2005). To improve the quality of the fitting, Muzzin et al. (2013) added two additional galaxy templates: one is a one-gigayear-old single-burst galaxy template generated from the Bruzual & Charlot (2003) model to improve the template fitting for galaxies at \(z>1\) with post starburst-like features; and the other is a slightly dust-reddened young galaxy template to improve the fitting of \(UV\)-bright Lyman break galaxies (LBGs) with heavy dust extinction at \(1.5<z<3.5\). EAZY fits the observed multiwavelength photometry of galaxies utilizing linear combinations of the above 9 initial templates (as shown in Figure 2) based on the \(\chi^{2}\) minimization algorithm. Muzzin et al. (2013) provided in their COSMOS/UltraVISTA catalog the best template combination coefficients for each of the galaxies, so that we can generate their best-fit SEDs. We show some of the best-fit galaxy SED examples at their respective redshifts from the COSMOS/UltraVISTA catalog in Figure 3. Photometric redshifts derived by Muzzin et al. (2013) are of high quality, being consistent with \(z_{\rm spec}\) from the zCOSMOS survey: up to \(z\sim 1.5\), their \(z_{\rm phot}\) are accurate to \(\Delta z/(1+z)=0.013\), with an outlier fraction of only 1.6%; up to \(z\sim 3\), their \(z_{\rm phot}\) show good agreement with \(z_{\rm phot}\) from the NEWFIRM Medium Band Survey (NMBS).
\begin{table}
\begin{tabular}{l c c c c c c c c} \hline \hline Filter & FUV & NUV & \(u^{*}\) & \(B_{j}\) & \(g_{+}\) & \(V_{j}\) & \(r^{+}\) & \(i^{+}\) \\ \(5\sigma\) Depth & 25.2 & 25.1 & \(26.4\) & \(27.3\) & \(27.0\) & \(26.6\) & \(26.8\) & \(26.2\) \\ \hline Filter & \(z^{+}\) & IA427 & IA464 & IA484 & IA505 & IA527 & IA574 & IA624 \\ \(5\sigma\) Depth & \(25.2\) & \(25.8\) & \(25.6\) & \(25.9\) & \(25.6\) & \(25.7\) & \(25.4\) & \(25.7\) \\ \hline Filter & IA679 & IA709 & IA738 & IA767 & IA827 & \(Y\) & \(J\) & \(H\) \\ \(5\sigma\) Depth & \(25.3\) & \(25.4\) & \(25.4\) & \(25.1\) & \(25.1\) & \(24.6\) & \(24.4\) & \(23.9\) \\ \hline Filter & \(K_{s}\) & \(3.6~{}\mu\rm m\) & \(4.5~{}\mu\rm m\) & \(5.8~{}\mu\rm m\) & \(8.0~{}\mu\rm m\) & \(24~{}\mu\rm m\) & \\ \(5\sigma\) Depth & \(23.7\) & 23.9 & 23.3 & 21.3 & 21.0 & \(45~{}\mu\rm Jy\) & & \\ \hline \hline \end{tabular}
\end{table}
Table 2: Depths of the 30 bands in the COSMOS/UltraVISTA photometry catalog
Figure 3: Some best-fit galaxy SED examples in the observed frame in the COSMOS/UltraVISTA field (Muzzin et al. 2013).
Figure 2: The set of 9 initial galaxy templates adopted by both Muzzin et al. (2013) and this work for \(z_{\rm phot}\) derivation (the templates have been normalized appropriately for display purposes; see the main text for details).
### Generation of mock WFST data
First, the mock flux in each band for each galaxy in the given catalog can be calculated by convolving the galaxy redshifted SED with the filter transmission curve, using
\[F_{\lambda}^{\rm mock}=\frac{\int_{-\infty}^{+\infty}S_{\lambda}\lambda R(\lambda )d\lambda}{\int_{-\infty}^{+\infty}\lambda R(\lambda)d\lambda}, \tag{1}\]
where \(S_{\lambda}\) is the best-fit observed SED of the COSMOS/UltraVISTA galaxy, and \(R(\lambda)\) is the transmission curve of one of the 5 WFST filters. The mock flux \(F_{\lambda}^{\rm mock}\) is then calibrated to the mock observational flux according to the \(i\) band apparent magnitude (Subaru \(i^{+}\) flux, \(F_{i^{+}}^{\rm obs}\)) given in the COSMOS/UltraVISTA galaxy catalog. This conversion is done by using \(F_{\lambda}^{\rm obs}=(F_{\lambda}^{\rm mock}/F_{i^{+}}^{\rm mock})F_{i^{+}}^{ \rm obs}\), where \(F_{\lambda}^{\rm obs}\) is the mock observational flux, and \(F_{\lambda}^{\rm mock}\) is the mock flux of a galaxy SED in each of the 5 WFST bands.
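As an illustration, a minimal Python sketch of this step is given below, implementing Equation 1 with trapezoidal integration and the \(i^{+}\)-band calibration. The function names, array conventions, and use of numpy are our own assumptions rather than the pipeline actually used in this work.

```python
import numpy as np

def mock_band_flux(wave, sed_flux, filt_wave, filt_trans):
    """Equation 1: convolve the observed-frame SED S_lambda with a
    filter transmission curve R(lambda)."""
    s = np.interp(filt_wave, wave, sed_flux)            # SED on filter grid
    num = np.trapz(s * filt_wave * filt_trans, filt_wave)
    den = np.trapz(filt_wave * filt_trans, filt_wave)
    return num / den

def calibrate_flux(f_mock, f_mock_iplus, f_obs_iplus):
    """Anchor the mock WFST flux to the catalog Subaru i+ flux."""
    return f_mock / f_mock_iplus * f_obs_iplus
```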
Dust extinction is taken into account when generating mock flux data. The SED flux density after dust reddening by the interstellar medium (Calzetti et al., 1994; Galametz et al., 2017) can be expressed as
\[S_{\rm extinct}(\lambda_{\rm rest})=S_{\rm intrinsic}(\lambda_{\rm rest})10^{-0.4 E(B-V)k(\lambda_{\rm rest})}, \tag{2}\]
where \(E(B-V)=A_{V}/R_{V}\) is the color excess and \(k(\lambda)\) is the dust extinction curve. We adopt the Calzetti et al. (2000) extinction curve, with \(R_{V}\) for this attenuation law set as 4.05. For each galaxy, the value of attenuation \(A_{V}\) is given by the COSMOS/UltraVISTA catalog, which is derived through the SED fitting technique. We directly use it to generate the mock extinction-corrected fluxes.
We also consider intergalactic medium (IGM) absorption for high-redshift galaxies. At wavelengths shorter than the Ly\(\alpha\) line, the emission can be absorbed by neutral hydrogen clouds in the IGM along our line of sight to the high-redshift galaxy. We account for this extinction by making use of the Madau (1995) IGM attenuation law. This is done by applying the average flux decrement \(<D_{A}>\) between Ly\(\alpha\) and Ly\(\beta\), and \(<D_{B}>\) between Ly\(\beta\) and the Lyman limit, such that the IGM absorption corrected flux can be written as
\[S_{\rm absorption}(\lambda_{\rm rest})=(1-<D_{i}>)S_{\rm initial}(\lambda_{ \rm rest})\ \ (i=A,B), \tag{3}\]
where \(S_{\rm initial}\) is the initial flux density in the rest frame, adopted as the interstellar dust extinction-corrected galaxy flux \(S_{\rm extinct}(\lambda_{\rm rest})\) obtained from Equation 2. After these correction procedures, the galaxy SED flux density \(S_{\rm absorption}\), with dust extinction and IGM absorption corrected, is substituted into Equation 1 to generate mock flux data for all 5 WFST bands.
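A sketch of these two rest-frame corrections is given below (Equations 2 and 3). The Calzetti \(k(\lambda)\) curve and the Madau mean decrements \(\langle D_{A}\rangle\) and \(\langle D_{B}\rangle\) are assumed to be supplied by the caller; the rest wavelengths and all names are illustrative assumptions.

```python
import numpy as np

LYA, LYB, LYLIMIT = 1215.67, 1025.72, 911.75   # rest wavelengths [angstrom]

def apply_dust_extinction(wave_rest, flux, a_v, k_calzetti, r_v=4.05):
    """Equation 2: Calzetti reddening with E(B-V) = A_V / R_V;
    k_calzetti(wave) must return the extinction curve k(lambda)."""
    ebv = a_v / r_v
    return flux * 10.0 ** (-0.4 * ebv * k_calzetti(wave_rest))

def apply_igm_absorption(wave_rest, flux, d_a, d_b):
    """Equation 3: Madau (1995) mean decrements <D_A>, <D_B> applied
    between Ly-alpha/Ly-beta and Ly-beta/the Lyman limit, respectively."""
    out = np.array(flux, dtype=float)
    out[(wave_rest < LYA) & (wave_rest >= LYB)] *= (1.0 - d_a)
    out[(wave_rest < LYB) & (wave_rest >= LYLIMIT)] *= (1.0 - d_b)
    return out
```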
Next, we estimate flux errors with respect to mock WFST fluxes. For a ground-based telescope, the signal to noise ratio (SNR) can be evaluated via the following equation (Lei et al., 2023),
\[{\rm SNR}=\frac{S\cdot A\cdot\tau}{\sqrt{S\cdot A\cdot\tau+2\cdot n_{\rm pix }\cdot[({\rm Sky}\cdot A\cdot\alpha_{\rm pix}+D)\cdot\tau+R^{2}]}}, \tag{4}\]
where \(S\) is the source signal with a constant spectral flux, \(\tau\) is the exposure time, \(A\) is the effective area of the WFST primary mirror (\(\sim 4.12\times 10^{4}\ {\rm cm^{2}}\)), \(\alpha_{\rm pix}=0.111\ {\rm arcsec^{2}}\) is the area of one pixel, \(D\) is the dark current of the CCD (\(D=0.005\ {\rm e^{-}/pixel/s}\), at \(-100^{\circ}\)C), \(R^{2}\) is the readout noise of the CCD (\(R=8\ {\rm e^{-}\ rms}\)), and \(n_{\rm pix}\) is the total pixel number in the PSF region. The factor of 2 applied here is because we assume that the calculation is performed on sky subtracted images. We adopt an optimal PSF aperture following the approach of the Integration Time
Calculator of Gemini. Sky in Equation 4 is the sky background signal that actually lands on the detector in units of \(\rm e^{-}~{}s^{-1}~{}pixel^{-1}\), which is given by
\[\rm Sky=\int_{0}^{\infty}f_{\lambda}T_{\rm opt}T_{\rm band}QE_{CCD}d\lambda, \tag{5}\]
where \(f_{\lambda}\) is the surface brightness of the sky background, \(T_{\rm opt}\) is the throughput of the optics (including the primary mirror, analog to digital converters and the 5 corrector lenses), \(T_{\rm band}\) is the transmission curve of the filter, and \(\rm QE_{CCD}\) is the quantum efficiency of the CCD.
The photometric error can be evaluated through the magnitude error given by the approximate relation \(\sigma_{\rm ph}\simeq 2.5\log(1+1/{\rm SNR})\)(Pozzetti et al., 1998; Bolzonella et al., 2000). We also add a systematic error \(\sigma_{\rm sys}=0.02~{}{\rm mag}\)(Cao et al., 2018), and the total magnitude error is then given by \(\sigma_{m}=\sqrt{\sigma_{\rm ph}^{2}+\sigma_{\rm sys}^{2}}\). Thus we can obtain the flux error \(\sigma_{F}\) of each band from \(\sigma_{m}\) via error propagation. Finally, a random error drawn from the Gaussian probability distribution function (with \(\sigma=\sigma_{F}\)) is added to the mock flux in each band as the final mock photometry.
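The error model can be sketched as follows, implementing Equation 4 verbatim (units follow the conventions stated above) and the \({\rm SNR}\to\sigma_{\rm ph}\to\sigma_{m}\to\sigma_{F}\) chain. \(n_{\rm pix}\) is left as a required argument since the adopted PSF aperture is not restated here, and all names are our own assumptions.

```python
import numpy as np

def wfst_snr(s, tau, sky, n_pix, a=4.12e4, alpha_pix=0.111,
             dark=0.005, read=8.0):
    """Equation 4 for a sky-subtracted point source; `s` is the source
    count rate per unit collecting area and `sky` follows Equation 5."""
    signal = s * a * tau
    var = signal + 2.0 * n_pix * ((sky * a * alpha_pix + dark) * tau + read**2)
    return signal / np.sqrt(var)

def perturb_flux(flux, snr, sigma_sys=0.02, rng=None):
    """Magnitude error -> flux error -> Gaussian-perturbed mock photometry."""
    rng = rng or np.random.default_rng()
    sigma_ph = 2.5 * np.log10(1.0 + 1.0 / snr)
    sigma_m = np.hypot(sigma_ph, sigma_sys)
    sigma_f = flux * sigma_m * np.log(10.0) / 2.5    # error propagation
    return flux + rng.normal(0.0, sigma_f), sigma_f
```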
After computing these mock fluxes and applying the above corrections, we obtain the mock observational targets in the WFST shallow mode and deep mode. In this paper, we adopt 3-\(\sigma\) detections to include sources into various samples as in Muzzin et al. (2013), i.e., galaxies with fluxes that meet the \(3\sigma\) depth thresholds of the 5 WFST bands (cf. Table 1) are selected as the mock observational samples for subsequent \(z_{\rm phot}\) calculation (see Section 4).
## 3 Computation of Photometric Redshifts
In this paper, we compute \(z_{\rm phot}\) of galaxies using the mock WFST data and the ZEBRA photometric-redshift code (Feldmann et al., 2006) with default parameters unless stated otherwise. The main advantage of ZEBRA is that it can generate a new set of templates adaptive to observed galaxy SEDs to minimize the mismatch between observed SEDs and available templates. This is done by creating a training set of galaxies to optimize the shape of spectral templates that can better match predicted galaxy colors with observed ones. We adopt the same set of 9 initial galaxy templates (see Figure 2) as in Muzzin et al. (2013) for \(z_{\rm phot}\) calculation using ZEBRA. Since we have removed all the point sources that are likely bright stars or active galactic nuclei (AGNs) in the COSMOS/UltraVISTA catalog, we do not include any AGN templates during our template fitting.
First, we run ZEBRA in the photometry-check mode to identify and correct systematic errors in the photometry based on the maximum-likelihood algorithm. ZEBRA derives a simple photometric offset in each band that minimizes the residuals between the mock observed fluxes and those of the best-fit templates, with the redshifts set as the input ones (i.e., \(z_{\rm spec}\) or, if \(z_{\rm spec}\) are not available, high-quality \(z_{\rm phot}\) from Muzzin et al. (2013)). These corrections are then applied to the mock WFST photometry data, and ZEBRA iterates this procedure five times to ensure that the median offset in each band converges.
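The following is a schematic sketch of such an iterative zero-point loop, in the spirit of (but not identical to) ZEBRA's photometry-check mode; the callback abstracts away the template fitting at fixed input redshifts, and all names are assumptions.

```python
import numpy as np

def photometry_check(obs_flux, fit_model_flux, n_iter=5):
    """Schematic per-band zero-point correction (not ZEBRA's actual code).

    obs_flux       : (n_gal, n_band) array of mock fluxes.
    fit_model_flux : callback returning best-fit template fluxes evaluated
                     at the fixed input redshifts for the current photometry.
    """
    corrected = np.array(obs_flux, dtype=float)
    for _ in range(n_iter):
        model = fit_model_flux(corrected)
        offset = np.median(model / corrected, axis=0)   # one offset per band
        corrected *= offset                             # apply, then refit
    return corrected
```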
Second, we run ZEBRA in the non-template-improvement mode based on this photometric systematic offset-corrected mock catalog, using the 9 initial galaxy templates shown in Figure 2. ZEBRA iteratively performs 5 logarithmic interpolations in the magnitude space between any adjacent pair of the 9 templates, yielding a total of \(9+8\times 5=49\) templates for the subsequent fitting.
Third, we run ZEBRA in the template-improvement mode, where ZEBRA transforms the discrete template space into a linearly continuous space, using a Karhunen-Lo\(\grave{e}\)ve expansion to iteratively correct the eigenbases of a lower dimensional subspace through a \(\chi^{2}\) minimization scheme. As a result, adaptive spectral templates are generated to better match the galaxy SEDs of the training set than the set of 49 templates.
For each galaxy sample considered, we randomly divide all its galaxies into two equal halves: one half serves as the training set to generate a new set of adaptive templates, and \(z_{\rm phot}\) computation is performed on the other half as the validation set based on these new adaptive templates plus the above 49 templates as a blind test of \(z_{\rm phot}\) quality. We compare multiwavelength photometry and input redshifts of both the training and validation sets in Figures 4 and 5, respectively. We find that they have almost identical photometric and redshift properties, such that the templates generated based on the randomly selected training set of galaxies can be adaptive to the full galaxy sample, and that \(z_{\rm phot}\) computation on the validation set serves as a reliable blind test.
Figure 4: Magnitude distributions of the full, training, and validation mock galaxy samples (indicated by different colors) in the (a) shallow mode and (b) deep mode, respectively, with the lunar phase being fixed to 90 degree (i.e., half moon). The color plus symbols show the medians of the distributions and the horizontal error bars indicate the 1-\(\sigma\) ranges.
Figure 5: Distributions of input redshifts from Muzzin et al. (2013) of the full, training, and validation mock galaxy samples (indicated by different colors) in the (a) shallow mode and (b) deep mode, respectively, with the lunar phase being fixed to 90 degree (i.e., half moon). The color plus symbols show the medians of the distributions and the horizontal error bars indicate the 1-\(\sigma\) ranges.
In the template-improvement mode, ZEBRA iterates twice, i.e., over the redshift of 0-3 as one single bin and in smaller redshift bins of 0.5, to train the 49 templates based on a chosen training set. Narrowing down the redshift bin (e.g., to \(\Delta z=0.2\)) only increases the total number of adaptive templates generated, but has little effect on the final \(z_{\rm phot}\) results. Therefore, we use a total of \(49\times 6+49=343\) final templates and run ZEBRA to compute \(z_{\rm phot}\) for each selected galaxy sample.
## 4 Results and Discussion
In this section, we show \(z_{\rm phot}\) results with mock WFST data in the shallow and deep modes given various lunar phases (see Sections 4.1 and 4.2, respectively), compare our WFST \(z_{\rm phot}\) results with that from some recent works (see Section 4.3), and assess the improvement of \(z_{\rm phot}\) quality with the addition of other data (see Section 4.4).
### \(z_{\rm phot}\) results in the shallow mode
The \(z_{\rm phot}\) results with mock WFST data in the shallow mode are shown in Figure 6, whose left and right panels are for the non-template-improvement and template-improvement modes under various lunar phases, respectively. To evaluate \(z_{\rm phot}\) quality, we adopt some commonly-used quantities: (1) normalized median absolute deviation (e.g., Brammer et al. 2008), i.e., \(\sigma_{\rm NMAD}=1.48\times{\rm median}\left(\left|\frac{\Delta z-{\rm median}(\Delta z)}{1+z_{\rm input}}\right|\right)\), where \(\Delta z=z_{\rm output}-z_{\rm input}\), with \(z_{\rm output}\) and \(z_{\rm input}\) being the output \(z_{\rm phot}\) and input redshifts from the COSMOS/UltraVISTA catalog (Muzzin et al. 2013), respectively; (2) outlier fraction \(f_{\rm outlier}\), with outliers being defined as sources with \(\left|\Delta z\right|/(1+z_{\rm input})>0.15\); and (3) bias, i.e., median of \(\Delta z/(1+z_{\rm input})\) with outliers being removed.
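For reference, these three quality metrics can be computed as in the short sketch below (function and variable names are our own).

```python
import numpy as np

def zphot_quality(z_output, z_input):
    """sigma_NMAD, f_outlier, and bias as defined above."""
    dz = z_output - z_input
    frac = dz / (1.0 + z_input)
    sigma_nmad = 1.48 * np.median(np.abs(dz - np.median(dz)) / (1.0 + z_input))
    outlier = np.abs(frac) > 0.15
    bias = np.median(frac[~outlier])          # median with outliers removed
    return sigma_nmad, outlier.mean(), bias
```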
According to Figure 6, under various lunar phases, we have bias=\(-0.001\)-0.006, \(\sigma_{\rm NMAD}=0.015\)-0.031, and \(f_{\rm outlier}=3.23\%\)-5.19% in the non-template-improvement mode, and have bias=\(0.000\)-0.006, \(\sigma_{\rm NMAD}=0.011\)-0.029, and \(f_{\rm outlier}=3.72\%\)-5.27% in the template-improvement mode, respectively. The template-improvement mode delivers smaller biases and \(\sigma_{\rm NMAD}\) than the non-template-improvement mode, which is expected; however, the former mode provides comparable or even slightly larger \(f_{\rm outlier}\) than the latter mode, due to misidentification of Lyman break as Balmer break or vice versa that is caused by the relatively limited photometry (i.e., only \(ugriz\) bands) although the significantly enlarged template set can cover the full parameter space of the observed galaxy SEDs.
As shown in Figure 7, \(z_{\rm phot}\) quality shows some variation with lunar phase: \(z_{\rm phot}\) quality improves as the lunar phase increases, with the best \(z_{\rm phot}\) result achieved under the lunar phase of \(180\) deg (full moon). Two factors can influence \(z_{\rm phot}\) quality of the selected sample under different lunar phases. One is the lunar phase itself: under brighter lunar phases, the sky light background contributed by the moon becomes larger, resulting in larger uncertainties on photometry and eventually worse \(z_{\rm phot}\) quality. The other is sample selection effect: under brighter lunar phases, only brighter sources can be well observed, which usually have higher-SNR photometry that leads to higher-quality \(z_{\rm phot}\).
To make a more sensible evaluation of the lunar phase influence and consider the above two factors separately, we restrict the sample to galaxies observable under full moon and measure \(z_{\rm phot}\) under different lunar phases (see the red dashed lines in Figure 7), finding that the lunar phase itself has
Figure 6: \(z_{\rm phot}\) results in the shallow mode with ZEBRA run in the non-template-improvement mode (left panels) and template-improvement mode (right panels) under various lunar phases (\(0\) deg: no moon; \(45\) deg: 1/4 moon; \(90\) deg: half moon; \(135\) deg: 3/4 moon; and \(180\) deg: full moon), respectively. In each panel, blue dashed lines depict the boundary of \(z_{\rm phot}\) outliers, and the number of sources considered, bias, \(f_{\rm outlier}\), and \(\sigma_{\rm NMAD}\) are annotated.
limited influence on \(z_{\rm phot}\) results of a fixed sample. Therefore, the variation in \(z_{\rm phot}\) quality under different lunar phases in the shallow mode is primarily driven by sample-selection effect.
### \(z_{\rm phot}\) results in the deep mode
The \(z_{\rm phot}\) results with mock WFST data in the deep mode under various lunar phases, with ZEBRA run in the template-improvement mode, are shown in Figure 8. Apparently, the inclusion of large amounts of faint galaxies significantly reduces \(z_{\rm phot}\) quality: \(\sigma_{\rm NMAD}\) grows from 0.041 to 0.064 with the dimming of lunar phase; \(f_{\rm outlier}\) increases to \(26.6\%\) when there is no moon; bias\(\sim 0.005\), being almost constant and comparable to the situations of faint lunar phases in the shallow mode.
In the deep mode, \(z_{\rm phot}\) quality shows a stronger variation with lunar phase than in the shallow mode, as shown in Figure 9. However, this does not mean that dimming of moonlight will cause \(z_{\rm phot}\) quality to decrease for a fixed galaxy sample. When we consider the fixed sample of galaxies observable under full moon, we find that dimming of the sky background caused by moonlight slightly reduces photometric uncertainties and thus improves \(z_{\rm phot}\) quality, e.g., \(f_{\rm outlier}\) decreasing from \(17.1\%\) (full moon) to \(\leq 14\%\) (no moon) (see the red dashed lines in Figure 9). Thus, the downgrade of \(z_{\rm phot}\) quality under fainter lunar phases is again a sample-selection effect:
Figure 7: Dependences of \(\sigma_{\rm NMAD}\), \(f_{\rm outlier}\), and bias on lunar phase as well as distribution of \(\Delta z/(1+z_{\rm input})\) in the shallow mode, with the full mock sample and the specific sample of galaxies observable under full moon indicated by the black and red dashed lines and histogram, respectively. In the bottom-right panel, the lunar phase is fixed to 90 \(\rm deg\); the plus sign and its horizontal error bar show the median and 1-\(\sigma\) range of \(\Delta z/(1+z_{\rm input})\).
dimmer moonlight in the deep mode enables the detection of fainter populations of galaxies, which often exhibit poorer photometry qualities; consequently, this leads to a continuous decrease in the accuracy and reliability of \(z_{\rm phot}\) estimation. Therefore, we conclude that lunar phase only has negligible or very slight effects on \(z_{\rm phot}\) quality for a given sample of galaxies; however, it can have a strong influence on sample selection, resulting in "apparent" variation of \(z_{\rm phot}\) quality across different samples.
### Comparison with other \(z_{\rm phot}\) results
We compare our WFST \(z_{\rm phot}\) results with some relevant works; for simplicity, we fix the lunar phase in the WFST mock data to 90 deg (half moon) here. Figure 10 shows \(\Delta z/(1+z_{\rm input})\) as a function of \(r\)-band magnitude in the shallow mode and deep mode (see the black contours), respectively. Overall, the bright sources probed in the shallow mode show a tighter distribution than the faint sources reached in the deep mode, with the scatter
Figure 8: Similar to Figure 6, but for \(z_{\rm phot}\) results in the deep mode with ZEBRA run in the template-improvement mode under various lunar phases.
of \(\Delta z/(1+z_{\rm input})\) of the latter being \(\sim 2\)-3 times larger than that of the former. The red curves in Figure 10 show the average cumulative rms deviation between \(z_{\rm phot}\) and \(z_{\rm spec}\) as a function of \(r\)-band magnitude in the SDSS survey early data release (using the \(ugriz\)-band photometry; Csabai et al. 2003), where \(z_{\rm phot}\) were derived with a hybrid technique (empirical and template fitting methods) to calibrate galaxy SED templates to improve \(z_{\rm phot}\) quality, utilizing a training set of galaxies with secure \(z_{\rm spec}\). We find that, at \(m_{r}<22\), our \(\Delta z/(1+z_{\rm input})\) scatter is generally comparable to or smaller than that of Csabai et al. (2003). This is partly because their training set of galaxies is restricted to the bright population, which makes it difficult to constrain \(z_{\rm phot}\) scatter toward the faint end. Recently, Yang & Shen (2023) estimated \(z_{\rm phot}\) of galaxies and quasars in the Southern Hemisphere DES wide survey based on a Bayesian analysis algorithm in the multi-color space, using the \(grizY\)-band photometry. We show the standard deviation of \(\Delta z/(1+z_{\rm input})\) of their galaxies in the blue bars in Figure 10, which is comparable to our result in the shallow mode.
Figure 11 shows \(\Delta z/(1+z_{\rm input})\) as a function of \(z_{\rm input}\) in the shallow mode and deep mode, respectively, in comparison with several other works. In general, our \(\Delta z/(1+z_{\rm input})\) shows a smooth distribution in each smaller redshift bin; the biases and scatters of our \(z_{\rm phot}\) are smaller than many quoted results from other works up to \(z\sim 3\). This may be because the training sets we use to improve the galaxy SED templates are randomly selected, thereby having good coverage of various galaxy properties and being representative of the full galaxy sample (see Figures 4 and 5). However, in real observations, \(z_{\rm spec}\) of the training sets would be mostly limited to bright sources and low redshifts, making it difficult to cover well the full properties of the galaxy sample or the parameter space of the
Figure 9: Same as Figure 7, but for dependences of \(\sigma_{\rm NMAD}\), \(f_{\rm outlier}\), and bias on lunar phase as well as distribution of \(\Delta z/(1+z_{\rm input})\) in the deep mode.
templates adopted here; therefore, a nonnegligible effect on actual biases and scatters of our \(z_{\rm phot}\) in real observations would be expected.
Figure 12 shows \(\sigma_{\rm NMAD}\) and \(f_{\rm outlier}\) as a function of \(z_{\rm input}\) in the shallow mode and deep mode, respectively, in comparison with the aforementioned works. Again, our \(z_{\rm phot}\) results are overall in line with those in the literature. At \(z<1.5\), our \(z_{\rm phot}\) quality is comparable to those of most other works, including results based on
Figure 11: \(\Delta z/(1+z_{\rm input})\) as a function of \(z_{\rm input}\) in the shallow mode (left) and deep mode (right), respectively. Also shown for comparison are those from the HSC survey (using convolutional neural network for \(z_{\rm phot}\) computation, NetZ; cyan; Schuldt et al. 2021), second Red-Sequence Cluster Survey (using Direct Empirical Photometric method, DEmP; green; Hsieh & Yee 2014), HSC-SSP survey (using K Nearest Neighbor, KNN; blue; Zou et al. 2022), DES survey (using KNN; brown; Zou et al. 2022), DESI survey (using KNN; purple; Zou et al. 2022), HSC survey (using DEmP; orange; Tanaka et al. 2018; no error bars provided), HSC survey (using Nearest Neighbor, NNPz; gray; Tanaka et al. 2018; no error bars provided), and SDSS survey (using random forest regression; pink; Carliles et al. 2010).
Figure 10: \(\Delta z/(1+z_{\rm input})\) as a function of \(r\)-band magnitude in the shallow mode (left) and deep mode (right), respectively. The black envelopes show the 2-\(\sigma\) and 3-\(\sigma\) contours surrounding the peak distributions. For comparison, the red curves show the derived average cumulative rms deviation of SDSS galaxies based on \(ugriz\)-band photometry as a function of \(r\)-band magnitude (Csabai et al. 2003); the blue horizontal bars indicate the single-value (i.e., derived with the entire sample) standard deviation of DES galaxies based on \(grizY\)-band photometry (Yang & Shen 2023).
convolutional neural networks. At \(z\geq 1.5\), both our \(z_{\rm phot}\) results and the quoted results deteriorate; our \(f_{\rm outlier}\) is larger than the results based on the 5-band HSC photometry that includes the near-infrared \(Y\) band conducive to \(z_{\rm phot}\) improvement at high redshifts. In contrast, our \(\sigma_{\rm NMAD}\) remains largely constant and acceptably small both in the shallow mode and deep mode, and within the full redshift range of 0-3 explored here.
It is clear that at low redshifts (\(z<1.5\)), to a certain degree, machine-learning procedures can effectively further improve \(z_{\rm phot}\) results compared to the traditional template-fitting techniques, which is usually done by applying a large training sample with secure \(z_{\rm spec}\) and high-quality observed SEDs. At higher redshifts (\(z\geq 1.5\)), however, such a training sample would become very incomplete, which makes it difficult to cover the full parameter space of all observed sources; thus, it is still unlikely to precisely constrain uncertainties of \(z_{\rm phot}\) measurement at high redshifts simply based on machine learning. At \(z\geq 1.5\), the traditional template-fitting technique still shows advantages in some respects, e.g., as shown in Figure 12, our \(\sigma_{\rm NMAD}\) outperforms that of machine-learning results, because ZEBRA can extend the known templates in the multi-parameter space and improve the fitting result by creating new templates and optimizing their shapes to be adaptive to galaxy multiwavelength photometry. However, the ZEBRA template-improvement procedure does not seem to effectively reduce \(f_{\rm outlier}\) at \(z\geq 1.5\), mainly due to misidentification of spectral breaks or other spectral features in galaxy SEDs owing to the limited \(ugriz\)-band photometry. In contrast, the most recent machine learning methods based on the Direct Empirical
Figure 12: \(\sigma_{\rm NMAD}\) and \(f_{\rm outlier}\) as a function of \(z_{\rm input}\) in the shallow mode (left) and deep mode (right), respectively. The comparison surveys are the same as those in Figure 11.
Photometric method (DEmP; Hsieh & Yee 2014) can reduce \(f_{\rm outlier}\) at high redshifts to a large extent. Therefore, in the future, we can combine machine learning methods with adaptive template-fitting procedures to further improve WFST \(z_{\rm phot}\) quality.
### Improvement of \(z_{\rm phot}\) quality with the addition of other data
We further investigate the improvement of WFST \(z_{\rm phot}\) quality by including mock data from the China Space Station Telescope (_CSST, to be launched around 2024_; Zhan 2011) and _Euclid_ space telescope (launched in July 2023; Laureijs et al. 2012), both of which can provide additional high-quality ultraviolet and/or near-infrared data in large sky areas that are critically supplementary to WFST data.
We consider the _CSST_\(NUV\)- and \(y\)-band mock data, whose photometric errors are measured via SNR (Ubeda 2011):
\[{\rm SNR}=\frac{C_{s}t}{\sqrt{C_{s}t+N_{\rm pix}(B_{\rm sky}+B_{\rm det})t+N_{ \rm pix}N_{\rm read}R_{n}^{2}}}, \tag{6}\]
where \(t\) is the exposure time and \(N_{\rm pix}\) is the number of detector pixels covered by a source. \(N_{\rm pix}\) is 16 by default, corresponding to the case of a point source in the image; changing \(N_{\rm pix}\) value does not significantly alter the final result. \(N_{\rm read}\) is the number of detector readouts, \(B_{\rm det}\) is the detector dark current, and \(R_{n}\) is the read noise. Default parameter settings of \(t=300\) s, \(N_{\rm read}=2\), \(B_{\rm det}=0.02~{}e^{-}~{}s^{-1}~{}{\rm pixel}^{-1}\), and \(R_{\rm n}=5~{}e^{-}~{}{\rm pixel}^{-1}\) are adopted. \(C_{s}\) is the count rate from the source in units of \(e^{-}~{}s^{-1}\). \(B_{\rm sky}\) in Equation 6 is the sky background in \(e^{-}~{}s^{-1}~{}{\rm pixel}^{-1}\). For more details about the CSST mock flux and error estimation, we refer readers to Section 2.3 in Cao et al. (2018).
We consider the _Euclid_ \(Y_{\rm E}\)-, \(J_{\rm E}\)-, and \(H_{\rm E}\)-band mock data. Since we do not have specific details of _Euclid_ (such as those of _CSST_ shown in Equation 6), we adopt photometric errors in the similar \(Y\), \(J\), and \(H\) bands of the VISTA survey for approximation, i.e., photometric errors of mock _Euclid_ data are directly taken from the Muzzin et al. (2013) catalog, which are scaled proportionally to mock \(Y_{\rm E}\)-, \(J_{\rm E}\)-, and \(H_{\rm E}\)-band fluxes. Given that there is a slight bias between the ground-based VISTA telescope and _Euclid_, we apply a constant conversion factor to convert the VISTA errors to the _Euclid_ mock errors, which is defined as the ratio of flux error between the _CSST_ \(y\) band and VISTA \(Y\) band for each source at the given magnitude. We then compute mock fluxes and flux errors in the _CSST_ \(NUV\), \(y\) and _Euclid_ \(Y_{\rm E}\), \(J_{\rm E}\) and \(H_{\rm E}\) bands, which are subsequently combined with WFST mock data for \(z_{\rm phot}\) improvement.
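A minimal sketch of this per-source error rescaling is given below; all argument names are assumptions, and the input arrays are matched per source.

```python
def euclid_mock_error(f_mock, f_vista, err_vista, err_csst_y, err_vista_y):
    """Approximate per-source Euclid Y_E/J_E/H_E flux errors by rescaling
    the catalog VISTA errors (a sketch of the procedure described above)."""
    scaled = err_vista * (f_mock / f_vista)       # proportional to mock flux
    ground_to_space = err_csst_y / err_vista_y    # per-source conversion
    return scaled * ground_to_space
```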
Figure 13 shows the \(z_{\rm phot}\) results in the deep mode with the addition of 5-band mock data from _CSST_ and _Euclid_. It is clear that the \(z_{\rm phot}\) quality is significantly improved (cf. Figure 8), because the 10-band mock photometry that well covers the wavelength range from ultraviolet to near infrared is vital for both the ZEBRA photometry-check mode and template-improvement mode. In the non-template-improvement mode, \(f_{\rm outlier}\) and \(\sigma_{\rm NMAD}\) are effectively reduced to \(\sim 5\%\) and \(\sim\)0.03, respectively; lunar phase has little influence on \(z_{\rm phot}\) results, mainly because mock _CSST_ and _Euclid_ data are almost unaffected by lunar phase. In the template-improvement mode, \(f_{\rm outlier}\) and \(\sigma_{\rm NMAD}\) are further reduced to \(\sim 1\%\) and \(\sim\)0.02, respectively; meanwhile, the bias is also better calibrated, being \(\sim 0.0\).
Fulfillment of many scientific goals is heavily dependent on \(z_{\rm phot}\) accuracy, e.g., \(z_{\rm phot}\) for future photometric weak lensing surveys need to at least achieve \(\sigma_{\rm NMAD}<0.05\), with many relevant studies setting \(\sigma_{\rm NMAD}\simeq 0.02\) as a goal (e.g., Zhan 2006; LSST-Collaboration et al. 2009), which is crucial to depict the
redshift-dependent weak lensing signal behind clusters of galaxies within the framework of the dark energy equation of state (Brimioulle et al. 2008). As shown above, such requirements on \(z_{\rm phot}\) accuracy can be met when the mock WFST, _CSST_, and _Euclid_ data are combined.
## 5 Summary
In this paper, we conduct a preliminary study that assesses \(z_{\rm phot}\) quality based on the mock WFST \(ugriz\)-band photometry in the shallow mode and deep mode. We adopt the multiwavelength photometric catalog in the COSMOS/UltraVISTA field to generate mock WFST data, as it has deeper limiting magnitudes than WFST observations; during this process, mock fluxes are computed through the convolution of galaxy SEDs with the 5 WFST filter transmission curves, with interstellar dust extinction and IGM absorption taken into account, and mock flux errors are evaluated through the consideration of instrumental parameters, sky background, and systematic errors.
We calculate \(z_{\rm phot}\) using the ZEBRA code, which can generate new adaptive templates that better describe observed galaxy SEDs. We find bias \(\lesssim 0.006\), \(\sigma_{\rm NMAD}\lesssim 0.03\), and \(f_{\rm outlier}\lesssim 5\%\) in the shallow mode and bias \(\approx 0.005\), \(\sigma_{\rm NMAD}\approx 0.06\), and \(f_{\rm outlier}\approx 17\%\)-\(27\%\) in the deep mode, respectively, under various lunar phases; lunar phase has limited influence on \(z_{\rm phot}\) results, and the decrease of \(z_{\rm phot}\) quality with dimming of lunar phase is primarily caused by a sample-selection effect, i.e., the involvement of increasingly fainter sources that have larger photometric uncertainties.
We compare our WFST \(z_{\rm phot}\) results with those of some relevant works, finding general agreement between them. Given that the template-fitting and machine-learning approaches have their
Figure 13: Similar to Figure 6, but for \(z_{\rm phot}\) results in the deep mode with the addition of mock data from the _CSST_-\(NUV\), _CSST_-\(y\), _Euclid_-\(Y_{\rm E}\), _Euclid_-\(J_{\rm E}\), and _Euclid_-\(H_{\rm E}\) bands.
respective merits, it would be sensible to use all these methods jointly to further improve WFST \(z_{\rm phot}\) quality in the future.
Finally, we compute \(z_{\rm phot}\) by combining the mock WFST data with ultraviolet and near-infrared data from _CSST_ and _Euclid_. As expected, we find significant improvement in \(z_{\rm phot}\) quality with \(f_{\rm outlier}\approx 1\%\) and \(\sigma_{\rm NMAD}\approx 0.02\), thanks to the full wavelength coverage from ultraviolet to near-infrared. Such high-quality \(z_{\rm phot}\) can help fulfill many scientific goals that highly rely on \(z_{\rm phot}\) accuracy.
###### Acknowledgements.
We thank the referee for a constructive report. We thank Wen-tao Luo, Lu-lu Fan, and Ning Jiang for valuable discussions and comments. This work is supported by the National Key R&D Program of China (2022YFF0503401), the National Natural Science Foundation of China (12203047, 12025303, 11890693, and 12003031), the science research grants from the China Manned Space Project with NO. CMS-CSST-2021-A06, the Fundamental Research Funds for the Central Universities (WK3440000006 and WK203000057), the K.C. Wong Education Foundation, and the Cyrus Chun Ying Tang Foundations.
|
2310.12274 | An Image is Worth Multiple Words: Discovering Object Level Concepts
using Multi-Concept Prompt Learning | Textural Inversion, a prompt learning method, learns a singular text
embedding for a new "word" to represent image style and appearance, allowing it
to be integrated into natural language sentences to generate novel synthesised
images. However, identifying multiple unknown object-level concepts within one
scene remains a complex challenge. While recent methods have resorted to
cropping or masking individual images to learn multiple concepts, these
techniques often require prior knowledge of new concepts and are
labour-intensive. To address this challenge, we introduce Multi-Concept Prompt
Learning (MCPL), where multiple unknown "words" are simultaneously learned from
a single sentence-image pair, without any imagery annotations. To enhance the
accuracy of word-concept correlation and refine attention mask boundaries, we
propose three regularisation techniques: Attention Masking, Prompts Contrastive
Loss, and Bind Adjective. Extensive quantitative comparisons with both
real-world categories and biomedical images demonstrate that our method can
learn new semantically disentangled concepts. Our approach emphasises learning
solely from textual embeddings, using less than 10% of the storage space
compared to others. The project page, code, and data are available at
https://astrazeneca.github.io/mcpl.github.io. | Chen Jin, Ryutaro Tanno, Amrutha Saseendran, Tom Diethe, Philip Teare | 2023-10-18T19:18:19Z | http://arxiv.org/abs/2310.12274v2 | # An Image is Worth Multiple Words: Discovering Object Level Concepts using Multi-Concept Prompt Learning
###### Abstract
Textural Inversion, a prompt learning method, learns a singular embedding for a new "word" to represent image style and appearance, allowing it to be integrated into natural language sentences to generate novel synthesised images. However, identifying and integrating multiple object-level concepts within one scene poses significant challenges even when embeddings for individual concepts are attainable. This is further confirmed by our empirical tests. To address this challenge, we introduce a framework for _Multi-Concept Prompt Learning (MCPL)_, where multiple new "words" are simultaneously learned from a single sentence-image pair. To enhance the accuracy of word-concept correlation, we propose three regularisation techniques: _Attention Masking (AttMask)_ to concentrate learning on relevant areas; _Prompts Contrastive Loss (PromptCL)_ to separate the embeddings of different concepts; and _Bind adjective (Bind adj.)_ to associate new "words" with known words. We evaluate via image generation, editing, and attention visualisation with diverse images. Extensive quantitative comparisons demonstrate that our method can learn more semantically disentangled concepts with enhanced word-concept correlation. Additionally, we introduce a novel dataset and evaluation protocol tailored for this new task of learning object-level concepts.
## 1 Introduction
In nurseries, toddlers are shown pictures to learn new things. Teachers talk about each picture using sentences with new ideas, like sentences with unfamiliar words. In the Figure 1 right (ours) example, the describing sentence for the images is: _"a photo of brown * on a rolling & at time square"_. Here, _"* (teddy bear)"_ and _"& (skateboard)"_ are the unfamiliar concepts to be learned. This way of
Figure 1: **Multi-concepts learning and composition with previous vs. our approach. Textural Inversion (left) can only learn a single concept from each image and fails at composing multiple ones. In contrast, our method (right) can learn, compose, and edit multiple concepts simultaneously. The learning input consists of image(s) accompanied by descriptive sentences with learnable prompts, represented as coloured pseudo words. The average cross-attentions and the corresponding mask of the learned prompts denote a disentangled and precise prompt-concept correlation.**
learning with simple hints is more economical and preferred over the current method of teaching machines using detailed contours and masks.
Recent research (Gal et al. (2022); Ruiz et al. (2022)) shows that the appearance and style of an image can be encapsulated as a cohesive concept via a learned prompt ("word"). The textural embedding of this new prompt is optimised in the frozen embedding space of a pre-trained text-to-image diffusion model to reconstruct several example input images. The concept conveyed by the learned prompt can then be composed into natural language sentences to generate or edit various novel scenes. Despite the significant interest in object-level image editing, (Wu et al., 2020; Meng et al., 2021; Hertz et al., 2022), Gal et al. (2022) points out that recent prompt learning methods struggle with _learning and composing multiple prompts within multi-object scenes_ (Figure 1 left).
In this work, we start with a motivational study to investigate the capabilities and limitations of existing prompt learning methods in multi-concept settings. Our findings confirm that while applying careful sampling such as manual masking or cropping yields distinct embeddings, object-level learning and editing without manual intervention remains challenging. Motivated by this finding, we introduce _Multi-Concept Prompt Learning (MCPL)_ framework Figure 2 (top) for simultaneous learning of multiple prompts from one scene.
However, without further assumptions on the embedding relationships, jointly learning multiple prompts is problematic. The model may disregard the semantic associations and instead prioritise optimising multiple embedding vectors for optimal image-level reconstruction. To enhance the accuracy of prompt-object level correlation, we propose the following regularisation techniques: 1) To ensure a concentrated correlation between each prompt-concept pair, we propose _Attention Masking (AttnMask)_, restricting prompt learning to relevant regions defined by a cross-attention-guided mask. 2) Recognising that multiple objects within a scene are semantically distinct, we introduce _Prompts Contrastive Loss (PromptCL)_ to facilitate the disentanglement of prompt embeddings associated with multiple concepts. 3) To further enable accurate control of each learned embedding, we bind each learnable prompt with a related descriptive adjective word, referred to as _Bind adj._, that we empirically observe has a strong regional correlation. The middle and bottom rows of Figure 2 illustrate the proposed regularisation techniques.
In this work we implement our proposed method based on Textural Inversion by Gal et al. (2022), but the method can be adapted to other prompt learning methods such as Dreambooth by Ruiz et al. (2022). To our knowledge, our technique is the first to address the novel and challenging problem of learning and composing multiple concepts within multi-object scenes. To evaluate this task, we assembled datasets of multi-concept images featuring a total of 16 categories of object-level concepts. These datasets include both natural images, familiar to the pre-trained model, and out-of-distribution biomedical images, each equipped with object-level masks. We evaluate and demonstrate that our framework enables enhanced precision in object-level concept learning, synthesis, editing, quantification, and understanding of relationships between multiple objects, as exemplified in Figure 1 (right) and further illustrated in Figure 9. Through extensive quantitative analysis of approximately 4000 learned object-level embeddings, using both t-SNE and four robust, pre-trained text/image embedding spaces, we validate that our method excels in discerning semantically distinct object-level concepts, ensuring enhanced prompt-to-concept correlation.
## 2 Related Works
**Prompt learning for image concept inversion.** Prompt tuning, first proposed by Lester et al. (2021), has been utilised to expedite the tuning of large language models for downstream tasks. Jia et al. (2022); Zhou et al. (2022) further extended this approach to vision-language models such as CLIP (Radford et al. (2021)). In the context of text-guided image synthesis, prompt learning enables connecting the appearance and style of an unseen image to a learnable prompt and transferring them to newly generated images, as demonstrated by Textural Inversion (Gal et al. (2022)) and DreamBooth (Ruiz et al. (2022)). To better composite multiple concepts, Kumari et al. (2023) proposed to fine-tune a subset of cross-attention layers. However, this approach learns multiple concepts separately from carefully sampled images rather than from the same scene.
**Mask and text-driven local image editing.** In the context of diffusion models, Meng et al. (2021) first proposed SDEdit for mask-guided image-to-image style translation. Lugmayr et al. (2022) developed RePaint to enable mask-guided local image editing. Avrahami et al. (2022) further conditioned local editing with a text condition. These methods use manual mask priors to guide local image editing. A set of recent works showed that text-guided local object-level editing can be achieved without using a mask prior but instead the attention-derived masks (Hertz et al. (2022); Tumanyan et al. (2023); Patashnik et al. (2023)). The success of these approaches heavily relies on the accurate text-concept semantic correlation in the pre-trained model and is limited to in-distribution concepts.
**Disentangled per-concept image editing.** Interpretable and disentangled per-concept image manipulation has garnered significant interest in the literature on Generative Adversarial Networks (GANs). Traditional approaches often focus on layer-wise or channel-wise control within a pre-trained generator network. The goal is to identify and modify a subset of parameters responsible for specific concepts (Brock et al., 2018; Karras et al., 2020; Wu et al., 2020). Although our work is not centred on GAN-based approaches, we emphasise that we directly optimise multiple embeddings rather than network parameters. This methodology has been shown to better adapt to unseen and novel concepts by Gal et al. (2022).
## 3 Methods
In this section, we outline the preliminaries in Section 3.1 and present a motivational study in Section 3.2. These tests investigate the challenges of applying existing image-level prompt learning methods in identifying and integrating multiple object-level concepts within one scene. Inspired by these results, we introduce the _Multi-Concept Prompt Learning (MCPL)_ framework for simultaneous learning of multiple prompts from one scene. To address the complexity of optimising multiple object-level prompts in tandem with a single image-level reconstruction goal, we propose several regularisation techniques in Section 3.4. _The code will be released here upon publication._
### Preliminaries: prompt learning in text-to-image diffusion model
**Text-guided diffusion models** are probabilistic generative models trained to approximate the training data distribution through a process of incremental denoising from Gaussian random noise, conditioned on text embeddings. Specifically, a denoising network \(\epsilon_{\theta}\) is trained to map an initial noise map \(\mathbf{\epsilon}\sim\mathcal{N}(\mathbf{0},\mathbf{I})\) and conditional textual embedding \(v=c_{\phi}(p)\) to generate images \(\tilde{x}\) close to the
Figure 2: **Method overview.**_MCPL_ takes a sentence (top-left) and a sample image (top-right) as input, feeding them into a pre-trained text-guided diffusion model comprising a text encoder \(c_{\phi}\) and a denoising network \(\epsilon_{\theta}\). The string’s multiple prompts are encoded into a sequence of embeddings which guide the network to generate images \(\tilde{X}_{0}\) close to the target one \(X_{0}\). MCPL focuses on learning multiple learnable prompts (coloured texts), updating only the embeddings \(\{v^{*}\}\) and \(\{v^{k}\}\) of the learnable prompts while keeping \(c_{\phi}\) and \(\epsilon_{\theta}\) frozen. We introduce _Prompts Contrastive Loss (PromptCL)_ to help separate multiple concepts within learnable embeddings. We also apply _Attention Masking (AttnMask)_, using masks based on the average cross-attention of prompts, to refine prompt learning on images. Optionally we associate each learnable prompt with an adjective (e.g., “brown” and “rolling”) to improve control over each learned concept, referred to as _Bind adj._
target one \(x\). Here \(c_{\phi}\) is the text encoder and \(p\) is the text prompt. To enable sequential denoising, \(c_{\phi}\) and \(\epsilon_{\theta}\) are jointly optimised to minimize the loss:
\[L_{DM}=L_{DM}(x,\tilde{x}):=E_{x_{0},\epsilon\sim N(0,I),t\sim\text{Uniform}(1,T )}\|\epsilon-\epsilon_{\theta}(x_{t},t,c_{\phi}(p))\|^{2}, \tag{1}\]
where \(x_{t}\) is obtained by adding noise to the initial image \(x_{0}\) at a given time step \(t\) in the set \(T\). Intuitively, the objective here is to correctly remove the noise added to an image conditioned to a given text prompt. During inference, the pre-trained model iteratively eliminates noise from a new random noise map to generate a fresh image. Our work builds on Latent Diffusion Models (LDMs) (Rombach et al., 2022), which encode images \(x\) with an encoder \(\mathcal{E}\) to get latent representation \(z=\mathcal{E}(x)\), prior to the diffusion process and decode after generation to conserve computation.
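For concreteness, the following minimal Python sketch (with assumed tensor shapes and helper names; it is not the authors' released code) illustrates one evaluation of the denoising objective in equation 1: sample a time step and a noise map, noise the image, and regress the noise prediction conditioned on the text embedding.

```python
import torch

def diffusion_loss(eps_theta, text_encoder, x0, prompt_tokens, alphas_cumprod):
    """One evaluation of Eq. (1). eps_theta: denoising network; x0: images (B, C, H, W);
    alphas_cumprod: cumulative noise schedule of length T."""
    b = x0.shape[0]
    t = torch.randint(0, len(alphas_cumprod), (b,), device=x0.device)  # t ~ Uniform(1, T)
    eps = torch.randn_like(x0)                                          # eps ~ N(0, I)
    a_t = alphas_cumprod[t].view(b, 1, 1, 1)
    x_t = a_t.sqrt() * x0 + (1.0 - a_t).sqrt() * eps                    # forward noising
    v = text_encoder(prompt_tokens)                                     # v = c_phi(p)
    return ((eps - eps_theta(x_t, t, v)) ** 2).mean()                   # ||eps - eps_theta||^2
```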
The prompt learning method by Gal et al. (2022) is aimed at identifying the text embedding \(v^{*}\) for a new prompt \(p^{*}\) in a pre-trained text-guided diffusion model. Given a few (3-5) example images representing a specific subject or concept, the method optimises \(v^{*}\) in the frozen latent space of a pre-trained text encoder \(c_{\phi}\). The objective is to generate an image via the denoising network \(\epsilon_{\theta}\) that closely resembles the example images when conditioned on \(v^{*}\). The optimisation is guided by the diffusion model loss defined in equation 1, updating only \(v^{*}\) while keeping \(c_{\phi}\) and \(\epsilon_{\theta}\) frozen. Textual Inversion is also trained with random sentences to generalise the learning; refer to Appendix A.6 for the full detailed algorithm.
**Cross-attention layers** play a pivotal role in directing the text-guided diffusion process. Within the denoising network, \(\epsilon_{\theta}\), the textual embedding, \(v=c_{\phi}(p)\), interacts with the image embedding, \(z=\mathcal{E}(x)\), via the cross-attention layer. Here, \(Q=f_{Q}(z)\), \(K=f_{K}(v)\), and \(V=f_{V}(v)\) are acquired using learned linear layers \(f_{Q},f_{K},f_{V}\). As Hertz et al. (2022) highlighted, the per-prompt cross-attention maps, \(M=\text{Softmax}(QK^{T}/\sqrt{d})\), correlate to the similarity between \(Q\) and \(K\). Therefore the average of the cross-attention maps over all time steps reflects the crucial regions corresponding to each prompt word, as depicted in Figure 2. In this study, the per-prompt attention map serves as one of the primary matrices to evaluate the correlation between prompt and concept. Our results will demonstrate that without proper constraints, the attention maps of newly learned prompts are not consistently disentangled and may lack accurate prompt-concept correlation.
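The per-prompt attention maps can be computed as in the following sketch (assumed shapes; `f_Q` and `f_K` stand for the learned linear projections of the pre-trained model):

```python
import torch

def cross_attention_maps(z, v, f_Q, f_K):
    """z: image tokens (B, N_img, d_z); v: text embeddings (B, N_txt, d_v).
    Returns M = Softmax(Q K^T / sqrt(d)) of shape (B, N_img, N_txt)."""
    Q, K = f_Q(z), f_K(v)
    d = Q.shape[-1]
    M = torch.softmax(Q @ K.transpose(1, 2) / d ** 0.5, dim=-1)
    return M  # M[..., j] is the spatial attention map of prompt token j
```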
### Motivational study: is image-level prompt learning sufficient for object-level multi-concept learning?
**Do multiple distinct embeddings arise from the same image?** To understand the challenges in learning and composing multiple concepts, we explored whether _Textual Inversion_ can discern semantically distinct concepts from processed images, each highlighting a single concept. Following Wu et al. (2020), we used images with manual masks to isolate concepts, as seen in Figure 3 (left). We applied _Textual Inversion_ to these images to learn embeddings for the unmasked or masked images. Our findings indicate that when focusing on isolated concepts, _Textual Inversion_ can successfully learn distinct embeddings, as validated by the generated representations of each concept.
**Is separate learning of concepts sufficient for multi-object image generation?** While human-guided, separate learning of each concept in a multi-object scene deviates from our objective, it is valuable to evaluate its effectiveness. Specifically, we use Textual Inversion to separately learn concepts such as "ball" and "box" from carefully cropped images, as shown in Figure 3 (second column). We then attempt to compose images using strings that combine these concepts, such as "a photo of a green ball on orange box". Our results indicate that the accurate composition of multi-object images remains challenging, even when the individual concepts are well learned.
### Multi-Concept Prompt Learning (MCPL)
Our motivational study confirms that: 1) multiple unique embeddings can be derived from a single multi-concept image, albeit with human intervention, and 2) despite having well-learned individual concepts, synthesizing them into a unified multi-concept scene remains challenging. To address these issues, we introduce the Multi-Concept Prompt Learning (MCPL) framework. MCPL modifies Textual Inversion to enable simultaneous learning of multiple prompts within the same string. Specifically, MCPL learns a list of multiple embeddings \(\mathcal{V}=[v^{*},\dots,v^{k}]\) corresponding to multiple new prompts \(\mathcal{P}=[p^{*},\dots,p^{k}]\). The optimisation is still guided by the image-level \(L_{DM}\), but now
updating \(\{v^{*},\dots,v^{k}\}\) while keeping \(c_{\phi}\) and \(\epsilon_{\theta}\) frozen. The MCPL algorithm is outlined in Appendix A.6, Algorithm A.6. Recognising the complexity of learning multiple embeddings with a single image-generation goal, we propose three training strategies: 1) _MCPL-all_, a naive approach that learns embeddings for all prompts in the string (including adjectives, prepositions, nouns, etc.); 2) _MCPL-one_, which simplifies the objective by learning a single prompt (noun) per concept; 3) _MCPL-diverse_, where different strings are learned per image to observe variances among examples. Preliminary evaluations of the _MCPL-one_ and _MCPL-diverse_ methods on the "ball" and "box" multi-concept task are shown in Figure 3. Our findings indicate that _MCPL-one_ enhances the joint learning of multiple concepts within the same scene over separate learning. Meanwhile, _MCPL-diverse_ goes further by facilitating the learning of intricate relationships between multiple concepts.
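A minimal PyTorch sketch of this optimisation loop is given below (the stand-in `frozen_model` is a hypothetical placeholder for the frozen diffusion loss \(L_{DM}\); names and shapes are illustrative, not the authors' released code). Only the new prompt embeddings receive gradients; the text encoder and denoiser stay frozen.

```python
import torch

emb_dim, n_concepts = 768, 2
# one learnable embedding per new prompt, e.g. for "ball" and "box"
prompts = [torch.nn.Parameter(0.01 * torch.randn(emb_dim)) for _ in range(n_concepts)]
opt = torch.optim.AdamW(prompts, lr=0.005)

def frozen_model(prompt_embs):  # placeholder for L_DM with the spliced-in embeddings
    return sum((e ** 2).mean() for e in prompt_embs)

for step in range(100):          # the paper uses 6100 optimisation steps
    loss = frozen_model(prompts)  # Eq. (1), evaluated with c_phi and eps_theta frozen
    opt.zero_grad()
    loss.backward()               # gradients flow only into `prompts`
    opt.step()
```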
**Limitations of plain MCPL.** Our primary aim is to facilitate accurate interpretation and modification of multi-concept scenes. To evaluate object-level prompt-concept correlation, we visualise the average cross-attention maps for each prompt. As depicted in Figure 4, both _MCPL-one_ and _MCPL-all_ inadequately capture this correlation, especially for the target concept. These results suggest that _naively extending image-level prompt learning techniques (Gal et al., 2022) to object-level multi-concept learning poses optimisation challenges_, notwithstanding the problem reformulation efforts discussed in Section 3.3. Specifically, optimising multiple object-level prompts based on a single image-level objective proves to be non-trivial. Given the image generation loss equation 1, prompt embeddings may converge to trivial solutions that prioritize image-level reconstruction at the expense of semantic prompt-object correlations, thereby contradicting our objectives. In the next section, we introduce multiple regularisation terms to overcome this challenge.
### Regularising the learning of multiple object-level prompts
**Encouraging focused prompt-concept correlation with Attention Masking (_AttnMask_).** Previous results show that plain _MCPL_ may learn prompts focused on irrelevant areas. To correct this, we apply masks to both generated and target images over all the denoising steps (Figure 2, middle-right). These masks, derived from the average cross-attention of the learnable prompts (Figure 2, bottom row), constrain the image generation loss (equation 1) to focus on pertinent areas, thereby improving prompt-concept correlation. To calculate the mask, we compute for each learnable prompt \(p\in\mathcal{P}\) the average attention map over all time steps \(\overline{M}^{p}=1/T\sum_{t=1}^{T}M_{t}^{p}\). We then apply a threshold to produce binary maps for each learnable prompt, where \(B(\overline{M}^{p}):=\{1\text{ if }\overline{M}^{p}>k,0\text{ otherwise}\}\) and \(k=0.5\) throughout all our experiments. For multiple prompt learning objectives, the final mask \(\mathcal{M}\) is the union of the binary masks of all learnable prompts, \(\mathcal{M}=\bigcup_{p\in\mathcal{P}}B(\overline{M}^{p})\). We compute the Hadamard product of \(\mathcal{M}\) with \(x\) and \(\tilde{x}\) to derive our masked loss \(L^{\textit{AttnMask}}_{DM}\) as in equation 2. Our _AttnMask_ is inspired by Hertz et al. (2022), but reverses the same idea: the _AttnMask_ is applied over the pixel-level loss of equation 1 to constrain prompt learning to the relevant regions only.
\[L^{\textit{AttnMask}}_{DM}=L_{DM}(\mathcal{M}\odot x,\mathcal{M}\odot\tilde{x}). \tag{2}\]
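A minimal PyTorch sketch of the masking step (assumed map shapes, with a pixel-space stand-in for the masked diffusion loss of equation 2) is:

```python
import torch

def attn_mask(avg_maps, k=0.5):
    """avg_maps: per-prompt time-averaged attention maps, each (H, W), scaled to [0, 1].
    Returns the union of the per-prompt binary masks B(M^p) with threshold k."""
    masks = [(m > k).float() for m in avg_maps]
    return torch.stack(masks).amax(dim=0)  # union over learnable prompts

def masked_recon_loss(x, x_tilde, mask):
    # pixel-space stand-in for L_DM^AttnMask in Eq. (2): loss restricted to the mask
    return ((mask * x - mask * x_tilde) ** 2).mean()
```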
Figure 3: **Motivational Study with “Watch Face-Band” and “Ball-Box” Images.** Left: Embeddings are learned using Textual Inversion on both multi-concept (unmasked) and single-concept (masked) images. Right: Concepts of “ball” and “box” are learned and composed using different methods: _Textual Inversion (T.I.)_, which crops and learns each concept separately; _MCPL-one_, learning both concepts jointly from uncropped examples with a single string; and _MCPL-diverse_, accounting for per-image specific relationships. Refer to Appendix A.4 for more results.

**Encouraging semantically disentangled multi-concepts with Prompts Contrastive Loss (_PromptCL_).** _AttnMask_ focuses the learning of multiple prompts on the joint area of the target objects, eliminating the influence of irrelevant regions such as the background. However, it does not inherently promote separation between the embeddings of different target concepts. Leveraging the mutual exclusivity of multiple objects in a scene, we introduce a contrastive loss in the latent space where the embeddings are optimised. Specifically, we employ an InfoNCE loss (Oord et al., 2018), a standard in contrastive and representation learning, to encourage disentanglement between groups of embeddings corresponding to distinct learnable concepts (Figure 2, middle-left).
Concretely, at each learning step, as described in Algorithm A.6, a mini-batch of \(B\) examples with minor augmentations (e.g., random flips) is sampled, with \(N\) learnable prompts/concepts per image, yielding a set of \(BN\) embeddings, \(\{v_{b}^{n}\}_{b=1,\,n=1}^{B,\,N}\). Then, the similarity between every pair \(v_{i}\) and \(v_{j}\) of the \(BN\) samples is computed using cosine similarity, i.e. \(sim(v_{i},v_{j})=v_{i}^{\top}v_{j}/(||v_{i}||\,||v_{j}||)\). Given that our goal is to differentiate the embeddings corresponding to each prompt, we consider embeddings of the same concept as positive samples and all others as negative. Next, the contrastive loss \(l_{i,j\in B}^{\eta}\) for a positive pair \(v_{i}^{\eta}\) and \(v_{j}^{\eta}\) of each concept \(\eta\in N\) (two augmented views of the example image) is shown in equation 3, where \(\tau\) is a temperature parameter following Chen et al. (2020). The contrastive loss is computed over the \(BN\) views of the \(N\) learnable concepts. The total contrastive loss \(L_{PromptCL}\) is shown in equation 4 (left).
\[l_{i,j\in B}^{\eta}=-\log\!\left(\frac{\exp\!\big(sim(v_{i}^{\eta},v_{j}^{\eta})/\tau\big)}{\sum_{\eta=1}^{N}\sum_{j=1,j\neq i}^{B}\exp\!\big(sim(v_{i}^{\eta},v_{j}^{\eta})/\tau\big)}\right) \tag{3}\]
\[L_{PromptCL}=\frac{1}{N}\frac{1}{B}\sum_{\eta=1}^{N}\sum_{i=1}^{B}l_{i,j\in B} ^{\eta},\qquad L_{PromptCL}^{adj}=\frac{1}{NM}\frac{1}{B}\sum_{\eta=1}^{NM} \sum_{i=1}^{B}l_{i,j\in B}^{\eta} \tag{4}\]
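A minimal PyTorch sketch of this InfoNCE objective over the prompt embeddings (assumed tensor layout and an illustrative batching; not the authors' released code) is given below:

```python
import torch
import torch.nn.functional as F

def prompt_cl(v, tau=0.2):
    """v: (N, B, D) tensor of N learnable concepts over B augmented views.
    Views of the same concept are positives; all other views are negatives."""
    N, B, D = v.shape
    flat = F.normalize(v.reshape(N * B, D), dim=-1)         # unit-norm embeddings
    sim = flat @ flat.T / tau                               # cosine similarity / tau
    eye = torch.eye(N * B, dtype=torch.bool, device=v.device)
    sim = sim.masked_fill(eye, float('-inf'))               # exclude self-pairs
    labels = torch.arange(N, device=v.device).repeat_interleave(B)
    pos = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~eye
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    return -log_prob[pos].mean()                            # mean over positive pairs
```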
**Enhancing prompt-concept correlation by binding each learnable prompt with an adjective word (_Bind adj._).** An additional observation from the misaligned results in Figure 4 reveals that adjective words often correlate strongly with specific regions. This suggests that the pre-trained model is already adept at recognising descriptive concepts such as colour or the term "fluffy". To leverage this innate understanding, we propose to optionally associate one adjective word with each learnable prompt as one positive group during the contrastive loss calculation. In particular, consider \(M\) adjective words associated with \(N\) learnable prompts. The positive pair \(v_{i}^{\eta}\) and \(v_{j}^{\eta}\) of each concept is then sampled from \(\eta\in MN\) instead of \(N\). The contrastive loss is therefore now computed for \(BNM\) views of each of the \(N\) learnable concepts. The resulting total contrastive loss \(L_{PromptCL}^{adj}\) is detailed in equation 4 (right). We scale \(L_{PromptCL}^{adj}\) with a scaling term \(\gamma\) and add it to \(L_{DM}^{AttnMask}\) (equation 2), so that they have comparable magnitudes, resulting in our final loss in equation 5.
\[L=L_{DM}^{AttnMask}+\gamma L_{PromptCL}^{adj}. \tag{5}\]
**Implementation details.** Unless otherwise noted, we retain the original hyper-parameter choices of LDM (Rombach et al., 2022). All learnable embeddings were 'randomly' initialised with the embedding of the single word "photo". Our experiments were conducted using a single V100 GPU with a batch size of 4. The base learning rate was set to \(0.005\). Following LDM, we further scale the base learning rate by the number of GPUs and the batch size, for an effective rate of \(0.02\). When calculating \(L_{PromptCL}\), we apply the temperature and scaling terms \((\tau,\gamma)\) of \((0.2,0.0005)\) when _AttnMask_ is not applied, and \((0.3,0.00075)\) when _AttnMask_ is applied. All results were produced using \(6100\) optimisation steps. We find that these parameters work well for most cases.
## 4 Results
In this section, we start by qualitatively verifying our proposed regularisation terms in Section 4.1. We further quantitatively assess the efficacy of our _MCPL_ method in Section 4.2. Finally, we scrutinise multi-concept learning and composing capabilities in Section 4.3 across various tasks, such as image synthesis, editing, and multi-concept separation with visualisation of attention.
### Assessing regularisation terms with cross-attention
We start by assessing whether our proposed regularisation terms improve the accuracy of the semantic correlations between prompts and concepts. We visualise the cross-attention and segmentation
masks, as shown in Figure 4. Our visual results suggest that incorporating all of the proposed regularisation terms enhances concept disentanglement, whereas applying them in isolation yields sub-optimal outcomes (refer to full ablation results in Appendix A.5). Moreover, the results demonstrate that _MCPL-one_ is a more effective learning strategy than _MCPL-all_, highlighting the importance of excluding irrelevant prompts to maintain a focused learning objective.
### Quantitative evaluations
We collect both in-distribution natural images and out-of-distribution biomedical images over 16 object-level concepts, with all images containing multiple concepts and object-level masks. To approximate the unknown "ground truth" for disentangled embeddings per concept, we use masks in conjunction with Textual Inversion (Gal et al., 2022). _However, we note that these estimated embeddings serve only as our best guess for an unknown true value._ Our evaluation involves four variations of our approach and compares them against three baseline methods.
**Dataset.** For the in-distribution natural images dataset, we generate variations of target objects using local text-driven editing, as proposed by Patashnik et al. (2023). This minimizes the influence of irrelevant elements like background. This approach also produces per-text local masks based on attention maps, assisting us in getting our best approximation for the "ground truth" of disentangled embeddings. We generate five sets of natural images containing 10 object-level concepts. For the out-of-distribution biomedical image dataset, we assemble three sets of radiological images featuring six organ/lesion concepts. These images are sourced from three public MRI segmentation datasets: heart myocardial infarction (Lalande et al., 2020), prostate segmentation (Antonelli et al., 2022), and Brain Tumor Segmentation (BraTS) (Menze et al., 2014). Each dataset includes per-concept masks. For both natural and biomedical datasets, we collected 40 images for each concept. Figure 5 gives some examples of the prepared datasets.
**Baselines and experiments.** We evaluate the effectiveness of four learning methods: 1) _Textual Inversion_ applied to unmasked multi-concept images; 2) _Textual Inversion_ applied to each masked object, serving as our best estimate of a "ground truth"; 3) _MCPL-all_, our naive adaptation of the _Textual Inversion_ method to the multi-concept learning goal, which acts as the "state-of-the-art (SoTA)" given the absence of prior multi-concept learning methods; 4) _MCPL-one_, our proposed method. For our method, we additionally examine four variations to scrutinise the impact of the regularisation terms discussed in Section 3.4. It is important to note that, despite the use of a "ground truth", all learning is performed on unmasked images. To assess the robustness of each learning method, we randomly sample four images to learn an embedding, leading to 10 learned embeddings per concept. The experiments were executed on a single V100 GPU, with each run taking approximately one hour, resulting in a total computational cost of around 1940 GPU-hours (or 80 days on a single GPU). We employed various metrics to evaluate the four methods.

Figure 4: **Enhancing object-level prompt-concept correlation in MCPL using the proposed _AttnMask_, _PromptCL_ and _Bind adj._ regularisation techniques.** We compare our best results of _MCPL-one_ applying all regularisation terms against the plain _MCPL-one_, using a “Ball and Box” example (left), and against the plain _MCPL-all_, using a “Hamster and Watermelon” example (right). We use the average cross-attention maps and the _AttnMask_ to assess the accuracy of correlation.

Figure 5: **Quantitative evaluation dataset examples.** We prepared five sets of in-distribution natural images and three sets of out-of-distribution biomedical images, each containing two concepts, resulting in a total of 16 concepts. Visualisation of the full sets is available in Appendix A.7.
**Investigating the disentanglement of learned embeddings with t-SNE.** To assess disentanglement, we begin by visualising the t-SNE projection of the learned features (Van der Maaten & Hinton, 2008). The results, depicted in Figure 7, encompass both natural and biomedical datasets. They illustrate that our _MCPL-one_, combined with all regularisation terms, can effectively distinguish all learned concepts, in contrast to the "SoTA". It is noteworthy that, despite our best efforts to estimate the "ground truth" with masking, their learned embeddings exhibit less disentanglement when compared to ours. This performance discrepancy suggests that _applying an image-level learning method for object-level concepts with focused examples is less beneficial, as it excludes inter-object relationships within the scene_. This finding confirms the necessity of our proposed method.
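Such a projection can be produced with a few lines of Python (a sketch with placeholder data, not the paper's exact embeddings):

```python
import numpy as np
from sklearn.manifold import TSNE

emb = np.random.randn(160, 768)    # placeholder: 16 concepts x 10 learned embeddings
xy = TSNE(n_components=2, perplexity=10, random_state=0).fit_transform(emb)
# scatter-plot `xy` coloured by concept id to check that concepts cluster apart
```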
**Embedding similarity compared to the estimated "ground truth".** To assess the preservation of per-concept semantic and textural details, we calculate both prompt and image fidelity. This evaluation follows prior research by Gal et al. (2022) and Ruiz et al. (2022), but, differently, we perform the calculations at the object level. Specifically, we compare the masked "ground truth" (as in Figure 5) and the generated image masked by its own _AttnMask_ (as in Figure 6). We generated 20 masked images per concept, resulting in 320 generated images. Prompt fidelity is determined by measuring the average pairwise cosine similarity between the embeddings learned from the estimated "ground truth" and those from the generated masked images, in the pre-trained embedding space of BERT (Devlin et al., 2018). For image fidelity, we compare the average pairwise cosine similarity in the pre-trained embedding spaces of CLIP (Radford et al., 2021), DINOv1 (Caron et al., 2021) and DINOv2 (Oquab et al., 2023), all based on the ViT-S backbone. The results in Figure 8 show that our method combined with all the proposed regularisation terms improves both prompt and image fidelity, consistently outperforming all baselines across both in-/out-of-distribution concepts and over all four embedding spaces.
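The fidelity computation reduces to an average pairwise cosine similarity between two sets of embeddings, as in the following sketch (assumed array shapes; the encoders themselves are the pre-trained models named above):

```python
import torch
import torch.nn.functional as F

def avg_pairwise_cosine(a, b):
    """a: (n, d) embeddings of masked 'ground truth' images;
    b: (m, d) embeddings of generated masked images (e.g. CLIP/DINO features)."""
    a, b = F.normalize(a, dim=-1), F.normalize(b, dim=-1)
    return (a @ b.T).mean().item()  # average over all n*m cosine similarities
```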
Figure 6: **Visualisation of generated concepts with the “SoTA” and our method**. Masks are derived from cross-attentions. Full ablation results are presented in Appendix A.3.
Figure 7: **The t-SNE projection of the learned embeddings**. Our method can effectively distinguish all learned concepts compared to the “SoTA” (full results in Appendix A.1).
### Applications: image editing over disentangled concepts.
Finally, we demonstrate that our ability to capture object-level embeddings enables more accurate object-level synthesis, editing (with methods such as that of Hertz et al. (2022)) and quantification (Figure 9, top-left). The framework also has the flexibility to handle per-image specified strings to learn the subtle differences or new object-level concepts within each example image, as shown in the top-right example of Figure 9. Furthermore, our method can also learn unknown concepts from challenging out-of-distribution images (Figure 9, bottom-left and bottom-right), opening an avenue for knowledge mining from pairs of textbook figures and captions, which are abundantly available on the internet.
## 5 Conclusions
We present _MCPL_, an innovative approach to prompt learning, designed to tackle the challenge of managing multiple concepts in scenes with multiple objects. This enables improved synthesis, editing, quantification, and understanding of multi-object relationships with greater precision at the object level. We empirically validated the constraints of utilising the preceding single-concept learning method in a multi-concept environment, which inspired the development of our _MCPL_ framework. Importantly, we illustrate that plain _MCPL_ is ineffective in learning multiple prompt-concept correlations accurately. To overcome this challenge and prevent trivial solutions, we introduce multiple regularisation terms during learning. We introduce a novel dataset and evaluation protocol for this new task. Our approach demonstrates robust and consistent learning of semantically disentangled concepts over extensive experiments. Notably, our method can be seamlessly integrated with existing prompt learning techniques without requiring architectural modifications.
Figure 8: **Embedding similarity of learned object-level concepts compared to the estimated “ground truth”.** We measure in both pre-trained text (BERT) and image encoder (CLIP, DINOv1 and DINOv2) spaces; each bar is an average of 40,000 pairwise cosine similarities. We also present a full object-level comparison in Appendix Section A.2.
Figure 9: **MCPL learning and composing capabilities. (top-left) learning and editing multiple concepts with a single string; (top-right) learning per-image different concepts with per-image specified string; (bottom-left) learning to disentangle multiple unseen concepts from cardiac MRI images; (bottom-right) learning to disentangle multiple unseen concepts from chest X-ray images.** |
2302.05519 | Multi objective Fitness Dependent Optimizer Algorithm | This paper proposes the multi objective variant of the recently introduced fitness dependent optimizer (FDO). The algorithm is called a Multi objective Fitness Dependent Optimizer (MOFDO) and is equipped with all five types of knowledge (situational, normative, topographical, domain, and historical knowledge) as in FDO. MOFDO is tested on two standard benchmarks for the performance-proof purpose; classical ZDT test functions, which is a widespread test suite that takes its name from its authors Zitzler, Deb, and Thiele, and on IEEE Congress of Evolutionary Computation benchmark (CEC 2019) multi modal multi objective functions. MOFDO results are compared to the latest variant of multi objective particle swarm optimization (MOPSO), non-dominated sorting genetic algorithm third improvement (NSGA-III), and multi objective dragonfly algorithm (MODA). The comparative study shows the superiority of MOFDO in most cases and comparative results in other cases. Moreover, MOFDO is used for optimizing real-world engineering problems (e.g., welded beam design problems). It is observed that the proposed algorithm successfully provides a wide variety of well-distributed feasible solutions, which enable the decision-makers to have more applicable-comfort choices to consider. | Jaza M. Abdullah, Tarik A. Rashid, Bestan B. Maaroof, Seyedali Mirjalili | 2023-01-26T06:33:53Z | http://arxiv.org/abs/2302.05519v1 |

# Multi-objective Fitness Dependent Optimizer Algorithm
###### Abstract
This paper proposes the multi-objective variant of the recently introduced fitness dependent optimizer (FDO). The algorithm is called the Multi-objective Fitness Dependent Optimizer (MOFDO) and is equipped with all five types of knowledge (situational, normative, topographical, domain, and historical knowledge), as in FDO. MOFDO is tested on two standard benchmarks to demonstrate its performance: the classical ZDT test functions, a widespread test suite that takes its name from its authors Zitzler, Deb, and Thiele, and the IEEE Congress on Evolutionary Computation (CEC-2019) multi-modal multi-objective functions. The MOFDO results are compared to the latest variant of multi-objective particle swarm optimization (MOPSO), the non-dominated sorting genetic algorithm third improvement (NSGA-III), and the multi-objective dragonfly algorithm (MODA). The comparative study shows the superiority of MOFDO in most cases and comparable results in the other cases. Moreover, MOFDO is used for optimizing real-world engineering problems (e.g., the welded beam design problem). It is observed that the proposed algorithm successfully provides a wide variety of well-distributed feasible solutions, which enables decision-makers to have more applicable and comfortable choices to consider.
Keywords: Artificial Intelligence, Swarm Intelligence, Fitness Dependent Optimizer, Multi-objective Optimization Algorithm, Welded beam design.
## 1 Introduction
Multi-objective optimization problems (MOPs) are in the area of multiple criteria decision-making; the area is also known as multi-objective programming, multi-criteria optimization, or Pareto optimization. Precisely, this area deals with mathematical optimization problems with two or more, often conflicting, objectives. Research on MOPs became widely popular in 2002 [1]. It is worth mentioning that various real-world applications fall under this area, such as engineering design, energy, economics, logistics, and health science. Since many real-world problems are MOPs, multi-objective evolutionary algorithms (MOEAs) are used to solve them. Generally speaking, MOEAs include three branches: dominance-based, decomposition-based, and indicator-based evolutionary algorithms (IBEAs) [2][3][4].
The first branch of MOEAs includes the non-dominated sorting genetic algorithm (NSGA-II) [5], multi-objective particle swarm optimization (MOPSO) [6], and the strength Pareto evolutionary algorithm 2 (SPEA2) [7]. These algorithms are considered posterior optimization algorithms, meaning that they maintain the multi-objective formulation of a multi-objective optimization problem and estimate the Pareto optimal solutions. MOEAs based on decomposition form the second branch [8]. Such algorithms use different weights to create different decompositions of the objectives of a multi-objective problem to estimate the Pareto optimal solutions.
The NSGA-II starts with a randomly generated population. It uses a fast non-dominated sorting technique to sort the overall population by fitness, and the children's generation is then produced by crossover and mutation. To enhance the diversity level among the solutions, a special crowding distance operator is also applied for later improvement [5]. Moreover, NSGA-II was improved to solve many-objective problems (having four or more objectives), known as NSGA-III [9]. Another popular algorithm with lower computational complexity than NSGA-III is MOPSO [6], which uses an archive grid-based approach to keep diversity among the solutions. In the last decade, many new MOEAs have been reported in the literature: multi-objective cat swarm optimization [10], the multi-objective CLONAL algorithm, which is inspired by the clonal selection theory of acquired immunity [11], multi-objective moth flame optimization [12], the multi-objective ant lion optimizer [13], multi-objective grey-wolf optimization [14], the multi-objective dragonfly algorithm [15], and the multi-objective whale algorithm [16]. Some MOEAs have been used for production scheduling [17], optimal truss design [18], and resource allocation in cognitive radio networks [19]. Furthermore, some other MOEAs were employed to classify the normal and aggressive behavior of 3D human beings [11].
Finally, the IBEAs have received considerable popularity due to their strong theoretical support and background [3]. IBEAs measure both the diversity and the convergence of non-dominated solutions in objective space, which are desirable in the context of multi-objective evolutionary optimization [20][2]. Since the indicator functions of IBEAs automatically recover the diversity among their population solutions, they do not need any diversity maintenance mechanism. The first hypervolume indicator-based EA, known as "hypervolume by slicing objectives (HSO)", was introduced by [21]; local search and hybrid evolutionary algorithms for Pareto optimization were also proposed by [22]. As a result, many IBEAs were developed using different procedures, such as preference-based information and different local search optimizers, among many others [23][24][25][26]. One obvious disadvantage of IBEAs is that they require additional time for calculating hypervolumes while dealing with many-objective problems; this issue has been addressed in several studies that proposed enhancements of IBEAs, such as faster algorithms for calculating the hypervolume [27][28][29][30]. To solve MOPs correctly, two factors need to be considered: firstly, the accuracy of the estimated Pareto optimal solutions; secondly, the diversity of the estimated Pareto optimal solutions. Multi-objective meta-heuristics need to address these two, often conflicting, factors. They typically start with a random population of solutions and improve them until the satisfaction of an end condition. The accuracy of the initial population increases over time [31], while some solutions will be favored over others due to them being "close" to the solutions already found in the objective space.
Many efforts were made regarding the local and global guide selection, such as using an adaptive grid to select the global guide and introducing an extra repository to store the non-dominated particles in MOPSO [6]. In another attempt, the global guide is selected using the crowding distance in crowding-distance MOPSO [32]. In [33], Mostaghim and Teich determined the local guide based on the sigma method, while Pulido and Coello [34] used a clustering technique for the same purpose. On the other hand, genetic operators and special domination principles have been employed in terms of population
diversity to improve the Pareto front variety. Zitzler proposed the elitism mechanism that uses crossover and mutation on individuals selected from the combination of the population and the repository [35], while Laumanns proposed \(\epsilon\)-box dominance to combine diversity and convergence [36]. In [37] and then [31], a simulated binary crossover (SBX) is used. Additionally, in Pareto entropy MOPSO, a cell-distance-based individual density is used to select the global guide [38]. Moreover, a hybrid grey wolf optimizer is used to increase the efficiency of complex industrial system designs [39]. Furthermore, [40] developed a modified genetic algorithm (MGA) and used it for a ship routing and scheduling model. Another modification of GA has been used by [41], utilizing a reduced-order model for preliminary reactor design optimization. Recently, a new multi-objective learner performance-based behavior algorithm (MOLPB) was used to solve the four-bar truss design problem, the pressure vessel design problem, the coil compression spring design problem, the speed reducer design problem, and the car side impact design problem [42].
According to [43], a cultural algorithm is formed of five types of knowledge: situational, normative, topographical, domain, and historical knowledge. These five types are explained briefly in the following list:
1. Situational knowledge is a set of objects useful for the experience interpretation of all individuals in a certain population. In other words, situational knowledge guides individuals to move toward exemplars (best local or best global guides).
2. Normative knowledge: includes a set of promising ranges of decision variables. It offers strategies for individual adjustments. More precisely, it leads individuals to dive into a good range.
3. Topographical knowledge: it splits the entire feasible search landscape into cells. Each of these cells represents a different spatial characteristic; also, each cell selects the best individual within its specific ranges. Put simply, topographical knowledge leads individuals toward the best cell.
4. Domain knowledge: it records information about the problem domain to guide the whole search; it is considered useful during the search process.
5. Historical knowledge: it records the key events in the search process by keeping track of significant individuals' history. Key events might be a big move in the search space or notable changes in the search landscape. Individuals use historical knowledge to select a preferable direction.
A large body of research has been conducted in the field of nature-inspired metaheuristic algorithms, and many efficient algorithms have been proposed in the literature. Nevertheless, there is always room for new algorithms, as long as the proposed algorithm provides better or comparable performance, as explained by [44] in their 1997 work titled "No Free Lunch Theorems for Optimization". Thus, there is no single global algorithm that can provide the optimum solution for every optimization problem. This work presents a multi-objective mode of the currently existing single-objective algorithm called FDO [45]. One major limitation of many MOEAs is that they tend to easily fall into local optima in high-dimensional spaces and have a low convergence rate in the iterative process [46]. In MOFDO, the fitness weight and weight factor parameters are used to increase both the coverage and the convergence of the algorithm (more on this is discussed in Section 2.2); storing previous good paces for later reuse also improves the convergence speed. For these reasons, a new algorithm called MOFDO is proposed in this work. This algorithm is inspired by the swarming behavior of bees during the reproductive process when they search for new hives. The proposed algorithm has nothing in common with the artificial bee colony (ABC) algorithm (except that both algorithms are inspired by bee behavior, and both are nature-inspired meta-heuristic algorithms).
Regarding this paper's major contributions, a novel multi-objective mode of the single-objective FDO algorithm is proposed. One of the major contributions of this work is developing an MOEA that has linear time and space complexity (more on this is discussed in Section 2.4). Moreover, besides having an archive for saving the Pareto front solutions, a polynomial mutation mechanism is employed as a variation operator. Furthermore, extra storage is used for saving previous paces for potential reuse in the next iterations, which improves the algorithm's performance. Additionally, hypercube grids are used in the implementation to help select the local and global guide individuals.
The rest of this paper is organized as follows. Section 2 explains the methodology and theoretical calculations, including the definitions of the Pareto set, the Pareto optimality set, and the Pareto front set; moreover, MOFDO is proposed and mathematically explained in detail. Section 3 presents the results and discussion. In Section 4, MOFDO is employed to solve a real-world engineering problem (the welded beam design problem). Finally, the conclusions are outlined in Section 5.
## 2 Methodology
In this section, some of the preliminaries and essential definitions of multi-objective optimization are covered. The MOFDO algorithm is presented mathematically and programmatically in detail, at a level that allows other researchers to easily replicate our work.
### Pareto Optimal Solutions Set
Mathematically speaking, MOPs can be represented as follows, with no loss of generality:
minimize:
\[\vec{F}(\vec{x})=\{f_{1}(\vec{x}),\ f_{2}(\vec{x}),\dots,f_{n}(\vec{x})\} \tag{1}\]
subject to:
\[g_{i}(\vec{x})\leq 0,\qquad i=1,2,\dots,m\]
\[h_{i}(\vec{x})=0,\qquad i=1,2,\dots,p\]
where \(n\) is the number of objectives, \(g\) and \(h\) are the constraint functions, \(m\) is the number of inequality constraints, and \(p\) is the number of equality constraints [47].
This type of problem cannot be optimized with a traditional single-objective algorithm, not just because of its multi-objective nature, but also because of the conflicting objectives within the same optimization problem, which means there is no single optimum solution. Instead, there is a set of optimal solutions known as the Pareto optimal solution set, representing the best trade-offs between the objectives. For readability purposes, Pareto optimality is discussed briefly in this sub-section. Pareto optimal solutions can be explained using the following definitions [48]:
* Def. #1: for two vectors (solutions) \(\vec{a}\) and \(\vec{b}\) in a minimization problem, \(\vec{a}\leq\vec{b}\) if every objective value of \(\vec{a}\) is smaller than or equal to the corresponding objective value of \(\vec{b}\), and there is at least one \(i\in\{1,2,...,m\}\) with \(a_{i}<b_{i}\).
* Def. #2: if \(\vec{a}\leq\vec{b}\), then \(\vec{a}\) dominates \(\vec{b}\), denoted by \(\vec{a}\prec\vec{b}\).
* Def. #3: two solutions might not dominate each other if Def. #1 does not apply; in this case, solutions \(\vec{a}\) and \(\vec{b}\) are non-dominated with respect to each other, denoted as \(\vec{a}\nprec\vec{b}\). The set of all non-dominated solutions is known as the Pareto optimal solution set \(P_{s}\), defined as equation (2):
\[P_{s}\coloneqq\{\vec{a}\in A\ |\ \nexists\ \vec{b}\in A:\vec{b}\prec\vec{a}\} \tag{2}\]
* Def. #4: the set holding the corresponding objective values of the Pareto optimal solutions in \(P_{s}\) is known as the Pareto optimal front \(P_{f}\), defined as equation (3):
\[P_{f}\coloneqq\{F(\vec{a})\ |\ \vec{a}\in P_{s}\} \tag{3}\]
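For illustration, the following minimal Python sketch implements the dominance relation of Defs. #1-#2 and the non-dominated filtering of equation (2) for a minimization problem:

```python
def dominates(a, b):
    """True if solution a Pareto-dominates b (minimization, Defs. #1-#2)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_set(solutions):
    """Return the non-dominated subset (P_s, Eq. (2)) of a list of objective vectors."""
    return [a for a in solutions if not any(dominates(b, a) for b in solutions)]

# example: a two-objective trade-off; (4, 4) is dominated by (2, 2)
print(pareto_set([(1, 5), (2, 2), (3, 1), (4, 4)]))  # -> [(1, 5), (2, 2), (3, 1)]
```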
### Multi-Objective Fitness Dependent Optimizer
The multi-objective fitness dependent optimizer (MOFDO) is based on our recent work, the single-objective fitness dependent optimizer (FDO) [45]. FDO is a metaheuristic algorithm inspired by the bee swarming reproductive process and the bees' collective decision-making. FDO updates the individual position by adding a \(pace\) to the current position, as shown in equation (4); the same mechanism is also applied in MOFDO. However, to calculate the \(pace\), the conditions presented in Equations (5, 6, and 7) need to be considered, and these conditions depend on the fitness weight (\(fw\)) value. \(fw\) can be calculated from the problem cost function values according to Equation (8). It is worth mentioning that the \(pace\) represents both domain and historical knowledge in MOFDO.
\[X_{i,t+1}=X_{i,t}+pace \tag{4}\]
where \(i\) represents the current individual number, \(t\) represents the current iteration, \(x\) is the individual itself, and \(pace\) is the movement rate and direction. Recall that the \(pace\) value mostly relies on \(fw\); however, the direction of \(pace\) (the sign of its value) depends entirely on a random mechanism.
\[pace=x_{i,t}\cdot r,\qquad\text{if }fw=1\text{ or }fw=0\text{ or }\sum_{o=1}^{n}x_{i,t\,fitness_{o}}=0 \tag{5}\]
\[pace=(x_{i,t}-x_{i,t}^{*})\cdot fw\cdot(-1),\qquad\text{if }0<fw<1\text{ and }r<0 \tag{6}\]
\[pace=(x_{i,t}-x_{i,t}^{*})\cdot fw,\qquad\text{if }0<fw<1\text{ and }r\geq 0 \tag{7}\]
Equations (5, 6, and 7) cover two different conditions. Firstly, if \(fw\) is equal to zero or one, or if \(\sum_{o=1}^{n}x_{i,t\,fitness_{o}}=0\), then the \(pace\) is calculated as in Equation (5). Secondly, if the \(fw\) value lies between zero and one, then a random number \(r\) is generated in the [-1, 1] range; if \(r\) is negative, Equation (6) is used; otherwise, Equation (7) is used to calculate the \(pace\). \(fw\) can be computed using Equation (8):
\[fw=\left|\frac{\sum_{o=1}^{n}x_{i,t\,fitness_{o}}^{*}}{\sum_{o=1}^{n}x_{i,t\,fitness_{o}}}\right|-wf \tag{8}\]
where \(\sum_{o=1}^{n}x_{i,t\,fitness_{o}}^{*}\) is the sum of the cost-function values of the global best individual, with \(n\) the number of objectives and \(o=\{1,2,\dots,n\}\); \(\sum_{o=1}^{n}x_{i,t\,fitness_{o}}\) is the sum of the current individual's cost-function values. Finally, \(wf\) in Equation (8) is a weight factor whose value is either 0 or 1. One may notice that when \(wf=0\), it does not affect the equation and can be ignored. Interested readers are referred to [45] for more details about the single-objective FDO.
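As a concrete illustration, the following minimal Python sketch (an illustrative reading of Equations (4-8), not the authors' released code) computes one MOFDO movement; the handling of \(fw\) values outside the (0, 1) range is an assumption that follows the clamping used in the pseudocode of Figure (2):

```python
import random

def fdo_step(x, x_best, fit_sum, best_fit_sum, wf=0):
    """One MOFDO movement (Eqs. 4-8). x, x_best: current and global-best
    position vectors; fit_sum, best_fit_sum: their summed objective values."""
    r = random.uniform(-1, 1)
    fw = abs(best_fit_sum / fit_sum) - wf if fit_sum != 0 else 1.0   # Eq. (8)
    if fit_sum == 0 or fw <= 0 or fw >= 1:                           # boundary cases
        pace = [xi * r for xi in x]                                  # Eq. (5)
    elif r < 0:
        pace = [(xi - bi) * fw * -1 for xi, bi in zip(x, x_best)]    # Eq. (6)
    else:
        pace = [(xi - bi) * fw for xi, bi in zip(x, x_best)]         # Eq. (7)
    new_x = [xi + pi for xi, pi in zip(x, pace)]                     # Eq. (4)
    return new_x, pace  # the pace is kept for potential reuse in later iterations
```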
Although the algorithm structure is to some extent the same as the single-objective FDO, there are several additional improvements in MOFDO, as follows:
1. An archive (repository) is used for holding Pareto front solutions during optimization, as it has been widely used in the literature for this purpose [5].
2. Before adding a non-dominated solution to the archive, a polynomial mutation is applied to the Pareto front solutions (see the code sketch after this list). The polynomial mutation has been employed in MOEAs as a variation operator [49], and it is defined as Equation (9) [50]:
\[S_{i}=(x_{1},x_{2},\dots,x_{n})\]
\[S_{Ni}(x_{j})=S_{i}(x_{j})+\alpha\,\beta_{max}(x_{j}),\qquad i=1,2,\dots,NP,\quad j=1,2,\dots,n\]
\[\alpha=\begin{cases}(2v)^{\frac{1}{q+1}}-1,&v<0.5\\ 1-\big{(}2(1-v)\big{)}^{\frac{1}{q+1}},&\text{otherwise}\end{cases}\]
\[\beta_{max}(x_{j})=\max\big{[}S_{i}(x_{j})-l_{j},\ u_{j}-S_{i}(x_{j})\big{]},\qquad i=1,2,\dots,NP,\quad j=1,2,\dots,n \tag{9}\]
where \(S_{Ni}\) is the new solution, \(S_{i}(x_{j})\) is the current solution, \(\beta_{max}(x_{j})\) is the maximum perturbation acceptable between the original and mutated solution, \(NP\) is the population size, \(q\) is a positive real number, \(v\) is a uniformly distributed random number in the [0, 1] range, \(l\) is the lower boundary of decision variable \(x\), \(u\) is the upper boundary of decision variable \(x\), and \(n\) is the number of decision variables (problem dimensions).
3. In MOPs, the fittest solution cannot simply be chosen as the global guide (normative knowledge), as might be the case in single-objective optimization, since there is more than one objective and these objectives usually conflict with each other. Therefore, selecting a global guide requires a more careful decision. In this work, the global guide individual, denoted by \(x^{*}\), is a global-best non-dominated solution selected from the least populated region by the artificial scout bees, similar to the work of [51] on MOPSO. For this purpose, a mechanism called the archive controller is used to divide the archive into multiple equally sized grids (sub-hyper-spheres in multi-dimensional problems) [15]; in this work, these are called hypercube grids, which represent the use of topographical knowledge in MOFDO. The hypercube grid mechanism allows the algorithm to determine the least populated area simply by counting the number of solutions in each grid. The global best solution is selected from the least populated area; see Figure (1).
The reason for selecting the global guide from the least populated area is to maintain good diversity in the obtained Pareto front solutions. As a result, decision-makers will have more diverse choices (solutions) to consider. Nevertheless, the archive has a limited size. When a new non-dominated solution is found and the archive has already reached its maximum capacity, the archive controller removes the worst solution from the most populated grid. Hence, the newly discovered solution can fit in, as long as it is better than the archive's worst solution.
4. Regarding the selection of the personal guide (situational knowledge), the same hypercube grid mechanism is used for dividing the search landscape into equally sized cells; then, inside each cell, the best personal solution is selected as the local guide.
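The following minimal Python sketch illustrates the polynomial mutation of Equation (9) referenced in item 2 above (the paper's "Polynomial Mutation Rate = 5" is assumed here to play the role of the distribution index \(q\); clipping the mutated value to the variable bounds is added as a safeguard):

```python
import random

def polynomial_mutation(s, lower, upper, q=5):
    """Polynomial mutation of one solution per Eq. (9)."""
    out = []
    for x, l, u in zip(s, lower, upper):
        v = random.random()                           # v ~ Uniform[0, 1]
        if v < 0.5:
            alpha = (2 * v) ** (1 / (q + 1)) - 1
        else:
            alpha = 1 - (2 * (1 - v)) ** (1 / (q + 1))
        beta_max = max(x - l, u - x)                  # maximum acceptable perturbation
        out.append(min(max(x + alpha * beta_max, l), u))  # mutate, then clip to bounds
    return out

print(polynomial_mutation([0.5, 0.2], lower=[0, 0], upper=[1, 1]))
```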
### Multi-Objective Fitness Dependent Optimizer Working Mechanisms
The MOFDO starts by randomly distributing the search individuals over the search space, as presented in the pseudocode of Figure (2); more explanations are given in the flowchart of Figure (3). Then, an archive with a specific size is created, and the hypercube grids are generated. From here, the main algorithm loop starts in Line (4), which mainly depends on a specific number of iterations or runs until a certain condition is met. In Line (5), for each search individual (artificial scout bee), the operations in Lines (6 to 27) are repeated according to the number of individuals. The mentioned operations include: finding the global best search agent; finding \(fw\) using Equation (8); applying, in Lines (10 to 12), the conditions of Equations (5, 6, and 7) to calculate the \(pace\); and then, in Line (13), calculating the new search agent position using Equation (4). When the new search agent position is discovered, the algorithm always checks whether the new result (cost function) dominates the old result or not (Line 14). If it does, the new position is accepted and the \(pace\) is stored for potential reuse in the future, as shown in Line (15). If it does not, and a previously saved \(pace\) is available, that \(pace\) is used instead of the new one, in the hope of generating a better result; otherwise, the search agent maintains its current position (see Lines 17 to 22). The polynomial mutation is applied to obtain more varied solutions in Line (24), and the algorithm then checks whether the solution can fit inside the archive or not in Lines (25 and 26). The hypercube grid indices are always updated according to the search landscape changes in Line (27).
Figure 1: Pareto solutions, the Pareto front, and the hypercube grids, which help in selecting the global and local guides.
```
 1: Initialize the artificial scout bee population X_{i,t}, i = 1, 2, ..., n, and t = 1, 2, ..., m
 2: Create archive for non-dominated solutions
 3: Generate hypercube grid
 4: while iteration limit (m) not reached (or solution good enough) do
 5:     for each artificial scout bee X_{i,t} do
 6:         find best artificial scout bee X*_{i,t}
 7:         generate random walk r in the [-1, 1] range
 8:         calculate fitness weight value, Equation (8)
 9:         // checking the conditions of Equations (5, 6, and 7)
10:         if (fitness_weight >= 1 or fitness_weight <= -1 or sum_{o=1..n} x_{i,t fitness_o} = 0)
11:             fitness_weight = r
12:         end if
13:         calculate X_{i,t+1}, Equation (4)
14:         if (X_{i,t+1} fitnesses dominate X_{i,t} fitnesses)
15:             move accepted and pace saved
16:         else
17:             calculate X_{i,t+1}, Equation (4), with previous pace
18:             if (X_{i,t+1} fitnesses dominate X_{i,t} fitnesses)
19:                 move accepted and pace saved
20:             else
21:                 maintain current position (do not move)
22:             end if
23:         end if
24:         apply polynomial mutation
25:         add non-dominated bees (solutions) to archive
26:         keep only non-dominated members in the archive
27:         update hypercube grid indices
28:     end for
29: end while
```
Figure 2: MOFDO Pseudocode
Figure 3: Flowchart showing how MOFDO works programmatically.
### Multi-Objective Fitness Dependent Optimizer Algorithm Time and Space Complexity
Generally, computational complexity is mainly concerned with the time and space required to solve a given problem. Regarding MOFDO's mathematical complexity, it has an O(\(p\cdot n+p\cdot CF\)) linear time complexity in each iteration, where \(p\) is the population size, \(n\) is the dimension of the problem, and \(CF\) is the cost of the objective function. It has an O(\(p\cdot CF+p\cdot pace\)) space complexity over all iterations, where \(pace\) refers to the best previous paces stored. Thus, MOFDO's time complexity is proportional to the number of iterations, whereas its space complexity remains the same across iterations.
Nonetheless, MOFDO has a simple objective-value calculation mechanism: only a random number and the fitness weight need to be calculated for each agent. In MOPSO, by contrast, calculating each solution involves the global best, the agent's personal best, the search factors C1 and C2, and the random numbers R1 and R2 [33]. Also, in MODA, there are five different parameter weights to be calculated (attraction, distraction, separation, alignment, and cohesion, plus some random values), and most of these parameters have an accumulative nature (summation and multiplication) whose values depend on all other agents' values, resulting in even more complex calculations [14]. Finally, according to Curry and Dagli, NSGA-III has a mathematical complexity of O(\(n_{o}\cdot n_{p}^{2}\cdot n_{g}\)), where \(n_{o}\) is the number of objectives, \(n_{p}\) is the population size, and \(n_{g}\) is the number of generations; \(n_{g}\) can range from constant to \(n_{p}\) depending upon the stopping criteria used [52]. From this, it can be seen that NSGA-III has an order of \(n^{2}\) complexity, which is higher than MOFDO's linear complexity.
## 3 Results and Discussion
For testing the MOFDO algorithm's performance, two different types of multi-objective test functions were selected: the classical ZDT benchmarks [35] and the CEC-2019 multi-modal multi-objective benchmarks [53]. The MOFDO results are compared to the results of the latest state-of-the-art MOPSO, NSGA-III, and the modern multi-objective dragonfly algorithm (MODA) [15].
### Classical ZDT Benchmark Results.
This benchmark includes five well-known challenging test functions, ZDT1 to ZDT5; their mathematical definitions are presented in Table (7) (see Appendix). The MOFDO results are compared to three well-known algorithms: MOPSO, MODA, and NSGA-III. The results are shown in Table (1). Each algorithm is allowed to run for 500 iterations, each equipped with an initial population of 100 search individuals and an archive size of 100. The parameter settings for each algorithm are as described in their original papers [15][53][5]. However, the parameter settings for MOFDO are:
_Polynomial Mutation Rate = 5._
_Number of Grids per Dimension = 7._
_Best Bee Selection Factor = 2._
_Delete Factor =2._
_Inflation Rate =1._
The inverse generational distance (IGD), shown in equation (10), is a measurement which uses the true Pareto front of the problem as a reference and compares each of its elements with the \(P_{f}\) produced by the algorithm, as described by [54].
\[IGD=\frac{\sqrt{\sum_{i=1}^{n}d_{i}^{2}}}{n} \tag{10}\]
where \(d_{i}\) is the Euclidean distance between the closest obtained Pareto optimal solution and the \(i^{th}\) true Pareto optimal solution in the reference set, and \(n\) is the number of true Pareto optimal solutions. It should be clear that a value of \(\text{IGD}=0\) indicates that all the elements generated lie on the true Pareto front of the problem. The IGD results of 30 independent runs are collected for each algorithm; then the average (mean), standard deviation (STD), best, and worst IGD are calculated, see Table (1).
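For reference, a minimal Python sketch of Equation (10) (assuming the true and obtained Pareto fronts are given as NumPy arrays of objective vectors) is:

```python
import numpy as np

def igd(true_pf, obtained_pf):
    """Eq. (10): true_pf (n, k) and obtained_pf (m, k) arrays of objective vectors."""
    diffs = true_pf[:, None, :] - obtained_pf[None, :, :]
    d = np.linalg.norm(diffs, axis=-1).min(axis=1)  # closest obtained point per true point
    return np.sqrt((d ** 2).sum()) / len(true_pf)
```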
\begin{table}
\begin{tabular}{c|c|c|c|c|c} \hline \multirow{2}{*}{**Functions**} & \multirow{2}{*}{**Algorithms**} & \multirow{2}{*}{**IGD AVG.**} & \multirow{2}{*}{**IGD STD.**} & \multirow{2}{*}{**IGD Best**} & \multirow{2}{*}{**IGD Worst**} \\ \cline{3-3} \cline{5-6} & & **MOFDO** & **0.06758** & **0.030911** & **0.0018** & **2.61533** \\ \cline{2-6} & \multirow{2}{*}{MODA} & 0.07653 & 0.012071 & 0.0420 & 0.59398 \\ \cline{2-6} & & **MOPSO** & 0.07843 & 0.008848 & 0.0446 & 1.14508 \\ \cline{2-6} & \multirow{2}{*}{NSGA-III} & 0.52599 & 0.509184 & 0.0134 & 3.69236 \\ \hline \multirow{2}{*}{ZDT 2} & \multirow{2}{*}{MOFDO} & 0.03511 & 0.00404 & 0.0207 & 0.0515 \\ \cline{2-6} & \multirow{2}{*}{**MODA**} & **0.00292** & **0.00026** & **0.0002** & **0.0116** \\ \cline{2-6} & \multirow{2}{*}{MOPSO} & 0.03243 & 0.00093 & 0.0212 & 0.0682 \\ \cline{2-6} & \multirow{2}{*}{NSGA-III} & 0.13972 & 0.02626 & 0.1148 & 0.1834 \\ \hline \multirow{2}{*}{ZDT 3} & **MOFDO** & **0.06676** & **0.023913** & **0.0014** & **2.2206** \\ \cline{2-6} & \multirow{2}{*}{MODA} & 0.07653 & 0.014411 & 0.0401 & 0.8267 \\ \cline{2-6} & \multirow{2}{*}{MOPSO} & 0.07758 & 0.005755 & 0.0427 & 1.0355 \\ \cline{2-6} & \multirow{2}{*}{NSGA-III} & 0.19474 & 0.080043 & 0.1935 & 0.1962 \\ \hline \multirow{2}{*}{ZDT4} & \multirow{2}{*}{MOFDO} & 0.68020 & 0.352945 & 0.2679 & 1.6776 \\ \cline{2-6} & \multirow{2}{*}{MODA} & 64.9628 & 2.847807 & 51.742 & 500.93 \\ \cline{2-6} & \multirow{2}{*}{**MOPSO**} & **0.46175** & **0.047785** & **0.2515** & **5.1602** \\ \cline{2-6} & \multirow{2}{*}{NSGA-III} & 0.73731 & 0.307518 & 0.7360 & 0.7387 \\ \hline \multirow{2}{*}{ZDT5} & \multirow{2}{*}{MOFDO} & 0.35853 & 0.161795 & 0.1221 & 1.9125 \\ \cline{2-6} & \multirow{2}{*}{MODA} & 0.11349 & 0.018270 & 0.0142 & 2.3938 \\ \cline{2-6} & \multirow{2}{*}{**MOPSO**} & **0.26862** & **0.136598** & **0.0468** & **2.1894** \\ \cline{2-6} & \multirow{2}{*}{NSGA-III} & 0.66397 & 0.235754 & 0.66273 & 0.66545 \\ \hline \end{tabular}
\end{table}
Table (1): IGD results for the ZDT benchmark functions over 30 independent runs.

Table (2): The ranking table shows the algorithms' performances in Table (1).
Here, ranking tables are used to show the rank of each algorithm (see Tables (2 and 4)). For example, MOFDO came in first position on ZDT1, so its rank is 1; MOFDO came in third position on ZDT2, so its rank is 3, and so on. The total rank is the sum of all ranks acquired by a given algorithm, and the ranking table is a simple way to show the superiority of a given algorithm among the group of competing algorithms. Also, the ranking tables were used in the Friedman test for all test functions, which shows whether the results are statistically significant or not (see Section 3.3).

Figure 4: shows how MOFDO solved the ZDT3 test function from an initially random solution toward Pareto front optimality.
As can be seen in the ranking table, Table (2), MOFDO outperforms MOPSO and NSGA-III in most cases, while providing comparable results to MODA; MOFDO and MODA both achieve a total ranking of 10. Figure (4) shows the ZDT3 solution landscape as an example of MOFDO performance. Figure 4(a) shows the initially randomly distributed solutions, of which only 23 lie on the \(P_{f}\); then, throughout the iterations, MOFDO successfully increased the number of well-distributed \(P_{f}\) solutions, as shown in Figures 4(b, c, and d).
### CEC 2019 Multi-Modal Multi-Objective Benchmarks
A set of 12 CEC-2019 multi-modal multi-objective (MMO) benchmark functions is selected, as described by [55]; their mathematical definitions are shown in Table (8) (see Appendix). The reason for selecting this benchmark is that these test functions represent a more difficult challenge for MOFDO than the ZDT benchmark; they have different characteristics, such as problems with different shapes of PSs and PFs, the coexistence of local and global PSs, and a scalable number of PSs, decision variables, and objectives. The MOFDO results are compared to MOPSO, MODA, and NSGA-III, as shown in Table (3). The results are summarised by the ranking system in Table (4), which shows that MOFDO ranked in first place with superior results in most cases; MODA comes in second place, followed by MOPSO and NSGA-III.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline \multirow{2}{*}{**Functions**} & \multirow{2}{*}{**Algorithms**} & \multirow{2}{*}{**IGD AVG.**} & \multirow{2}{*}{**IGD STD.**} & \multirow{2}{*}{**IGD Best**} & \multirow{2}{*}{**IGD Worst**} \\ \hline \multirow{4}{*}{MMF1} & MOFDO & 0.18401921 & 0.0454458 & 0.0882267 & 2.2406685 \\ \cline{2-6} & MODA & 0.87703300 & 0.5302916 & 0.3618665 & 9.6322659 \\ \cline{2-6} & MOPSO & 0.32173518 & 0.1108645 & 0.1419164 & 0.5425832 \\ \cline{2-6} & **NSGA-III** & **0.00351527** & **0.0005796** & **0.0022760** & **0.0049877** \\ \hline \multirow{4}{*}{MMF2} & **MOFDO** & **0.09108902** & **0.0237087** & **0.0377645** & **0.9732300** \\ \cline{2-6} & MODA & 0.41152959 & 0.3041183 & 0.0883137 & 16.525584 \\ \cline{2-6} & MOPSO & 0.27264430 & 0.0606373 & 0.1906970 & 1.1093294 \\ \cline{2-6} & NSGA-III & N/A & N/A & N/A & N/A \\ \hline \multirow{4}{*}{MMF3} & **MOFDO** & **0.09121177** & **0.0184429** & **0.0412612** & **1.0219979** \\ \cline{2-6} & MODA & 0.38999723 & 0.3195349 & 0.0720422 & 20.350064 \\ \cline{2-6} & MOPSO & 0.46594619 & 0.1566197 & 0.3377674 & 1.7063454 \\ \cline{2-6} & NSGA-III & 6.22408148 & 2.4773195 & 0.0523665 & 17.564674 \\ \hline \multirow{4}{*}{MMF4} & MOFDO & 0.08195533 & 0.0340485 & 0.0453016 & 0.1879044 \\ \cline{2-6} & **MODA** & **0.00781723** & **0.0038766** & **0.0003086** & **0.0319178** \\ \cline{2-6} & MOPSO & 0.06806595 & 0.0056268 & 0.0437656 & 0.1621893 \\ \cline{2-6} & NSGA-III & 0.04784677 & 0.0102452 & 0.0054074 & 0.1314727 \\ \hline \end{tabular}
\end{table}
Table (3): CEC 2019 MMF Benchmark results
\begin{tabular}{|c|c|c|c|c|c|} \hline \multirow{2}{*}{MMF5} & **MOFDO** & **0.08166825** & **0.0203041** & **0.0410373** & **0.2872334** \\ \cline{2-6} & MODA & 0.20697935 & 0.1068910 & 0.1177007 & 0.2665746 \\ \cline{2-6} & MOPSO & 0.36516458 & 0.0818221 & 0.1431733 & 0.6443415 \\ \cline{2-6} & NSGA-III & 0.23876644 & 0.1195411 & 0.2370972 & 0.2405229 \\ \hline \multirow{2}{*}{MMF6} & **MOFDO** & **0.06319825** & **0.0052717** & **0.0435369** & **0.3023473** \\ \cline{2-6} & MODA & 6.20722767 & 7.0529532 & 3.8844812 & 18.481523 \\ \cline{2-6} & MOPSO & 0.57534658 & 0.2975283 & 0.2154710 & 1.0872217 \\ \cline{2-6} & NSGA-III & 0.70090140 & 0.2650748 & 0.6990628 & 0.7024793 \\ \hline \multirow{2}{*}{MMF7} & **MOFDO** & **0.14853951** & **0.0219769** & **0.0870897** & **0.2362694** \\ \cline{2-6} & MODA & 0.36139133 & 0.1036987 & 0.1322042 & 1.0331912 \\ \cline{2-6} & MOPSO & 0.33104321 & 0.0493751 & 0.1186243 & 0.4153237 \\ \cline{2-6} & NSGA-III & 0.40264339 & 0.1635305 & 0.4014061 & 0.404124 \\ \hline \multirow{2}{*}{MMF8} & MOFDO & 0.15550447 & 0.0869989 & 0.0343290 & 1.8046810 \\ \cline{2-6} & MODA & 0.08058735 & 0.2652865 & 0.0056401 & 10.189273 \\ \cline{2-6} & MOPSO & 0.14156195 & 0.1003047 & 0.0450518 & 2.6718013 \\ \cline{2-6} & **NSGA-III** & **0.01038634** & **0.0032172** & **0.0041821** & **0.0276770** \\ \hline \multirow{2}{*}{MMF9} & MOFDO & 0.47321267 & 0.1219659 & 0.3404164 & 1.7427477 \\ \cline{2-6} & **MODA** & **0.05060995** & **0.0274896** & **0.0051946** & **0.3219936** \\ \cline{2-6} & MOPSO & 1.33275589 & 0.1427530 & 0.7792982 & 2.0541908 \\ \cline{2-6} & NSGA-III & 0.96369603 & 0.3011030 & 0.0027674 & 0.2438216 \\ \hline \multirow{2}{*}{MMF10} & MOFDO & 0.44207841 & 0.1277887 & 0.3104489 & 1.1022388 \\ \cline{2-6} & **MODA** & **0.09017605** & **0.0385574** & **0.0039308** & **0.3893014** \\ \cline{2-6} & MOPSO & 1.00054897 & 0.1542964 & 0.7005662 & 1.4956293 \\ \cline{2-6} & NSGA-III & 3.89641261 & 4.6634273 & 0.0028740 & 4.4067242 \\ \hline \multirow{2}{*}{MMF11} & **MOFDO** & **0.09260275** & **0.0209854** & **0.0635536** & **0.2692325** \\ \cline{2-6} & MODA & 0.09291338 & 0.0515551 & 0.0042148 & 0.4962967 \\ \cline{2-6} & MOPSO & 1.30789085 & 0.1622864 & 0.6847497 & 2.2372910 \\ \cline{2-6} & NSGA-III & 1.18058557 & 0.7034533 & 0.0034136 & 4.2312541 \\ \hline \multirow{2}{*}{MMF12} & MOFDO & 0.08314653 & 0.0217281 & 0.0554114 & 0.2901663 \\ \cline{2-6} & **MODA** & **0.03661122** & **0.0119014** & **0.0035018** & **0.1841556** \\ \cline{2-6} & MOPSO & 0.13651933 & 0.0237385 & 0.0667090 & 0.3416214 \\ \cline{2-6} & NSGA-III & 0.35064339 & 0.1613096 & 0.3494061 & 0.352124 \\ \hline \end{tabular}
Table (4): The ranking table showing the algorithms' performances reported in Table (3)
To determine whether the results of Tables (1 and 3) are statistically significant, the Wilcoxon rank-sum test has been conducted to find the p-values between MOFDO and the other algorithms. As presented in Table (5), the majority of the results in Table (1) (ZDT benchmark results) are statistically significant, since the p-value is smaller than 0.05.
Similarly, Table (6) shows the significance level of the results in Table (3) (CEC 2019 benchmark results); as can clearly be seen, the results are statistically significant in almost all cases, except for three cases in MMF8 and MMF9.
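To make the pairwise test concrete, the snippet below sketches how such a comparison can be reproduced with SciPy. The per-run IGD arrays are hypothetical placeholders, since the raw runs behind Tables (5 and 6) are not reproduced here.

```python
# A minimal sketch of the pairwise Wilcoxon rank-sum check, assuming two
# arrays of per-run IGD values (synthetic placeholders, not the paper's runs).
import numpy as np
from scipy.stats import ranksums

rng = np.random.default_rng(0)
mofdo_igd = rng.normal(loc=0.09, scale=0.02, size=30)  # MOFDO-like IGD runs
rival_igd = rng.normal(loc=0.27, scale=0.06, size=30)  # MOPSO-like IGD runs

stat, p_value = ranksums(mofdo_igd, rival_igd)
print(f"p-value = {p_value:.3e}, significant = {p_value < 0.05}")
```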
### Friedman Test for the Results
The Friedman test has been used to show that the results in Tables (1 and 3) are statistically significant. It analyzes whether there are statistically significant differences between three or more dependent samples; it is the non-parametric counterpart of the analysis of variance with repeated measures [56].
In the Friedman test, there are two hypotheses: the null hypothesis, that there are no significant differences between the dependent groups, and the alternative hypothesis, that there is a significant difference between the dependent groups. The Friedman test does not use the true values but the ranks of the values, and it is calculated as in Equation (11).
\[x_{r}^{2}=\frac{12}{nk(k+1)}\sum R^{2}-3n(k+1) \tag{11}\]
Where \(x_{r}^{2}\) is the Chi-square statistic, \(n\) is the number of test functions, \(k\) represents the number of groups (number of compared algorithms), and \(R\) is the total rank (rank sum) of each group, taken from Tables (2 and 4). From here, to find the decision rule, the degree of freedom is needed, which is _df = k - 1 = 4 - 1 = 3_; according to the Chi-square distribution table, the critical value for df = 3 is 7.815 at a significance level of 0.05 [57].
The Friedman test calculation for Tables (2 and 4) together would be:
Total rank \(R\): _MOFDO = 32, MODA = 35, MOPSO = 47_ and _NSGA-III = 56_
_n = 17_, since there are 17 test function results across Tables (2 and 4), and \(k=4\)
\[x_{r}^{2}=\frac{12}{17\cdot 4(4+1)}\left(32^{2}+35^{2}+47^{2}+56^{2}\right)-3\cdot 17(4+1)\]
\(x_{r}^{2}=13.0235\) and the p-value is 0.00459.
According to the Friedman test, the null hypothesis is rejected, since the computed statistic exceeds the 7.815 critical value of the Chi-square distribution, which means the results are statistically significant at a p-value \(<\) 0.05.
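The computation above can be checked numerically; the short snippet below evaluates Equation (11) with the quoted total ranks and recovers the same statistic and p-value.

```python
# Numerical check of Equation (11) using the total ranks quoted above.
from scipy.stats import chi2

ranks = {"MOFDO": 32, "MODA": 35, "MOPSO": 47, "NSGA-III": 56}
n, k = 17, 4  # 17 test functions, 4 compared algorithms
chi_sq = 12 / (n * k * (k + 1)) * sum(r**2 for r in ranks.values()) - 3 * n * (k + 1)
p_value = chi2.sf(chi_sq, df=k - 1)  # df = k - 1 = 3
print(f"chi-square = {chi_sq:.4f}, p-value = {p_value:.5f}")  # ~13.0235, ~0.00459
```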
Finally, as a performance proof, Figure (5) shows the MMF4 benchmark solution landscape as an example. Figure 5(a) shows the initial random distribution with only 31 \(P_{fs}\) solutions; then, throughout the iterations, the number of well-distributed \(P_{f}\) solutions increases, as shown in Figures 5(b, c, and d).
Figure 5: MOFDO solved the MMF4 test function from an initially random solution toward Pareto front optimality.
## 4 Engineering Design Application
MOFDO is implemented in MATLAB; the code is well structured, which allows for easy modification and integration. It also enables researchers to easily understand how the algorithm works and how real-world applications can be solved with less effort. To demonstrate this, the welded beam design problem has been optimized with MOFDO; the problem definition and the results are discussed below.
The welded beam design problem is a very well-known real-world engineering design problem; it has previously been considered by other researchers as a test problem for various multi-objective algorithms, such as [58] and [59]. As shown in Figure (6), this problem has four real-parameter variables \(x=\) (_h, l, t, b_), where \(h\) is the thickness of the welds, \(l\) is the length of the welds, \(t\) is the height of the beam, and \(b\) is the width of the beam; the \(P\) in Figure (6) represents the load applied to the beam. This design problem has two objectives to be minimized, given in Equation (12); the objectives are conflicting in nature: the first is to minimize the cost of fabrication (x-axis, measured in currency) and the second is to minimize the end deflection of the welded beam (y-axis, measured in meters), as follows:
\[\begin{array}{l}Minimize\ f1(\vec{x})=1.10471h^{2}l+0.04811tb(14.0+l),\\ Minimize\ f2(\vec{x})=\frac{2.1952}{t^{3}b},\\ Subject\ to:\\ g1(\vec{x})\ \equiv 13,600-T(\vec{x})\ \geq 0,\\ g2(\vec{x})\ \equiv 30,000-\sigma(\vec{x})\ \geq 0,\\ g3(\vec{x})\ \equiv b-h\ \geq 0,\\ g4(\vec{x})\ \equiv Pc(\vec{x})-6000\ \geq 0,\\ 0.125\ \leq h,b\ \leq 5.0,\\ 0.1\ \leq l,t\ \leq 10.0\end{array} \tag{12}\]
Figure 6: The welded beam design problem
As presented in Equation (12), this problem has four constraints to be considered. A violation of any of these constraints makes the design unacceptable. The first constraint ensures that the shear stress produced at the beam's support location is less than an allowable value, which is equal to 13,600 psi. The second constraint guarantees that the normal stress produced at the beam's support location is less than the acceptable yield strength of the material, which is equal to 30,000 psi. The third constraint ensures that the beam's breadth is not less than the weld width, from a practical perspective. Constraint number four ensures that the acceptable buckling load \(Pc(\vec{x})\) of the beam is larger than the applied load \(F=6000\) lbs. The shear stress \(T(\vec{x})\), the normal stress \(\sigma(\vec{x})\), and the buckling load \(Pc(\vec{x})\) can be calculated as in Equations (13) and (14):
\[\begin{split} T(\vec{x})&=\sqrt{(T^{\prime})^{2}+(T^{\prime\prime})^{2}+\frac{lT^{\prime}T^{\prime\prime}}{\sqrt{0.25(l^{2}+(h+t)^{2})}}},\\ T^{\prime}&=\frac{6000}{\sqrt{2}\,hl},\\ T^{\prime\prime}&=\frac{6000(14+0.5l)\sqrt{0.25(l^{2}+(h+t)^{2})}}{2\left\{0.707hl\left(\frac{l^{2}}{12}+0.25(h+t)^{2}\right)\right\}}\end{split} \tag{13}\]
\[\sigma(\vec{x})=\frac{504000}{t^{2}b},\qquad Pc(\vec{x})=64746.022(1-0.0282346t)\,tb^{3} \tag{14}\]
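For readers who wish to experiment with the problem, the following sketch encodes Equations (12)-(14) as plain functions; it is a minimal transcription of the formulas above (using the 30,000 psi yield-strength limit stated in the text), not MOFDO's actual MATLAB code.

```python
# A minimal sketch of the welded beam objectives and constraints,
# transcribed from Equations (12)-(14); variable order is x = (h, l, t, b).
import math

def objectives(h, l, t, b):
    f1 = 1.10471 * h**2 * l + 0.04811 * t * b * (14.0 + l)  # fabrication cost
    f2 = 2.1952 / (t**3 * b)                                # end deflection
    return f1, f2

def constraints(h, l, t, b):
    tau_p = 6000.0 / (math.sqrt(2.0) * h * l)
    r = math.sqrt(0.25 * (l**2 + (h + t)**2))
    tau_pp = (6000.0 * (14.0 + 0.5 * l) * r
              / (2.0 * 0.707 * h * l * (l**2 / 12.0 + 0.25 * (h + t)**2)))
    tau = math.sqrt(tau_p**2 + tau_pp**2 + l * tau_p * tau_pp / r)
    sigma = 504000.0 / (t**2 * b)
    p_c = 64746.022 * (1.0 - 0.0282346 * t) * t * b**3
    # all four values must be >= 0 for a feasible design
    return [13600.0 - tau, 30000.0 - sigma, b - h, p_c - 6000.0]
```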
The welded beam design problem is optimized using MOFDO; the algorithm is applied to solve this engineering design problem for 100 iterations, using 100 search agents, with the \(P_{fs}\) stored in an archive of size 100. Figure (7) shows that the obtained \(P_{fs}\) are smoothly distributed between the two objectives (cost and deflection), and mostly lie on, or very close to, the true Pareto front known in the literature [60]. Also, MOFDO provides a wide variety of feasible solutions for decision-makers to choose from; this wide variety and smooth distribution of the obtained \(P_{fs}\) demonstrate the maturity of MOFDO in terms of its capability of tackling real-world engineering design problems effectively.
Figure 7: MOFDO results on Welded Beam Design problems
Regarding MOFDO's \(P_{fs}\) discovery rate, MOFDO starts with 17 \(P_{fs}\) in the first iteration, and then dramatically reaches 100 \(P_{fs}\) in iteration 36, as presented in Figure (8). This shows how the algorithm efficiently improves multiple initial solutions toward optimality; occasionally, some archived \(P_{fs}\) become dominated and hence are deleted from the archive. This can be seen from iterations 36 to 100 in Figure (8): the number of archived \(P_{fs}\) starts from a small value and then steadily increases until it reaches the full archive size. This shows that MOFDO constantly improves all solutions (dominated and non-dominated) throughout the iterations; this feature helps MOFDO avoid local solutions and eventually reach optimality [61].
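The archive dynamics described above reduce to a simple dominance test: a candidate enters the archive only if no member dominates it, and any members it dominates are removed. The sketch below illustrates this logic for two-objective minimization; it is a generic illustration, not MOFDO's actual archive code (which additionally prunes crowded regions when the archive is full).

```python
# Generic Pareto archive update for two-objective minimization (illustration).
def dominates(a, b):
    """True if solution a dominates b: no worse in all objectives, better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def update_archive(archive, candidate, max_size=100):
    if any(dominates(member, candidate) for member in archive):
        return archive  # candidate is dominated: archive unchanged
    archive = [m for m in archive if not dominates(candidate, m)]
    archive.append(candidate)
    return archive[:max_size]  # real implementations prune crowded regions instead
```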
## 5 Conclusions
A multi-objective version of the novel single-objective FDO is proposed, known as the multi-objective fitness dependent optimizer (MOFDO), which is inspired by the bee reproductive swarming process. During the implementation, MOFDO is treated as a typical cultural algorithm; for this purpose, situational, normative, topographical, domain, and historical knowledge were employed. MOFDO is tested on two different sets of test functions. The first set is the ZDT test functions, a classical benchmark used by many other researchers for testing MOEAs. The second set is the modern CEC 2019 multi-modal multi-objective benchmark, which is considered more challenging. The MOFDO results were compared to three other algorithms: the latest state of the art of MOPSO, NSGA-III, and MODA. The comparison showed that MOFDO outperformed the other algorithms in most cases and provided comparable results in the remaining ones. MOFDO was easily used for solving real-world problems; for example, the welded beam design problem was solved using MOFDO. It provided well-distributed solutions, which enables decision-makers to consider a wider variety of options.
The CEC 2019 benchmark complexity represents a real challenge for MOEAs compared to the classical ZDT benchmark, because it contains both local and global Pareto fronts; the algorithm must try to avoid being trapped in a local Pareto front. However, arguably, in some applications, the local
Figure 8: MOFDO's \(P_{fs}\) discovery rate
optima are preferable, as global optima solutions may not always be applicable in real-world problems [55]. Moreover, having a different decision variable space boundary for each dimension makes the CEC 2019 MMF even more difficult. NSGA-III could not produce a correct result for the MMF2 test function, as presented in Table (3); it also has difficulties with some other test functions. On the other hand, MOFDO is constructed to deal with these difficulties easily. Another major drawback of MOPSO is that it easily falls into local optima in high-dimensional spaces and has a low convergence rate in the iterative process [46]; in MOFDO, the \(fw\) and \(wf\) parameters were used to increase both the coverage and the convergence of the algorithm, and storing previous good decisions for later reuse improves convergence speed as well.
Despite that, one of the major contributions of this work is developing an MOEA with linear time and space complexity; this means that both the time and the space required by the algorithm increase linearly, which is suitable for current computing architectures. Nonetheless, a polynomial mutation mechanism is employed as a variation operator, with the use of an archive for saving the Pareto front solutions. Furthermore, extra storage is used for saving the previous paces for potential reuse in future iterations, which leads to an improvement in the algorithm's performance. Finally, hypercube grids are used in the implementation to help select the local and global guide individuals. For future work, researchers might try to improve the algorithm's performance by adapting new parameters, enhancing the learning rate and communication range between individuals, or possibly integrating or hybridizing MOFDO with other MOEAs. Furthermore, there are other interesting multi-objective real-world engineering problems to be optimized by this algorithm, as mentioned in the introduction section, such as the four-bar truss design problem, pressure vessel design, the coil compression spring design problem, the speed reducer design problem, and the car side impact design problem.
#### Declaration of interests
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
The authors declare the following financial interests/personal relationships which may be considered as potential competing interests: None
#### Data Availability Statements
The datasets generated during and/or analysed during the current study are available from the corresponding author upon reasonable request.
|
2305.19197 | Interpreting dark matter solution for $B-L$ gauge symmetry | It is shown that the solution for $B-L$ gauge symmetry with $B-L=-4,-4,+5$
assigned for three right-handed neutrinos respectively, reveals a novel
scotogenic mechanism with implied matter parity for neutrino mass generation
and dark matter stability. Additionally, the world with two-component dark
matter is hinted. | Phung Van Dong | 2023-05-30T16:42:03Z | http://arxiv.org/abs/2305.19197v2 | # Interpreting dark matter solution for \(B-L\) gauge symmetry
###### Abstract
It is shown that the solution for \(B-L\) gauge symmetry with \(B-L=-4,-4,+5\) assigned to the three right-handed neutrinos, respectively, reveals a novel scotogenic mechanism with implied matter parity for neutrino mass generation and dark matter stability. Additionally, a world with two-component dark matter is hinted at.
_Introduction_.--Of the exact conservations in physics, the conservation of baryon number minus lepton number, say \(B-L\), causes curiosity. There is no necessary principle for \(B-L\) conservation since it results directly from the standard model gauge symmetry. As a matter of fact, every standard model interaction separately preserves \(B\) and \(L\) such that \(B-L\) is conserved and anomaly-free, if three right-handed neutrinos, say \(\nu_{1R},\nu_{2R},\nu_{3R}\), are simply imposed. In the literature, there are two integer solutions for \(B-L\), such as \(-1,-1,-1\) and \(-4,-4,+5\), according to \(\nu_{1R},\nu_{2R},\nu_{3R}\)[1]. In contrast to electric and color charges, the excess of baryons over antibaryons of the universe suggests that \(B-L\) is broken. \(B-L\) likely occurs in left-right symmetry and grand unification, which support the first solution and a seesaw mechanism for neutrino mass generation, but no such traditional theories manifestly explain the existence of dark matter, similarly to the standard model. I argue that the second solution provides naturally both dark matter and neutrino mass. In a period, the matter parity which stabilizes dark matter has been found usefully in supersymmetry. I argue that the matter parity naturally arises from the second solution for \(B-L\) gauge symmetry, without necessity of supersymmetry.
_Proposal_.--Gauge symmetry is given by \(SU(3)_{C}\otimes SU(2)_{L}\otimes U(1)_{Y}\otimes U(1)_{B-L}\). Field content according to this symmetry is supplied in Tab. 1, in which \(a=1,2,3\) and \(\alpha=1,2\) indicate family indices. The usual Higgs field \(H\) has a VEV \(\langle H\rangle=(0,v/\sqrt{2})\) breaking the electroweak symmetry and generating mass for usual particles. The new Higgs fields \(\phi_{1,2}\) have VEVs, \(\langle\phi_{1}\rangle=w_{1}/\sqrt{2}\) and \(\langle\phi_{2}\rangle=w_{2}/\sqrt{2}\), inducing Majorana masses for \(\nu_{\alpha R}\)
and \(\nu_{3R}\) respectively, as well as breaking \(B-L\) and determining a residual matter parity \(P=(-1)^{3(B-L)+2s}\) (see below), which is included in Tab. 1 too. I impose \(w_{1,2}\gg v=246\) GeV for consistency with the standard model. Additionally, the scalar \(\chi\) couples \(l_{aL}\) to \(\nu_{\alpha R}\), while the scalar \(\eta\) couples \(\chi\) to \(H\phi_{1}\) as well as to \(\phi_{2}\), which radiatively generates neutrino mass (see Fig. 1). The fields \(\chi,\eta\) have vanishing VEVs, preserved by the matter parity conservation. This realizes a scotogenic scheme with automatic matter parity from the model itself, which stabilizes the dark matter candidates \(\nu_{\alpha R},\chi^{0},\eta\), in contrast to [2], for which a \(Z_{2}\) is input _ad hoc_.
_Matter parity._--A \(B-L\) transformation has the form, \(P=e^{ix(B-L)}\), where \(x\) is a parameter. \(P\) conserves both the vacua \(w_{1,2}\), i.e. \(Pw_{1}=w_{1}\) and \(Pw_{2}=w_{2}\), given that \(e^{i8x}=1\) and \(e^{-i10x}=1\). It leads to \(x=k\pi\), thus \(P=(-1)^{k(B-L)}\), for \(k\) integer. Acting \(P\) on every field, I derive \(P=1\) for minimal \(|k|=6\), except the identity with \(k=0\). This defines a residual symmetry \(Z_{6}=\{1,p,p^{2},p^{3},p^{4},p^{5}\}\) generated by \(p=(-1)^{B-L}\), which factorizes as
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline Field & \(SU(3)_{C}\) & \(SU(2)_{L}\) & \(U(1)_{Y}\) & \(U(1)_{B-L}\) & \(P\) \\ \hline \(l_{aL}=\begin{pmatrix}\nu_{aL}\\ e_{aL}\end{pmatrix}\) & 1 & 2 & \(-1/2\) & \(-1\) & \(+\) \\ \(\nu_{\alpha R}\) & 1 & 1 & 0 & \(-4\) & \(-\) \\ \(\nu_{3R}\) & 1 & 1 & 0 & 5 & \(+\) \\ \(e_{aR}\) & 1 & 1 & \(-1\) & \(-1\) & \(+\) \\ \(q_{aL}=\begin{pmatrix}u_{aL}\\ d_{aL}\end{pmatrix}\) & 3 & 2 & 1/6 & 1/3 & \(+\) \\ \(u_{aR}\) & 3 & 1 & 2/3 & 1/3 & \(+\) \\ \(d_{aR}\) & 3 & 1 & \(-1/3\) & 1/3 & \(+\) \\ \(H=\begin{pmatrix}H^{+}\\ H^{0}\end{pmatrix}\) & 1 & 2 & 1/2 & 0 & \(+\) \\ \(\phi_{1}\) & 1 & 1 & 0 & 8 & \(+\) \\ \(\phi_{2}\) & 1 & 1 & 0 & \(-10\) & \(+\) \\ \(\chi=\begin{pmatrix}\chi^{0}\\ \chi^{-}\end{pmatrix}\) & 1 & 2 & \(-1/2\) & 3 & \(-\) \\ \(\eta\) & 1 & 1 & 0 & 5 & \(-\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Field representation content of the model.
\(Z_{6}=Z_{2}\otimes Z_{3}\), where \(Z_{2}=\{1,p^{3}\}\) is the invariant subgroup of \(Z_{6}\), while \(Z_{3}=\{[1],[p^{2}],[p^{4}]\}\) is the quotient group of \(Z_{6}\) by \(Z_{2}\), with each coset element containing two elements of \(Z_{6}\), i.e. \([g]=\{g,gp^{3}\}\), thus \([1]=[p^{3}]=Z_{2}\), \([p^{2}]=[p^{5}]=\{p^{2},p^{5}\}\), and \([p^{4}]=[p]=\{p,p^{4}\}\). Since \([p^{4}]=[p^{2}]^{2}=[p^{2}]^{*}\) and \([p^{2}]^{3}=[1]\), \(Z_{3}\) is generated by the generator \([p^{2}]=[\omega^{3(B-L)}]\), where \(\omega=e^{i2\pi/3}\) is the cube root of unity. Since \(3(B-L)\) is integer due to \(p^{6}=1\), \(Z_{3}\) has three irreducible representations \(\underline{1}\), \(\underline{1}^{\prime}\), and \(\underline{1}^{\prime\prime}\) according to \([p^{2}]=[1]\to 1\), \([p^{2}]=[\omega]\rightarrow\omega\), and \([p^{2}]=[\omega^{2}]\rightarrow\omega^{2}\) respectively, which are homomorphic from those of \(Z_{6}\) independent of the signs \(p^{3}=\pm 1\) that identify \(Z_{6}\) elements in a coset [3]. I obtain \([p^{2}]=[\omega]\rightarrow\omega\sim\underline{1}^{\prime}\) for quarks, while \([p^{2}]=[1]\to 1\sim\underline{1}\) for all other fields. Hence, \(Z_{3}\) transforms nontrivially only for quarks, isomorphic to the center of the color group. In other words, the theory automatically conserves \(Z_{3}\), accidentally preserved by \(SU(3)_{C}\). Omitting \(Z_{3}\), what the residual symmetry remains is only \(Z_{2}=\{1,p^{3}\}\), generated by the generator \(p^{3}=(-1)^{3(B-L)}\). Since the spin parity \(p_{s}=(-1)^{2s}\) is always conserved by the Lorentz symmetry, I redefine
\[P\equiv p^{3}\times p_{s}=(-1)^{3(B-L)+2s}, \tag{1}\]
to be matter parity similar to that in supersymmetry, governing this model. The matter-parity group \(M=\{1,P\}\) instead of \(Z_{2}\) has two irreducible representations \(\underline{1}\) and \(\underline{1}^{\prime}\) according to \(P=1\) and \(P=-1\) respectively, collected in Tab. 1 for every field. The lightest of odd fields \(\nu_{\alpha R},\eta,\chi\) is absolutely stabilized by the matter parity conservation, providing a dark matter candidate. However, since \(\nu_{3R}\) does not singly couple to standard model fields at
Figure 1: Neutrino mass generation induced by dark matter solution of \(B-L\) gauge symmetry.
renormalizable level, similar to the proton, \(\nu_{3R}\) has a lifetime longer than the age of the universe, supplying an alternative dark matter candidate, a kind of minimal dark matter.
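The parity assignments of Tab. 1 follow mechanically from Equation (1); the short check below recomputes \(P\) for every field (fermions carry \(2s=1\), scalars \(2s=0\)) and reproduces the table's sign column.

```python
# Check that P = (-1)^(3(B-L)+2s) of Equation (1) reproduces Tab. 1's parities.
from fractions import Fraction as F

fields = {  # name: (B-L charge, 2s)
    "l_L": (F(-1), 1), "nu_aR": (F(-4), 1), "nu_3R": (F(5), 1), "e_R": (F(-1), 1),
    "q_L": (F(1, 3), 1), "u_R": (F(1, 3), 1), "d_R": (F(1, 3), 1),
    "H": (F(0), 0), "phi_1": (F(8), 0), "phi_2": (F(-10), 0),
    "chi": (F(3), 0), "eta": (F(5), 0),
}
for name, (bl, two_s) in fields.items():
    exponent = int(3 * bl) + two_s  # 3(B-L) is an integer for every field
    print(name, "+" if exponent % 2 == 0 else "-")
```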
_Neutrino mass_.--I write the scalar potential relevant to \(\eta,\chi\), such as
\[V \supset \mu_{1}^{2}\eta^{*}\eta+\mu_{2}^{2}\chi^{\dagger}\chi+\lambda_{1}( \eta^{*}\eta)^{2}+\lambda_{2}(\chi^{\dagger}\chi)^{2}+\lambda_{3}(\eta^{*}\eta )(\chi^{\dagger}\chi) \tag{2}\] \[+(\eta^{*}\eta)(\lambda_{4}\phi_{1}^{*}\phi_{1}+\lambda_{5}\phi_{ 2}^{*}\phi_{2}+\lambda_{6}H^{\dagger}H)+(\chi^{\dagger}\chi)(\lambda_{7}\phi_{ 1}^{*}\phi_{1}+\lambda_{8}\phi_{2}^{*}\phi_{2}+\lambda_{9}H^{\dagger}H)\] \[+\lambda_{10}(\chi^{\dagger}H)(H^{\dagger}\chi)+\mu(\phi_{2}\eta \eta+H.c.)+\lambda(H\chi\eta\phi_{1}^{*}+H.c.).\]
The trivial vacua for \(\eta,\chi\) require \(\mu_{1,2}^{2}>0\). Additionally, boundedness of the potential from below demands \(\lambda_{1,2}>0\), which follows from \(V>0\) when \(\eta\) or \(\chi\) separately tends to infinity, as well as other conditions on the scalar self-couplings when two or more scalar fields simultaneously tend to infinity. Let \(\chi^{0}=(R+iI)/\sqrt{2}\) and \(\eta=(R_{1}+iI_{1})/\sqrt{2}\). Further, denote \(M_{1}^{2}=\mu_{1}^{2}+\frac{\lambda_{4}}{2}w_{1}^{2}+\frac{\lambda_{5}}{2}w_{2}^{2}+\frac{\lambda_{6}}{2}v^{2}\) and \(M_{2}^{2}=\mu_{2}^{2}+\frac{\lambda_{7}}{2}w_{1}^{2}+\frac{\lambda_{8}}{2}w_{2}^{2}+\frac{\lambda_{9}}{2}v^{2}\), which are both at least at the \(w_{1,2}\) scale. The field \(\chi^{\pm}\) is a physical field by itself with mass \(m_{\chi^{\pm}}^{2}=M_{2}^{2}+\frac{\lambda_{10}}{2}v^{2}\). The fields \(R,R_{1}\) and \(I,I_{1}\) mix in each pair, such as
\[V \supset \frac{1}{2}\begin{pmatrix}R_{1}&R\end{pmatrix}\begin{pmatrix}M_{1 }^{2}+\sqrt{2}\mu w_{2}&-\frac{1}{2}\lambda vw_{1}\\ -\frac{1}{2}\lambda vw_{1}&M_{2}^{2}\end{pmatrix}\begin{pmatrix}R_{1}\\ R\end{pmatrix} \tag{3}\] \[+\frac{1}{2}\begin{pmatrix}I_{1}&I\end{pmatrix}\begin{pmatrix}M_{1 }^{2}-\sqrt{2}\mu w_{2}&\frac{1}{2}\lambda vw_{1}\\ \frac{1}{2}\lambda vw_{1}&M_{2}^{2}\end{pmatrix}\begin{pmatrix}I_{1}\\ I\end{pmatrix}.\]
I define two mixing angles,
\[t_{2\theta_{R}}=\frac{-\lambda vw_{1}}{M_{2}^{2}-M_{1}^{2}-\sqrt{2}\mu w_{2}}, \hskip 14.226378ptt_{2\theta_{I}}=\frac{\lambda vw_{1}}{M_{2}^{2}-M_{1}^{2}+ \sqrt{2}\mu w_{2}}. \tag{4}\]
The physical fields are
\[R_{1}^{\prime}=c_{\theta_{R}}R_{1}-s_{\theta_{R}}R,\hskip 14.226378ptR ^{\prime}=s_{\theta_{R}}R_{1}+c_{\theta_{R}}R, \tag{5}\] \[I_{1}^{\prime}=c_{\theta_{I}}I_{1}-s_{\theta_{I}}I,\hskip 14.226378ptI ^{\prime}=s_{\theta_{I}}I_{1}+c_{\theta_{I}}I, \tag{6}\]
with respective masses,
\[m_{R_{1}^{\prime}}^{2}\simeq M_{1}^{2}+\sqrt{2}\mu w_{2}+\frac{ \frac{1}{4}\lambda^{2}v^{2}w_{1}^{2}}{M_{1}^{2}+\sqrt{2}\mu w_{2}-M_{2}^{2}}, \hskip 14.226378ptm_{R^{\prime}}^{2}\simeq M_{2}^{2}+\frac{\frac{1}{4} \lambda^{2}v^{2}w_{1}^{2}}{M_{2}^{2}-M_{1}^{2}-\sqrt{2}\mu w_{2}}, \tag{7}\] \[m_{I_{1}^{\prime}}^{2}\simeq M_{1}^{2}-\sqrt{2}\mu w_{2}+\frac{ \frac{1}{4}\lambda^{2}v^{2}w_{1}^{2}}{M_{1}^{2}-\sqrt{2}\mu w_{2}-M_{2}^{2}}, \hskip 14.226378ptm_{I^{\prime}}^{2}\simeq M_{2}^{2}+\frac{\frac{1}{4} \lambda^{2}v^{2}w_{1}^{2}}{M_{2}^{2}-M_{1}^{2}+\sqrt{2}\mu w_{2}}, \tag{8}\]
where the approximations come from \(|\theta_{R,I}|\ll 1\) due to \(v\ll w_{1,2}\), and it is clear that the \(R,I\) and \(R_{1},I_{1}\) masses are now separated.
The Yukawa Lagrangian relevant to neutral fermions is
\[{\cal L}_{\rm Yuk}\supset h_{a\alpha}\bar{l}_{aL}\chi\nu_{\alpha R}+\frac{1}{2}t_{\alpha\beta}\bar{\nu}_{\alpha R}^{c}\nu_{\beta R}\phi_{1}+\frac{1}{2}t_{33}\bar{\nu}_{3R}^{c}\nu_{3R}\phi_{2}+H.c. \tag{9}\]
When \(\phi_{1,2}\) develop VEVs, \(\nu_{R}\)'s obtain Majorana masses, such as
\[m_{\nu_{1R}}=-t_{11}\frac{w_{1}}{\sqrt{2}},\ \ \ \ m_{\nu_{2R}}=-t_{22}\frac{w_{1} }{\sqrt{2}},\ \ \ \ m_{\nu_{3R}}=-t_{33}\frac{w_{2}}{\sqrt{2}}, \tag{10}\]
where I assume \(t_{\alpha\beta}\) to be flavor diagonal, i.e. \(\nu_{1,2,3R}\) are physical fields by themselves. This Yukawa Lagrangian combined with the above scalar potential, i.e. \({\cal L}\supset\frac{h_{a\alpha}}{\sqrt{2}}\bar{\nu}_{aL}(c_{\theta_{R}}R^{\prime}+ic_{\theta_{I}}I^{\prime}-s_{\theta_{R}}R^{\prime}_{1}-is_{\theta_{I}}I^{\prime}_{1})\nu_{\alpha R}-\frac{1}{2}m_{\nu_{\alpha R}}\nu_{\alpha R}^{2}+H.c.-\frac{1}{2}m_{R^{\prime}}^{2}R^{\prime 2}-\frac{1}{2}m_{I^{\prime}}^{2}I^{\prime 2}-\frac{1}{2}m_{R^{\prime}_{1}}^{2}R^{\prime 2}_{1}-\frac{1}{2}m_{I^{\prime}_{1}}^{2}I^{\prime 2}_{1}\), up to kinetic terms, yields the necessary ingredients for the diagram in Fig. 1 in the mass basis. That said, the loop is propagated by the physical fermions \(\nu_{1,2R}\) and the physical scalars \(R^{\prime},I^{\prime},R^{\prime}_{1},I^{\prime}_{1}\), inducing a neutrino mass in the form \({\cal L}\supset-\frac{1}{2}\bar{\nu}_{aL}(m_{\nu})_{ab}\nu_{bL}^{c}+H.c.\), where
\[(m_{\nu})_{ab}=\frac{h_{a\alpha}h_{b\alpha}m_{\nu_{\alpha R}}}{32\pi^{2}}\left(\frac{c_{\theta_{R}}^{2}m_{R^{\prime}}^{2}\ln\frac{m_{\nu_{\alpha R}}^{2}}{m_{R^{\prime}}^{2}}}{m_{\nu_{\alpha R}}^{2}-m_{R^{\prime}}^{2}}-\frac{c_{\theta_{I}}^{2}m_{I^{\prime}}^{2}\ln\frac{m_{\nu_{\alpha R}}^{2}}{m_{I^{\prime}}^{2}}}{m_{\nu_{\alpha R}}^{2}-m_{I^{\prime}}^{2}}+\frac{s_{\theta_{R}}^{2}m_{R^{\prime}_{1}}^{2}\ln\frac{m_{\nu_{\alpha R}}^{2}}{m_{R^{\prime}_{1}}^{2}}}{m_{\nu_{\alpha R}}^{2}-m_{R^{\prime}_{1}}^{2}}-\frac{s_{\theta_{I}}^{2}m_{I^{\prime}_{1}}^{2}\ln\frac{m_{\nu_{\alpha R}}^{2}}{m_{I^{\prime}_{1}}^{2}}}{m_{\nu_{\alpha R}}^{2}-m_{I^{\prime}_{1}}^{2}}\right). \tag{11}\]
It is noteworthy that the divergent parts arising from the individual one-loop contributions by \(R^{\prime},I^{\prime},R^{\prime}_{1},I^{\prime}_{1}\), having a common form \(C_{\rm UV}=\frac{1}{\epsilon}-\gamma+\ln 4\pi+1\) in dimensional regularization \(\epsilon=2-d/2\to 0\), are exactly cancelled due to \((c_{\theta_{R}}^{2}-c_{\theta_{I}}^{2}+s_{\theta_{R}}^{2}-s_{\theta_{I}}^{2})C_{\rm UV}=0\). Additionally, since the \(R^{\prime},I^{\prime}\) mass splitting \((m_{R^{\prime}}^{2}-m_{I^{\prime}}^{2})/m_{R^{\prime},I^{\prime}}^{2}\sim\lambda^{2}v^{2}/w_{1,2}^{2}\) and the mixing angles \(\theta_{R,I}^{2}\sim\lambda^{2}v^{2}/w_{1,2}^{2}\) are suppressed, in addition to the loop factor \(1/32\pi^{2}\), the resultant neutrino mass in (11) is manifestly small, scaling as \(m_{\nu}\sim\lambda^{2}h^{2}v^{2}/32\pi^{2}w_{1,2}\), as expected.
Since \(\eta\) is a mediator field, possibly coming from a more fundamental theory, I particularly assume it to be much heavier than other dark fields, i.e. \(\mu_{1}\gg\mu_{2},w_{1,2}\), thus \(M_{1}\simeq\mu_{1}\) and one can take the soft coupling \(\mu\lesssim\mu_{1}\). In this case, the diagram in Fig. 1 approximates that in the basic scotogenic setup with the vertex \(\frac{1}{2}\bar{\lambda}(H\chi)^{2}+H.c.\) induced by \(\eta\) to be \(\bar{\lambda}=\lambda^{2}\mu w_{2}w_{1}^{2}/\sqrt{2}\mu_{1}^{4}\sim(w_{1,2}/ \mu_{1})^{3}\ll 1\) explaining why \(\bar{\lambda}\) is necessarily small [2]. Indeed, it is clear that \(\theta_{R}\simeq-\theta_{I}\simeq\lambda vw_{1}/2\mu_{1}^{2}\), commonly called \(\theta=|\theta_{R,I}|\). The contribution of
\(R^{\prime}_{1},I^{\prime}_{1}\) (i.e. \(\eta\)) in the last two terms in (11) is proportional to \(\theta^{2}\ln m^{2}_{R^{\prime}_{1}}/m^{2}_{I^{\prime}_{1}}\simeq\bar{\lambda}v^{ 2}/\mu_{1}^{2}\), which is more suppressed than that by \(\chi^{0}\) in the first two terms in (11) due to the \(R^{\prime},I^{\prime}\) mass splitting, proportional to \((m^{2}_{R^{\prime}}-m^{2}_{I^{\prime}})/m^{2}_{R^{\prime},I^{\prime}}\simeq \bar{\lambda}v^{2}/m^{2}_{R^{\prime},I^{\prime}}\), where notice that \(m_{\nu_{\alpha R}}\sim m_{R^{\prime},I^{\prime}}\sim(\mu_{2},w_{1,2})\ll\mu_ {1}\). That said, the neutrino mass is dominantly contributed by \(R^{\prime},I^{\prime}\) (i.e. \(\chi^{0}\)), approximated as
\[(m_{\nu})_{ab}\simeq\frac{\bar{\lambda}v^{2}}{32\pi^{2}}\frac{h_{a\alpha}h_{ b\alpha}m_{\nu_{\alpha R}}}{m_{\chi^{0}}^{2}-m_{\nu_{\alpha R}}^{2}}\left(1- \frac{m^{2}_{\nu_{\alpha R}}}{m_{\chi^{0}}^{2}-m_{\nu_{\alpha R}}^{2}}\ln \frac{m^{2}_{\chi^{0}}}{m_{\nu_{\alpha R}}^{2}}\right), \tag{12}\]
because \(m^{2}_{\chi^{0}}\equiv(m^{2}_{R^{\prime}}+m^{2}_{I^{\prime}})/2\) is much bigger than the \(R^{\prime},I^{\prime}\) mass splitting. This matches the well-established result, but the smallness of the coupling \(\bar{\lambda}\) or exactly of observed neutrino mass \(m_{\nu}\sim 0.1\) eV given that \(m_{\nu_{\alpha R}}\sim m_{\chi^{0}}\sim 1\) TeV and \(h\sim 0.1\) is manifestly solved, since \(\bar{\lambda}=(\lambda^{2}/\sqrt{2})(\mu/\mu_{1})(w_{2}w_{1}^{2}/\mu_{1}^{3}) \sim 10^{-6}\) for \(w_{1,2}/\mu_{1}=10^{-2}\) and \(\lambda\sim 1\sim\mu/\mu_{1}\), as desirable.
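As a rough numerical cross-check of Equation (12), the snippet below evaluates the dominant \(\chi^{0}\) contribution for a single flavour with assumed benchmark values (\(\bar{\lambda}\sim 10^{-6}\), \(h\sim 0.1\), TeV-scale dark fields); the exact spectrum is not fixed by the text, so the output is only an order-of-magnitude estimate of the expected sub-eV scale.

```python
# Order-of-magnitude evaluation of Equation (12), single flavour.
# All parameter values below are assumed benchmarks, not fitted numbers.
import math

v = 246.0        # GeV, electroweak VEV
lam_bar = 1e-6   # induced quartic coupling (assumed)
h = 0.1          # Yukawa coupling (assumed)
m_chi = 1.2e3    # GeV, chi^0 mass (assumed)
m_nr = 1.0e3     # GeV, nu_R Majorana mass (assumed)

d = m_chi**2 - m_nr**2
m_nu = (lam_bar * v**2 / (32 * math.pi**2)) * h**2 * m_nr / d \
       * (1 - m_nr**2 / d * math.log(m_chi**2 / m_nr**2))
print(f"m_nu ~ {m_nu * 1e9:.2e} eV")  # GeV -> eV; sub-eV scale, as quoted
```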
_Dark matter_.--Differing from the scotogenic (odd) fields \(\nu_{\alpha R},\chi^{0},\eta\), the third right-handed neutrino (\(\nu_{3R}\)) is accidentally stabilized by the current model by itself despite the fact that it is even. Further, this stability is maintained even if \(\nu_{3R}\) is heavier than the scotogenic fields and others. This results from the \(B-L\) gauge symmetry solution for which \(\nu_{3R}\) has a charge \(B-L=5\) and is thus coupled only in pair in renormalizable interactions, say \(\nu_{3R}\nu_{3R}\phi_{2}\) and \(\bar{\nu}_{3R}\nu_{3R}Z^{\prime}_{B-L}\). Given an effective interaction that leads to \(\nu_{3R}\) decay, the one with minimal dimension is \(\frac{1}{\Lambda^{6}}\bar{l}_{L}\tilde{H}\nu_{3R}(\phi_{1}\phi_{2})^{3}\), \(\frac{1}{\Lambda^{6}}\bar{l}_{L}\chi\nu_{3R}\eta^{*}(\phi_{1}\phi_{2})^{2}\), or \(\frac{1}{\Lambda^{6}}\nu_{\alpha R}\nu_{3R}\eta(\phi_{1}\phi_{2})^{3}\), where \(\Lambda\sim 10^{16}\) GeV would be a scale of GUT, broken for determining the effective couplings, conserving the current gauge symmetry. After \(B-L\) breaking, \(\nu_{3R}\) gets mass and possibly decays to normal fields \(l_{L}H\), dark fields \(\nu^{c}_{\alpha R}\eta^{*}\), or mixed product \(l_{L}\eta\chi^{*}\), with the rate suppressed by \(\Gamma_{\nu_{3R}}\sim(w_{1,2}/\Lambda)^{12}m_{\nu_{3R}}\to 0\). The field \(\nu_{3R}\) is absolutely stable.
The field \(\nu_{3R}\) communicates with normal matter through the \(Z^{\prime}_{B-L}\) and \(\phi_{2}\) portals only, unlike the scotogenic fields that interact directly with usual leptons and Higgs field additionally. Obviously the \(\phi_{2}\) portal couples to normal matter only through a mixing with
Figure 2: Annihilation of accidental \(\nu_{3R}\) dark matter to normal matter.
the usual Higgs field, giving a small contribution to dark matter observables. The gauge portal dominantly contributes to dark matter annihilation to normal matter via \(s\)-channel diagrams as in Fig. 2, where \(f\) is every fermion, possibly including odd fields. The annihilation cross-section is proportional to \(\langle\sigma v\rangle\sim g_{B-L}^{4}m_{\nu_{3R}}^{2}/(4m_{\nu_{3R}}^{2}-m_{Z_{B-L}^{\prime}}^{2})^{2}\). Hence, the \(Z^{\prime}\) mass resonance \(m_{\nu_{3R}}\simeq\frac{1}{2}m_{Z_{B-L}^{\prime}}\) will set the correct relic density for dark matter. Note that \(\nu_{3R}\) is a Majorana field, scattering with nucleons in direct detection only via a spin-dependent effective interaction through \(Z_{B-L}^{\prime}\) exchange. However, this kind of interaction vanishes for quarks, hence predicting a negative search result, as currently measured [4].
Last, but not least, the lightest of odd fields \(\nu_{\alpha R},\chi^{0},\eta\) is stabilized by the matter parity, contributing to dark matter too. As a result, this model presents a promising scenario for two-component dark matter [5].
_Concluding remarks_.--The dark side of the \(B-L\) gauge symmetry is perhaps associated with three right-handed neutrinos that possess \(B-L=-4,-4,+5\), respectively. This theory implies a unique matter parity as residual gauge symmetry, stabilizing scotogenic fields in a way different from the hypothesis of superparticles. Besides explaining the scotogenic neutrino mass generation and dark matter candidate, the model reveals a second component for dark matter, \(\nu_{3R}\) with \(B-L=5\).
|
2308.11537 | BELB: a Biomedical Entity Linking Benchmark | Biomedical entity linking (BEL) is the task of grounding entity mentions to a
knowledge base. It plays a vital role in information extraction pipelines for
the life sciences literature. We review recent work in the field and find that,
as the task is absent from existing benchmarks for biomedical text mining,
different studies adopt different experimental setups making comparisons based
on published numbers problematic. Furthermore, neural systems are tested
primarily on instances linked to the broad coverage knowledge base UMLS,
leaving their performance to more specialized ones, e.g. genes or variants,
understudied. We therefore developed BELB, a Biomedical Entity Linking
Benchmark, providing access in a unified format to 11 corpora linked to 7
knowledge bases and spanning six entity types: gene, disease, chemical,
species, cell line and variant. BELB greatly reduces preprocessing overhead in
testing BEL systems on multiple corpora offering a standardized testbed for
reproducible experiments. Using BELB we perform an extensive evaluation of six
rule-based entity-specific systems and three recent neural approaches
leveraging pre-trained language models. Our results reveal a mixed picture
showing that neural approaches fail to perform consistently across entity
types, highlighting the need of further studies towards entity-agnostic models. | Samuele Garda, Leon Weber-Genzel, Robert Martin, Ulf Leser | 2023-08-22T16:05:18Z | http://arxiv.org/abs/2308.11537v1 | [
###### Abstract
Motivation: Biomedical entity linking (BEL) is the task of grounding entity mentions to a knowledge base. It plays a vital role in information extraction pipelines for the life sciences literature. We review recent work in the field and find that, as the task is absent from existing benchmarks for biomedical text mining, different studies adopt different experimental setups making comparisons based on published numbers problematic. Furthermore, neural systems are tested primarily on instances linked to the broad coverage knowledge base UMLS, leaving their performance to more specialized ones, e.g. genes or variants, understudied.
Results: We therefore developed **BELB**, a Biomedical Entity Linking **B**enchmark, providing access in a unified format to 11 corpora linked to 7 knowledge bases and spanning six entity types: gene, disease, chemical, species, cell line and variant. BELB greatly reduces preprocessing overhead in testing BEL systems on multiple corpora offering a standardized testbed for reproducible experiments. Using BELB we perform an extensive evaluation of six rule-based entity-specific systems and three recent neural approaches leveraging pre-trained language models. Our results reveal a mixed picture showing that neural approaches fail to perform consistently across entity types, highlighting the need of further studies towards entity-agnostic models.
BELB: a Biomedical Entity Linking Benchmark
Samuele Garda, 1,* Leon Weber-Genzel,2 Robert Martin and Ulf Leser\({}^{1,*}\)
## 1 Introduction
The task of assigning entity mentions found in biomedical text to a knowledge base (KB) entity is known as Biomedical Entity Linking\({}^{1}\) (BEL). Texts in the biomedical domain are rich in ambiguous expressions, with abbreviations being a prominent example, e.g.: "WSS" can be either "Wrinkly skin syndrome" or "Weaver-Smith syndrome". BEL resolves such ambiguities and is therefore a crucial component in many downstream applications. For instance, it is used to index PubMed (Mork et al., 2013), a primary archive of biomedical literature.
Footnote 1: In some studies “entity linking” denotes the joint entity recognition and linking process. We however refer exclusively to the grounding step. The task is also known as Named Entity Normalization and we will use “linking”, “grounding” and “normalizing” interchangeably throughout the text.
Although several benchmarks have been developed for biomedical text mining, e.g. BLUE (Peng et al., 2019) and BLURB (Gu et al., 2021), BEL is notably absent from all of them. GeneTuring (Hou and Ji, 2023) contains a module to test normalization, but covers only genes and is specific to models built on the GPT-3 architecture. The lack of a standardized evaluation setup translates into a wide variety of approaches: different studies use different combinations of corpus and KB and different evaluation protocols. These differences severely limit direct comparison of results (see Appendix A).
In the biomedical domain different entity types require normalization to different specialized KBs (Wei et al., 2019), e.g. species to NCBI Taxonomy (Scott, 2012) but genes to NCBI Gene (Brown et al., 2015). Yet, important types such as genes and variants are completely absent from corpora commonly used to evaluate neural BEL approaches (see 2.1.1), which instead only target UMLS. Although adapting neural approaches to
other KBs is possible, it leaves open the question of whether their performance transfers across entity types. Additionally, as corpora are distributed in different formats, developing new BEL approaches (or adapting existing ones to new corpora) requires writing new input-output and quality assurance routines, e.g. to correct wrong mentions boundaries, increasing the overall engineering turnaround.
To facilitate research on BEL, we introduce **BELB**, a **B**iomedical **E**ntity **L**inking **B**enchmark. BELB provides access to 11 corpora linked to 7 KBs. All components undergo extensive data cleaning and are converted into a _unified format_, covering six biomedical entity types (gene, disease, chemical, species, cell line and variant). As shown in Figure 1, BELB significantly lowers the barrier for research in the field, allowing to (i) train models on corpora in the highest quality possible and (ii) fairly compare them against other approaches with minimal preprocessing overhead (see Appendix B for a simple showcase). Using BELB, we perform an extensive comparison of **six rule-based domain-specific systems and three neural methods**. Our findings show that results of neural approaches do not transfer across entity types, with specialized rule-based systems still being the best option for the gene and disease entity types. We hope that our publicly available benchmark will be adopted by future work, allowing to fairly evaluate approaches and accelerate progress towards more robust neural models.
## 2 Materials and methods
In this study we introduce BELB, a benchmark for the standardized evaluation of models for BEL. The task is formulated as predicting an entity \(e\in E\) from a KB given a document \(d\) and an entity mention \(m\), a pair of start and end positions \(\langle m_{s},m_{e}\rangle\) indicating a span in \(d\). We use BELB to compare rule-based domain-specific systems and state-of-the-art neural approaches. In all experiments we use in-KB gold mentions: each mention has a valid gold KB entity (Roder et al., 2018) and its position in \(d\) is given.
### Biomedical Entity Linking Benchmark
We report an overview of the 11 corpora and 7 KBs available in BELB in Table 1 and 2, respectively. Their detailed description can be found in Appendix C. In the following we outline crucial properties of BEL and highlight how they are accounted for in BELB, allowing it to comprehensively analyze and fairly evaluate BEL models.
#### 2.1.1 Specialized knowledge bases
In biomedical information extraction instances of multiple entity types are linked to specialized KBs (Wei et al., 2019). However, recent studies in the NLP community primarily focus on the MedMentions corpus linking to UMLS (Liu et al., 2021; Zhang et al., 2022; Agarwal et al., 2022 inter alia). Additionally, in MedMentions, entity types such as diseases and genes are covered only marginally or not at all, respectively (see Appendix D).
This calls into question how well results obtained in this setting can be transferred for instance to publications in genomics or molecular biology in general. In BELB we cover six entity types
Figure 1: We illustrate the main advantages of BELB. In (a) we see the current stand of experimental setups for biomedical entity linking. Different studies use different (i) preprocessing (I/O routines), (ii) combinations of corpora and KBs and (iii) evaluation protocols, ultimately making published numbers not directly comparable. With BELB (b) researcher have access to (i) uniformly preprocessed corpora and KBs, which can be accessed programmatically and (ii) a standardized evaluation protocol greatly reducing preprocessing overhead and maximizing comparability and reproducibility.
(gene, species, disease, chemicals, cell line and variant) represented by 11 corpora linked to 7 specialized KBs (for comparison with previous studies we include UMLS as well). We design a _unified schema_ to harmonize all KBs (see Appendix E). This allows to test a model's ability to preserve performance across multiple pairs of corpus and KB with minimal preprocessing overhead.
#### 2.1.2 Unseen entities and synonyms
Corpora typically cover only a small fraction of all entities in a KB. Additionally, biomedical entities present multiple names (_synonyms_), e.g. both "Oculootofacial dysplasia" and "Burn-McKeown Syndrome" are valid names of "MeSH:C563682". Hence, if an entity is in the training set, this does not imply that all its surface forms are included. In BELB we assign a unique identifier to each mention and provide lists of mentions of (i) unseen entities, i.e. present in the test set but not in the train one (_zero-shot_) and (ii) train entities occurring in the test set but with different (case-insensitive) surface forms (Tutubalina et al., 2020). This allows to easily report a model's performance in (i) generalizing to new entities and (ii) recognizing known ones appearing in different forms.
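The sketch below illustrates how the two mention lists can be derived, assuming mentions are simple (surface form, entity identifier) pairs; BELB's actual data model and API may differ.

```python
# A minimal sketch of deriving zero-shot and stratified test mentions,
# assuming (surface_form, entity_id) pairs; BELB's actual API may differ.
def split_test_mentions(train_mentions, test_mentions):
    train_entities = {entity for _, entity in train_mentions}
    train_surfaces = {(surface.lower(), entity) for surface, entity in train_mentions}
    zero_shot, stratified = [], []
    for surface, entity in test_mentions:
        if entity not in train_entities:
            zero_shot.append((surface, entity))       # unseen entity
        elif (surface.lower(), entity) not in train_surfaces:
            stratified.append((surface, entity))      # seen entity, unseen name
    return zero_shot, stratified
```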
#### 2.1.3 Homonyms
Discriminating mentions with the same surface form but representing different entities (_homonyms_) by their context is indispensable to BEL. This is because in biomedical KBs the same synonym can be associated to multiple entities. This is especially the case of abbreviations. For instance, as in example (a) in Table
\begin{table}
\begin{tabular}{l|c|c|c|c|c} & Documents (train / dev / test) & Annotations (train / dev / test) & 0-shot & Stratified \\ \hline
**Disease** & & & & & \\ NCBI Disease (Dogan et al., 2014) & 592 / 100 / 100 & 5,133 / 787 / 960 & 150 (15.62\%) & 185 (19.27\%) \\ BC5CDR (D) (Li et al., 2016) & 500 / 500 / 500 & 4,149 / 4,228 / 4,363 & 388 (8.89\%) & 765 (17.53\%) \\ \hline
**Chemical** & & & & & \\ BC5CDR (C) (Li et al., 2016) & 500 / 500 / 500 & 5,148 / 5,298 / 5,334 & 1,038 (19.46\%) & 415 (7.78\%) \\ NLM-Chem \(\dagger\) (Islamaj et al., 2022) & 80 / 20 / 50 & 20,796 / 5,234 / 11,514 & 3,908 (33.94\%) & 1,534 (13.32\%) \\ \hline
**Cell line** & & & & & \\ BioID \(\ddagger\) (Arighi et al., 2017) & 231 / 59 / 60 & 3,815 / 1,096 / 864 & 158 (18.29\%) & 45 (5.21\%) \\ \hline
**Species** & & & & & \\ Linnaeus \(\dagger\) (Martin et al., 2010) & 47 / 17 / 31 & 2,115 / 705 / 1,430 & 385 (26.92\%) & 58 (4.06\%) \\
S800 (Pafilis et al., 2013) & 437 / 63 / 125 & 2,557 / 384 / 767 & 363 (47.33\%) & 107 (13.95\%) \\ \hline
**Gene** & & & & & \\ GNornPlus (Wei et al., 2015) & 279 / 137 / 254 & 3,015 / 1,203 / 3,222 & 2,822 (87.59\%) & 163 (5.06\%) \\ NLM-Gene (Islamaj et al., 2021) & 400 / 50 / 100 & 11,263 / 1,371 / 2,729 & 1,215 (44.52\%) & 353 (12.94\%) \\ \hline
**Variant** & & & & & \\ SNP (Thomas et al., 2011) & - / - / 292 & - / - / 517 & - & - \\
Osiris v1.2 (Furlong et al., 2008) & - / - / 57 & - / - / 261 & - & - \\ tmVar v3 (Wei et al., 2022) & - / - / 214 & - / - / 1,018 & - & - \\ \hline
**UMLS** & & & & & \\ MedMentions (Mohan and Li, 2019) & 2,635 / 878 / 879 & 122,178 / 40,864 / 40,143 & 8,167 (20.34\%) & 7,945 (19.79\%) \\ \hline \end{tabular}
\end{table}
Table 1: Overview of the corpora available in BELB with their primary characteristics: number of documents, annotations and how many of them are zero-shot (unseen entities) or stratified (seen entity but unseen name). \(\dagger\) Full text \(\ddagger\) Figure captions
\begin{table}
\begin{tabular}{l|c|c|c|c|c} & Version & History & Entities & Names & Synonyms & Homonyms (PN) \\ \hline
**Disease** & & & & & & \\ CTD Diseases (Davis et al., 2023) & monthly \(\dagger\) & ✗ & 13,188 & 88,548 & 6.71 & 0.39\% (-) \\ \hline
**Chemical** & & & & & \\ CTD Chemicals (Davis et al., 2023) & monthly \(\dagger\) & ✗ & 175,663 & 451,410 & 2.56 & - (-) \\ \hline
**Cell line** & & & & & \\ Cellosaurus (Bairoch, 2018) & - & ✓ & 144,568 & 251,747 & 1.74 & 3.21\% (1.22\%) \\ \hline
**Species** & & & & & \\ NCBI Taxonomy (Scott, 2012) & - & ✓ & 2,491,364 & 3,783,882 & 1.51 & 0.04\% (-) \\ \hline
**Gene** & & & & & \\ NCBI Gene (Brown et al., 2015) & - & ✓ & 42,252,923 & 105,570,090 & 2.49 & 47.37\% (8.32\%) \\ GNornPlus subset & & & 703,858 & 2,455,772 & 3.48 & 50.79\% (9.13\%) \\ NLM-Gene subset & & & 873,015 & 2,913,456 & 3.33 & 53.61\% (9.55\%) \\ \hline
**Variant** & & & & & \\ dbSNP (Sherry et al., 2001) & build 156 \(\dagger\) & ✓ & 1,053,854,063 & 3,119,027,235 & 2.95 & 1,557,105,418 (49.92\%) \\ \hline
**UMLS** & & & & & \\ UMLS (Bodenreider, 2004) & 2017AA (full) & - & 3,464,809 & 7,938,833 & 2.29 & 2.07\% (0.16\%) \\ \hline \end{tabular}
\end{table}
Table 2: Overview of the KBs available in BELB according to their entity type. We report the number of entities, synonyms per entity, homonyms and how many of them are the primary name (PN). \(\dagger\) No archive of previous versions is provided
3, "WSS" is the abbreviated form of two syndromes and it appears twice in CTD Diseases. Another example are genes present in multiple species, as in (c), where the string "rats" is essential for correct normalization, as "\(\alpha\)2microglobulin" could refer either to the human or rat gene. Identifying contextual information can be non-trivial, e.g (c) is the title of a publication, but the text may describe general characteristics of "\(\alpha\)2microglobulin" introducing textual cues pointing to the human gene. Additionally, this information is not always explicitly expressed and may emerge via other patterns. In example (e) "PC12" denotes a human cell line, whereas in (f) it refers to the rat one. This can be inferred from the capitalized gene mention "SGK1" in (e) which conventionally denotes human genes. By introducing entity types such as genes and variants, BELB allows to probe a model's ability to (i) identify contextual information and (ii) handle highly ambiguous search spaces (KB).
#### 2.1.4 Scale
As mentioned in Section 2.1.1, studies on neural methods have primarily targeted UMLS, which, as shown in Table 2, is one and three orders of magnitude smaller than NCBI Gene and dbSNP, respectively. With its unified format, BELB allows to easily test how implementations scale to these large KBs.
#### 2.1.5 Synchronization of KB versions
Entity linking is dynamic by nature: over the years entities in KBs are replaced or become obsolete. For instance, in GNormPlus mentions of "MDS1" are linked to NCBI Gene entity "4197", which was subsequently replaced by "2122". As several KBs do not have a versioning system (see Table 2), it is often not possible to retrieve the exact KB used to create a corpus. Failing to account for these changes may introduce a notable amount of noise in measuring performance of high quality systems. BELB offers access to the KB _history_ if available, i.e. a table tracking all changes in the entities. In our preprocessing we update all corpora with the KB version at hand and remove mentions linked to obsolete entities. This allows as well to update the predictions of systems shipping with a pre-processed (i.e. non-trivially swappable) KB on corpora created after their release, allowing for fair comparison _over time_.
### Evaluated approaches
We use BELB to perform an extensive evaluation of rule-based and neural methods. We now present the selected approaches for evaluation. We stress that we do not re-implement any method (we rely on the original code).
#### 2.2.1 Rule-based entity-specific systems
We compare the performance of neural models linking to KBs for which specialized systems have been developed, as these are still the de facto standard in BEL (Wei et al., 2019; Mujeen et al., 2022). Specifically we test the following rule-based methods: **GNormPlus**(Wei et al., 2015) for genes (NCBI Gene), **SR4GN**(Wei et al., 2012) for species (NCBI Taxonomy) and **tmVar v3**(Wei et al., 2022) for variants (dbSNP). For UMLS we employ **SciSpacy**(Neumann et al., 2019). We label them rule-based entity-specific (RBES) as for linking they rely on a mixture of string matching approaches and ad-hoc rules tailored to a specific entity type. For diseases and chemicals, we include in the RBES category two systems which are only _partly_ rule-based (stretching our definition), as they better represent the state-of-the-art of disease/chemical-specific models. We use **TaggerOne**(Leaman and Lu, 2016), a semi-Markov model, for diseases, and opt for the system that won the BioCreative VII NLM-Chem track (Almeida et al., 2022) for chemicals (**BC7T2W**), which uses both string matching and neural embeddings. To the best of our knowledge there exists no linking approach specific for cell lines. We therefore use a fuzzy string matching approach based on Levenshtein distance (**FuzzySearch**). For detailed descriptions and information on specific implementations we refer the reader to Appendix G.1. All of these systems do not require re-training as either (i) their normalization component is completely rule-based (SR4GN, tmVar, SciSpacy) or (ii) models trained on the BELB corpora are provided along with the code (GNormPlus, TaggerOne, BC7T2W).
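As the text notes, FuzzySearch is a plain string-similarity baseline. The sketch below shows the underlying idea: link each mention to the KB synonym with the smallest Levenshtein distance. The distance routine is a textbook dynamic-programming implementation; the actual baseline shipped with BELB may use a dedicated library and additional heuristics.

```python
# Fuzzy-matching baseline sketch: nearest KB synonym by Levenshtein distance.
def levenshtein(a: str, b: str) -> int:
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                # deletion
                            curr[j - 1] + 1,            # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]

def fuzzy_link(mention: str, kb: dict) -> str:
    """kb maps synonym -> entity id; return the entity of the closest synonym."""
    best = min(kb, key=lambda syn: levenshtein(mention.lower(), syn.lower()))
    return kb[best]
```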
#### 2.2.2 Neural systems
We train the following neural models on the train split of each BELB corpus (see Appendix G.2 for training details).
**BioSyn**(Sung et al., 2020) is a dual encoder architecture. Importantly, BioSyn does not account for context, i.e. it uses only entity mentions. The model is trained via "synonym marginalization": it learns to maximize the similarity (inner product) between a mention embedding and all the synonyms embeddings of the gold entity. At inference it retrieves the synonyms most similar to the given test mention, i.e. it relies on a lookup from synonym to entity. We prefer BioSyn over SapBERT (Liu et al., 2021) as the latter is primarily a pre-training strategy. **GenBioEL**(Yuan et al., 2022) is an encoder-decoder model. As input it takes a text with a single mention marked with special tokens. The system is then trained to generate a synonym. At inference it ensures that the prediction is a valid KB synonym by constraining the generation process with a prefix-tree created from all KB synonyms. Similar to BioSyn, this approach represents
\begin{table}
\begin{tabular}{c|l|l} & Example & Entity \\ \hline a) & Features of ARCL type II overlap with those of Wrinkly skin syndrome (**WSS**) & MeSH:C536750 \\ b) & Weaver-Smith syndrome (**WSS**) is a Mendelian disorder of the epigenetic machinery & MeSH:C536687 \\ \hline c) & **\(\alpha\)2microglobulin** exacerbates brain damage after stroke in rats. & NCBI Gene:24153 \\ d) & The T6T cell line produced the proteinase inhibitor **\(\alpha\)2microglobulin**. & NCBI Gene:2 \\ e) & We identified the novel mutation **c.908G\(>\)A** within exon 8 of the _CTSS_ gene. & rs756250449 \\ f) & The patient was compound heterozygous of the **c.908G\(>\)A** mutation in the SLC17A5 gene. & rs1057516601 \\ g) & The GSK650394 inhibitor is used to suppress SGK1 expression in **PC12 cells**. & CVCL\_8979 \\ h) & Effects of topography on _rat_ pheochromocytoma cell, **PC12 cells**, neuritogenesis. & CVCL\_0481 \\ \end{tabular}
\end{table}
Table 3: Example of homonym mentions (in **bold**) requiring specific contextual information (underlined) for linking.
KB entities by their synonyms. The authors propose as well "KB-Guided Pre-training", i.e. a method based on prompting to pre-train GenBioEL on the KB, which we however do not deploy. This is because (i) it would introduce an advantage over other neural methods and (ii) it is too computationally expensive to run for each KB.
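To make the synonym-as-proxy idea concrete, the sketch below shows a simplified version of BioSyn's synonym-marginalization objective: the mention embedding is scored against candidate synonym embeddings, and the probability mass assigned to any synonym of the gold entity is maximized. Tensor names and the candidate-selection step are simplifications of the original implementation.

```python
# Simplified synonym-marginalization loss (BioSyn-style); assumptions:
# mention_emb is (1, d), candidate_embs is (k, d) from a shared encoder,
# is_gold is a (k,) boolean mask flagging synonyms of the gold entity.
import torch

def marginal_nll(mention_emb, candidate_embs, is_gold):
    scores = mention_emb @ candidate_embs.T       # inner-product similarity
    probs = torch.softmax(scores, dim=-1)         # over the k candidates
    marginal = (probs * is_gold.float()).sum()    # mass on any gold synonym
    return -torch.log(marginal.clamp_min(1e-9))   # minimize negative log-marginal
```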
**arboEL** (Agarwal et al., 2022) is a dual encoder as well. The authors propose to construct k-nearest neighbor graphs over mention and entity clusters. Using a pruning algorithm they then generate directed minimum spanning trees rooted at entity nodes, and use the edges as positive examples for training. At inference they use the entity present in the mention's cluster. Notably, arboEL learns _entity embeddings_, encoded as a concatenated list of synonyms. The authors use as well a cross-encoder (Wu et al., 2020), i.e. using the top-64 entities retrieved by a trained arboEL as hard negatives (training) and as linking candidates (inference) for a second reranking model. In our experiments we do not make use of this extension as it is not strictly part of the arboEL algorithm.
### Evaluation protocol
We now describe in detail the evaluation protocol which we followed in our experiments. For all systems we report the _mention-level_ recall@1 (accuracy), since RBES approaches generate only a single candidate.
#### 2.3.1 Synonym as entity proxy
Approaches using strings as proxies for entities (BioSyn, GenBioEL) cannot meaningfully resolve ambiguous mentions. That is, for a mention of rat "\(\alpha\)2microglobulin", they would return a list containing _both_ NCBI Gene "2" (human) and "24153" (rat). Sung et al. (2020) introduced a _lenient_ evaluation, which considers a prediction correct if any of the returned entities matches the gold one. As reported by Zhang et al. (2022), this largely overestimates performance. Following their suggestion, we opt for a _standard_ evaluation, which randomly samples one prediction from the returned list. However, as one of the aims of BEL is direct deployment in extraction pipelines, e.g. for constructing gene networks (Lehmann et al., 2015), we also include a _strict_ evaluation in which all such cases, i.e. multiple predictions, are considered incorrect.
#### 2.3.2 Disentangling recognition and linking
Some RBES systems (GNormPlus, SR4GN, TaggerOne, tmVar) perform entity recognition and linking jointly (see Appendix G.1). Due to false negatives in the NER step we cannot obtain their results on the full test set. To ensure that we are measuring the performance on the same instances for all methods, for corpora whose reference RBES system is a joint approach, we use only the test mentions which are correctly identified during entity recognition. For instance, for NLM-Gene we use only 73% of the test mentions, i.e. those correctly recognized by GNormPlus (see Table 13 for other corpora). As correct recognition correlates with correct normalization, our evaluation protocol probably introduces a bias towards RBES systems (see Section 4).
#### 2.3.3 Multiple gold entities
Mentions in biomedical corpora can be annotated with multiple normalizations. Common instances are _composite mentions_, e.g. "breast and squamous cell neoplasms", and ambiguous ones, e.g. "Toll-like receptor" ("Toll-like receptor 2", "4" and "9"). Whether these cases are logical AND or OR is not always specified in the annotation guidelines. We opt for the more lenient OR interpretation and consider a prediction correct if it matches one of the gold entities.
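Under the OR interpretation, scoring a single prediction against multiple gold entities reduces to a membership test (identifiers below are illustrative):

```python
def correct_multi_gold(prediction: str, gold: set[str]) -> bool:
    # OR interpretation: matching any one of the gold entities suffices
    return prediction in gold

print(correct_multi_gold("TLR4", {"TLR2", "TLR4", "TLR9"}))  # True
```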
## 3 Results
Table 4 reports the results of the neural models and of the entity-specific models grouped under the RBES category. For results with strict evaluation (Section 2.3.1) and on the full test sets (Section 2.3.2), see Tables 14 and 15, respectively. We observe that the performance of neural models varies significantly across entity types, with disease and gene corpora incurring the most significant drops.
**Homonyms** Besides the implicit bias towards RBES approaches (see Section 2.3.2), we hypothesize that an important factor at play is homonyms. RBES systems use ad-hoc components to handle these challenging cases. For instance, GNormPlus directly integrates Ab3P (Sohn et al., 2008), a widely adopted abbreviation resolution tool, as well as SR4GN, which is specifically developed for cross-species gene normalization. Neural models lack these components, and synonym-based approaches are significantly impacted by random selection in the case of homonyms. In Table 5 we show that resolving abbreviations with Ab3P yields a notable improvement in performance for diseases. Similarly, under a lenient evaluation (see Section 2.3.1), GenBioEL is almost on par with GNormPlus on genes. In contrast, abbreviation resolution
\begin{table}
\begin{tabular}{l|l|l|l|l} Entity type & & & & \\ Corpus & RBES & BioSyn & GenBioEL & arboEL \\ \hline
**Disease** & 0.94 & 0.87 & 0.90 & 0.87 \\ NCBI Disease & 0.94 & 0.83 & 0.89 & 0.86 \\ BC5CDR (D) & 0.94 & 0.88 & 0.92 & 0.88 \\ \hline
**Chemical** & 0.72 & 0.72 & 0.81 & 0.77 \\ BC5CDR (C) & 0.82 & 0.85 & 0.95 & 0.88 \\ NLM-Chem & 0.67 & 0.67 & 0.75 & 0.72 \\ \hline
**Cell line** & & & & \\ BioID & 0.82 & 0.82 & 0.96 & 0.95 \\ \hline
**Species** & 0.97 & 0.91 & 0.85 & 0.76 \\ Linnaeus & 0.99 & 0.93 & 0.81 & 0.74 \\ S800 & 0.93 & 0.88 & 0.93 & 0.79 \\ \hline
**Gene** & 0.82 & - & 0.17 & 0.30 \\ GNormPlus & 0.87 & OOM & 0.21 & 0.36 \\ NLM-Gene & 0.76 & OOM & 0.13 & 0.23 \\ \hline
**Variant** & 0.91 & - & - & - \\ SNP\(\dagger\) & 0.94 & - & - & - \\ Osiris v1.2 & 0.91 & - & - & - \\ tmVar v3 & 0.88 & - & - & - \\ \hline
**UMLS** & & & & \\ MedMentions & 0.58 & OOM & 0.57 & 0.69 \\ \end{tabular}
\end{table}
Table 4: Performance of all baselines on BELB (test set). All scores are _mention-level_ recall@1. OOM: out-of-memory (>200GB)
has no impact on arboEL. We argue that this is because arboEL uses entity embeddings, which benefit less from long-form mentions. Secondly, as entity embeddings require learning a compressed entity representation, arboEL is affected by the limited size of the corpora. This is supported by the results on MedMentions, which is one order of magnitude larger than the other BELB corpora, and where arboEL is confirmed as the state-of-the-art approach.
**Unseen entities and synonyms** In Table 6 we see that neural approaches are outperformed by RBES systems on mentions of unseen entities, while the opposite happens with unseen synonyms of training entities. This can be explained by the fact that string-matching approaches have direct access to the KB and are therefore better suited to zero-shot cases. If training data is available, neural representations are superior instead, as they can leverage representations learned from context.
**Scale** Neural model implementations fail to scale to large KBs such as NCBI Gene or dbSNP. In our experiments we resorted to using the NCBI Gene subset determined by the species of the entities found in the gene corpora (see Table 2). This reflects a common real-world use case, since often only a specific subset of species is relevant for linking (e.g. human and mouse). For dbSNP we are not aware of a valid criterion to subset it, and we are therefore unable to run neural models on variant corpora.
**Synchronization of KB version** Corpora are only sparsely affected by changes in entities. However, if these changes are not handled properly, then on BC5CDR (C) and Linnaeus (the most affected corpora in BELB), even a perfect system would register an error rate of 4.56% and 3.57%, respectively (see Appendix F).
## 4 Discussion
We strived to include in BELB as many corpora and KBs as possible, prioritizing those most commonly used by the community. We leave as future work the expansion to other important research directions, such as applications to clinical notes (Luo et al., 2020) and to other languages, such as Spanish (Miranda-Escalada et al., 2022) and German (Kittner et al., 2021).
Our evaluation showed that neural approaches fail to perform consistently across all BELB instances, especially on genes, where RBES approaches are still far superior. However, as reported in Section 2.3.2, our evaluation protocol introduces a bias towards RBES systems by considering exclusively the test mentions they correctly identify. Nonetheless, we believe that this is the best available approximation to compare results across all methods. We note as well the lack of hyperparameter exploration in neural models: due to the high computational resources required, we rely on the defaults reported by the authors, so it is likely that optimizing them would result in better numbers. Further improvements may be possible by pre-training on the KB (Liu et al., 2021; Yuan et al., 2022) and by refining candidates with a cross-encoder (Agarwal et al., 2022). RBES systems are advantaged by the use of ad-hoc components to handle homonyms. In Table 5 we show that introducing similar approaches for neural models could significantly improve their performance. However, as the neural paradigm is based on learning task-related capabilities from data (Bengio et al., 2013), we believe that future studies should nevertheless continue to investigate entity-agnostic models, rather than falling back on custom-made hand-crafted heuristics.
## 5 Conclusion
We presented BELB, a benchmark to standardize experimental setups for biomedical entity linking. We conducted an extensive evaluation of rule-based entity-specific systems and recent neural approaches. We find that the former are still the state of the art on entity types under-explored by neural approaches, namely genes and variants. We hope that BELB will encourage future studies to compare approaches on a common testbed and to address the current limitations of neural approaches.
## Acknowledgements
Samuele Garda and Robert Martin are supported by the _Deutsche Forschungsgemeinschaft_ as part of the research unit "Beyond the Exome".
|
2306.07821 | Galerkin-like Method for Integro-Differential Inclusions with
applications to Volterra Sweeping processes | In this paper, we develop the Galerkin-like method to address first-order
integro-differential inclusions. Under compactness or monotonicity conditions,
we obtain new results for the existence of solutions for this class of
problems, which generalize existing results in the literature and provide new
insights for differential inclusions with an unbounded right-hand side. The
effectiveness of the proposed approach is illustrated by presenting new
existence results for nonconvex state-dependent Volterra sweeping processes,
where the right-hand side is unbounded, and the classical theory of
differential inclusions is not applicable. This is the first result of its
kind.
The paper concludes with an application to the existence of an optimal
control problem governed by nonconvex state-dependent Volterra sweeping
processes in finite dimensions. | Pedro Pérez-Aros, Manuel Torres-Valdebenito, Emilio Vilches | 2023-06-13T14:53:19Z | http://arxiv.org/abs/2306.07821v2 | Galerkin-like method for integro-differential inclusions with application to state-dependent sweeping processes
###### Abstract
In this paper, we develop the Galerkin-like method to deal with first-order integro-differential inclusions. Under compactness or monotonicity conditions, we obtain new results for the existence of solutions for this class of problems, which generalize existing results in the literature and give new insights for differential inclusions with an unbounded right-hand side. The effectiveness of the proposed approach is illustrated by providing new results for nonconvex state-dependent integro-differential sweeping processes, where the right-hand side is unbounded and the classical theory of differential inclusions is not applicable. It is the first result of this kind. The paper ends with an application to the existence of solutions for an optimal control problem governed by an integro-differential inclusion in finite dimensions.
1
Footnote 1: e-mail: [email protected]
2
Footnote 2: e-mail: [email protected]
3
Footnote 3: e-mail: [email protected]
**Keywords:** Integro-differential inclusions, Galerkin-like method, sweeping process, subsmooth sets, normal cone, measure of non-compactness
**MSC codes:** 34A60, 49J52, 34G25, 49J53
## 1 Introduction
First-order differential inclusions provide a general framework for the study of dynamical processes, where the velocity belongs to a set that depends on time and state. They constitute an instantaneous description of the velocity, for which there is a well-developed theory. We refer to [3, 10] for an overview of the subject. However, this instantaneous description does not cover phenomena where the history of the process affects the description of the system. In this paper, we introduce a class of first-order integro-differential inclusions, where the velocity depends on an integral term, which can be understood as a history operator or as a given description of the acceleration. We develop the Galerkin-like method, introduced in [16], to deal with this class of integro-differential inclusions in separable Hilbert spaces. We show that this class generalizes first-order differential inclusions (see,
e.g., [3, 10]) and is general enough to include the state-dependent sweeping process perturbed by an integral term, which has applications in electrical circuits, contact mechanics and crowd motion, among other areas (see, e.g., [5, 6]). Moreover, we provide an existence result for an optimal control problem involving the aforementioned class of integro-differential inclusions in finite dimensional spaces.
Let \(\mathcal{H}\) be a separable Hilbert space and \(I=[0,T]\) a nonempty interval. The first part of this paper aims to study the following integro-differential inclusion:
\[\begin{cases}\dot{x}(t)\in F(t,x(t))+\int_{0}^{t}g(t,s,x(s))ds\quad\text{ a.e. }t\in[0,T],\\ x(0)=x_{0}.\end{cases} \tag{1}\]
To prove the existence of solutions for (1), following the ideas from [16], we approach the latter problem by projecting the state, but not the velocity, into a finite-dimensional Hilbert space. Indeed, for each \(n\in\mathbb{N}\) we approach (1) by the following integro-differential inclusion:
\[\begin{cases}\dot{x}(t)\in F(t,P_{n}(x(t)))+\int_{0}^{t}g(t,s,P_{n}(x(s)))ds \quad\text{ a.e }t\in[0,T],\\ x(0)=P_{n}(x_{0}),\end{cases}\]
where, given an orthonormal basis \((e_{n})_{n\in\mathbb{N}}\) of \(\mathcal{H}\), \(P_{n}\) is the projector from \(\mathcal{H}\) onto the linear span of \(\{e_{1},\ldots,e_{n}\}\). By means of a fixed point argument, we prove an _Approximation Principle_, which states that, without any compactness or monotonicity conditions, the above problem has a solution (see Theorem 4.1). Hence, the Approximation Principle provides a starting point for the approximation of differential inclusions in infinite-dimensional spaces. Once the existence of approximate solutions has been established, the idea is to pass to the limit in the differential inclusion. To do this, we provide a Compactness Principle (see Theorem 4.2), which establishes that whenever the trajectories are relatively compact, it is possible to ensure the existence of solutions. The Compactness Principle is used later to establish the existence of solutions for state-dependent sweeping processes (see Section 5). Finally, we show that our approach allows us to recover classical results on differential inclusions under compactness and monotonicity assumptions (see Subsections 4.1 and 4.2).
The method described above is called the _Galerkin-like method_ and it was introduced in [16]. Hence, our results extend those obtained in [16]. Moreover, under compactness or monotonicity conditions, we obtain new results for the existence of solutions for this class of problems, which generalize existing results in the literature and provide new insights for differential inclusions with an unbounded right-hand side (see Theorems 4.3 and 4.4, respectively).
The second part of this paper aims to study state-dependent sweeping processes perturbed by an integral term. The sweeping process is a first-order differential inclusion involving normal cones to moving sets. It was introduced by J.J. Moreau in a series of papers to model a problem in elasto-plasticity (see [19, 20, 21, 22]). Sweeping processes with integral terms have been considered only recently. We can mention [7], where the authors model the movement of sticky particles via sweeping processes with an integral term. Then, in [9], the authors use the Moreau-Yosida regularization to obtain the existence of solutions. Finally, in [6], the authors prove the existence of solutions for sweeping processes with an integral term through a numerical method. We refer to [15, 16, 17, 23, 25, 26] for existence results without an integral term. In this part, under mild assumptions, we use the results of the first part of the paper and the reduction technique proposed in [13] to obtain the existence of solutions.
The third part of this paper studies the existence of solutions for an optimal control problem governed by an integro-differential inclusion where the control acts in the integral term.
The paper is organized as follows. After some mathematical preliminaries, in Section 3, we gather the hypotheses used in the paper and provide a technical lemma used to prove the main
result. Then, in Section 4, through a fixed point argument, we prove the existence of solutions for the integro-differential inclusion (1). Section 5 establishes the well-posedness of state-dependent sweeping processes with integral perturbation. Finally, Section 6 shows the existence of solutions for a related optimal control problem governed by integro-differential inclusions.
## 2 Mathematical preliminaries
From now on, \(\mathcal{H}\) denotes a separable Hilbert space whose norm is denoted by \(\|\cdot\|\). The closed unit ball is denoted by \(\mathbb{B}\). The notation \(\mathcal{H}_{w}\) stands for \(\mathcal{H}\) equipped with the weak topology, and \(x_{n}\rightharpoonup x\) denotes the weak convergence of a sequence \((x_{n})_{n}\) to \(x\).
### Elements of differential inclusions
We denote by \(L^{1}\left([0,T];\mathcal{H}\right)\) the space of \(\mathcal{H}\)-valued Lebesgue integrable functions defined over the interval \([0,T]\). We write \(L^{1}_{w}\left([0,T];\mathcal{H}\right)\) to mean the space \(L^{1}\left([0,T];\mathcal{H}\right)\) endowed with the weak topology. Moreover, we say that \(u\in\mathrm{AC}\left([0,T];\mathcal{H}\right)\) if there exists \(f\in L^{1}\left([0,T];\mathcal{H}\right)\) and \(u_{0}\in\mathcal{H}\) such that \(u(t)=u_{0}+\int_{0}^{t}f(s)ds\) for all \(t\in[0,T]\).
The following result is of intrinsic interest and provides a strengthened version of the classical Gronwall lemma.
**Lemma 2.1**.: _Let \(\alpha,\beta\) be integrable functions on \([0,T]\) and \(\gamma\) be an integrable function on \([0,T]\times[0,T]\). Let \(u\colon[0,T]\to\mathbb{R}\) be a nonnegative continuous function such that for all \(t\in[0,T]\)_
\[u(t)\leq u(0)+\int_{0}^{t}\alpha(s)ds+\int_{0}^{t}\beta(s)u(s)ds+\int_{0}^{t} \int_{0}^{s}\gamma(s,\tau)u(\tau)d\tau ds.\]
_Then,_
\[u(t)\leq u(0)\mathfrak{e}(t,0)+\int_{0}^{t}\alpha(s)\mathfrak{e}(t,s)ds\text{ for all }t\in[0,T],\]
_where \(\mathfrak{e}(t_{2},t_{1}):=\exp\left(\int_{t_{1}}^{t_{2}}\left(\beta(s)+\int_{0}^{s} \gamma(s,\tau)d\tau\right)ds\right)\) for \(t_{1},t_{2}\in[0,T]\)._
Proof.: Let us consider the following non-decreasing function
\[\vartheta(t):=u(0)+\int_{0}^{t}\alpha(s)ds+\int_{0}^{t}\beta(s)u(s)ds+\int_{0 }^{t}\int_{0}^{s}\gamma(s,\tau)u(\tau)d\tau ds.\]
Then, for a.e. \(t\in[0,T]\)
\[\dot{\vartheta}(t)=\alpha(t)+\beta(t)u(t)+\int_{0}^{t}\gamma(t,s)u(s)ds\leq \alpha(t)+(\beta(t)+\int_{0}^{t}\gamma(t,s)ds)\vartheta(t),\]
which implies that
\[u(t)\leq\vartheta(t)\leq\vartheta(0)\mathfrak{e}(t,0)+\int_{0}^{t}\alpha(s) \mathfrak{e}(t,s)ds.\]
The result follows by noting that \(\vartheta(0)=u(0)\).
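As a sanity check, taking \(\gamma\equiv 0\) in Lemma 2.1 recovers the classical Gronwall-Bellman estimate: in that case \(\mathfrak{e}(t_{2},t_{1})=\exp\left(\int_{t_{1}}^{t_{2}}\beta(s)ds\right)\) and the conclusion reads
\[u(t)\leq u(0)\exp\left(\int_{0}^{t}\beta(s)ds\right)+\int_{0}^{t}\alpha(s)\exp\left(\int_{s}^{t}\beta(\tau)d\tau\right)ds\quad\text{ for all }t\in[0,T].\]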
The following lemma, proved in [17], is a compactness criterion for absolutely continuous functions.
**Lemma 2.2**.: _Let \((x_{n})_{n}\) be a sequence of absolutely continuous functions from \([0,T]\) into \(\mathcal{H}\). Assume that for all \(n\in\mathbb{N}\)_
\[\left\|\dot{x}_{n}(t)\right\|\leq\psi(t)\quad\text{ a.e }t\in[0,T], \tag{2}\]
_where \(\psi\in L^{1}([0,T];\mathbb{R})\) and that \(x_{n}(0)\to x_{0}\) as \(n\to+\infty\). Then, there exists a subsequence \((x_{n_{k}})_{k}\) of \((x_{n})_{n}\) and an absolutely continuous function \(x\) such that_
1. \(x_{n_{k}}(t)\rightharpoonup x(t)\) _in_ \(\mathcal{H}\) _as_ \(k\to+\infty\) _for all_ \(t\in[0,T]\)_;_
2. \(x_{n_{k}}\rightharpoonup x\) _in_ \(L^{1}\left([0,T];\mathcal{H}\right)\) _as_ \(k\to+\infty\)_;_
3. \(\dot{x}_{n_{k}}\rightharpoonup\dot{x}\) _in_ \(L^{1}\left([0,T];\mathcal{H}\right)\) _as_ \(k\to+\infty\)_;_
4. \(\left\|\dot{x}(t)\right\|\leq\psi(t)\) _a.e._ \(t\in[0,T]\)_._
### Elements of nonlinear analysis
Let \(\left(e_{n}\right)_{n\in\mathbb{N}}\) be an orthonormal basis of \(\mathcal{H}\). For every \(n\in\mathbb{N}\) we consider the linear operator \(P_{n}\) from \(\mathcal{H}\) into \(\operatorname{span}\left\{e_{1},\ldots,e_{n}\right\}\) defined as
\[P_{n}\left(\sum_{k=1}^{\infty}\left\langle x,e_{k}\right\rangle e_{k}\right)= \sum_{k=1}^{n}\left\langle x,e_{k}\right\rangle e_{k}. \tag{3}\]
The following lemma summarizes the main properties of the linear operator \(P_{n}\) (we refer to [16] for its proof).
**Lemma 2.3**.: _Let \(\left(e_{n}\right)_{n\in\mathbb{N}}\) be an orthonormal basis of a separable Hilbert space \(\mathcal{H}\). Then,_
1. \(\left\|P_{n}(x)\right\|\leq\left\|x\right\|\) _for all_ \(x\in\mathcal{H}\)_;_
2. \(\left\langle P_{n}(x),x-P_{n}(x)\right\rangle=0\) _for all_ \(x\in\mathcal{H}\)_;_
3. \(P_{n}(x)\to x\) _as_ \(n\to+\infty\) _for all_ \(x\in\mathcal{H}\)_;_
4. \(P_{\theta(n)}(x_{\theta(n)})\rightharpoonup x\) _for any_ \(\theta\colon\mathbb{N}\to\mathbb{N}\) _increasing and_ \(x_{\theta(n)}\rightharpoonup x\)_;_
5. _if_ \(B\subset\mathcal{H}\) _is relatively compact then_ \(\sup_{x\in B}\left\|x-P_{n}(x)\right\|\to 0\) _as_ \(n\to+\infty\)_._
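For instance, property (ii) follows directly from orthonormality: since \(x-P_{n}(x)=\sum_{k>n}\left\langle x,e_{k}\right\rangle e_{k}\) is orthogonal to \(\operatorname{span}\left\{e_{1},\ldots,e_{n}\right\}\), we have
\[\left\langle P_{n}(x),x-P_{n}(x)\right\rangle=\sum_{k=1}^{n}\sum_{j>n}\left\langle x,e_{k}\right\rangle\left\langle x,e_{j}\right\rangle\left\langle e_{k},e_{j}\right\rangle=0.\]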
Let \(A\) be a bounded subset of \(\mathcal{H}\). We define the _Kuratowski measure of non-compactness of \(A\)_, \(\alpha(A)\), as
\[\alpha(A)=\inf\{d>0\colon A\text{ admits a finite cover by sets of diameter }\leq d\},\]
and the _Hausdorff measure of non-compactness of_ \(A\), \(\beta(A)\), as
\[\beta(A)=\inf\{r>0\colon A\text{ can be covered by finitely many balls of radius }r\}.\]
In Hilbert spaces the relation between these two concepts is given by the inequality:
\[\sqrt{2}\beta(A)\leq\alpha(A)\leq 2\beta(A)\text{ for }A\subset\mathcal{H}\text{ bounded}. \tag{4}\]
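A standard example illustrating both quantities: for the closed unit ball \(\mathbb{B}\) of an infinite-dimensional Hilbert space one has
\[\beta(\mathbb{B})=1\quad\text{ and }\quad\alpha(\mathbb{B})=2,\]
so the upper bound in (4) is attained; in particular, \(\mathbb{B}\) is not relatively compact, in contrast with the finite-dimensional case.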
The following proposition summarizes the main properties of Kuratowski and Hausdorff measures of non-compactness (see [10, Section 9.2]).
**Proposition 2.4**.: _Let \(\mathcal{H}\) be an infinite dimensional Hilbert space and \(B,B_{1},B_{2}\) be bounded subsets of \(\mathcal{H}\). Let \(\gamma\) be either the Kuratowski or the Hausdorff measures of non-compactness. Then,_
1. \(\gamma(B)=0\) _if and only if_ \(\overline{B}\) _is compact;_
2. \(\gamma(\lambda B)=|\lambda|\gamma(B)\) _for every_ \(\lambda\in\mathbb{R}\)_;_
3. \(\gamma(B_{1}+B_{2})\leq\gamma(B_{1})+\gamma(B_{2})\)_;_
4. \(B_{1}\subset B_{2}\) _implies_ \(\gamma(B_{1})\leq\gamma(B_{2})\)_;_
5. \(\gamma(\operatorname{conv}B)=\gamma(B)\)_;_
6. \(\gamma(\bar{B})=\gamma(B)\)_._
The following lemma (see [10, Proposition 9.3]) is a useful rule for the interchange of \(\gamma\) and integration.
**Lemma 2.5**.: _Let \((v_{n})\) be a sequence of measurable functions \(v_{n}\colon[0,T]\to\mathcal{H}\) with \(\sup_{n}\|v_{n}(t)\|\leq\psi(t)\) a.e. \(t\in[0,T]\), where \(\psi\) is integrable. Then_
\[\gamma\left(\left\{\int_{t}^{t+h}v_{n}(s)ds\colon n\in\mathbb{N}\right\}\right)\leq\int_{t}^{t+h}\gamma\left(\{v_{n}(s)\colon n\in\mathbb{N}\}\right)ds,\]
_for \(0\leq t<t+h\leq T\)._
The following result is due to Gohberg, Goldenstein and Markus (see, e.g., [18, Theorem 3.9]).
**Lemma 2.6**.: _Let \(\mathcal{H}\) be a separable Hilbert space and \(A\) be a bounded subset of \(\mathcal{H}\). Then_
\[\frac{1}{a}\inf_{n}\sup_{x\in A}\|(I-P_{n})(x)\|\leq\beta(A)\leq\inf_{n}\sup_{ x\in A}\|(I-P_{n})(x)\|,\]
_where \(P_{n}\) is the projector defined in (3) and \(a=\limsup_{n\to+\infty}\|I-P_{n}\|\) is a constant._
**Lemma 2.7**.: _Let \((x_{n})\) be a bounded sequence in a separable Hilbert space \(\mathcal{H}\). Then, there exists \(C>0\) such that_
\[\gamma(\{P_{n}x_{n}\colon n\in\mathbb{N}\})\leq C\gamma(\{x_{n}\colon n\in \mathbb{N}\}), \tag{5}\]
_where \(\gamma\) is either the Kuratowski or the Hausdorff measure of non-compactness._
Proof.: According to (4), it is enough to prove inequality (5) for the Hausdorff measure of non-compactness \(\beta\). Indeed, due to Lemma 2.6, it follows that
\[\beta(\{P_{k}x_{k}\colon k\in\mathbb{N}\}) \leq\inf_{n}\sup_{k\in\mathbb{N}}\|(I-P_{n})(P_{k}x_{k})\|\] \[\leq\inf_{n}\sup_{k\in\mathbb{N}}\|P_{k}(I-P_{n})(x_{k})\|\] \[\leq\inf_{n}\sup_{k\in\mathbb{N}}\|(I-P_{n})(x_{k})\|\] \[\leq a\beta(\{x_{k}\colon k\in\mathbb{N}\}),\]
where \(a\) is the constant given by Lemma 2.6.
### Tools from Variational Analysis
A vector \(h\in\mathcal{H}\) belongs to the _Clarke tangent cone_\(T(S;x)\) when for every sequence \((x_{n})_{n}\) in \(S\) converging to \(x\) and every sequence of positive numbers \((t_{n})_{n}\) converging to \(0\), there exists some sequence \((h_{n})_{n}\) in \(\mathcal{H}\) converging to \(h\) such that \(x_{n}+t_{n}h_{n}\in S\) for all \(n\in\mathbb{N}\). This cone is closed and convex and its negative polar \(N(S;x)\) is the _Clarke normal cone_ to \(S\) at \(x\in S\), that is,
\[N\left(S;x\right):=\left\{v\in\mathcal{H}\colon\left\langle v,h\right\rangle \leq 0\quad\forall h\in T(S;x)\right\}.\]
As usual, \(N(S;x)=\emptyset\) if \(x\notin S\). Through that normal cone, the Clarke subdifferential of a function \(f\colon\mathcal{H}\to\mathbb{R}\cup\{+\infty\}\) is defined by
\[\partial f(x):=\left\{v\in\mathcal{H}\colon(v,-1)\in N\left(\operatorname{epi}f,(x,f(x))\right)\right\},\]
where \(\operatorname{epi}f:=\{(x,r)\in\mathcal{H}\times\mathbb{R}\colon f(x)\leq r\}\) is the epigraph of \(f\). When the function \(f\) is finite and locally Lipschitzian around \(x\), the Clarke subdifferential is characterized (see [8]) in the following simple and amenable way
\[\partial f(x)=\left\{v\in\mathcal{H}\colon\,\langle v,h\rangle\leq f^{\circ}( x;h)\text{ for all }h\in\mathcal{H}\right\},\]
where
\[f^{\circ}(x;h):=\limsup_{(t,y)\to(0^{+},x)}t^{-1}\left[f(y+th)-f(y)\right],\]
is the _generalized directional derivative_ of the locally Lipschitzian function \(f\) at \(x\) in the direction \(h\in\mathcal{H}\). The function \(f^{\circ}(x;\cdot)\) is in fact the support function of \(\partial f(x)\). That characterization easily yields that the Clarke subdifferential of any locally Lipschitzian function has the important property of upper semicontinuity from \(\mathcal{H}\) into \(\mathcal{H}_{w}\).
For \(x\in\mathcal{H}\) and \(S\subset\mathcal{H}\) the distance function to the set \(S\) at \(x\in\mathcal{H}\) is defined by \(d_{S}(x):=\inf_{y\in S}\|x-y\|\). We denote \(\operatorname{Proj}_{S}(x)\) the set (possibly empty)
\[\operatorname{Proj}_{S}(x):=\left\{y\in S\colon d_{S}(x)=\|x-y\|\right\}.\]
The equality (see, e.g., [8])
\[N\left(S;x\right)=\overline{\mathbb{R}_{+}\partial d_{S}(x)}^{*}\quad\text{ for }x\in S, \tag{6}\]
gives an expression of the Clarke normal cone in terms of the distance function. As usual, it will be convenient to write \(\partial d(x,S)\) in place of \(\partial d\left(\cdot,S\right)(x)\).
To deal with the sweeping processes, we recall the definition of the class of positively \(\alpha\)-far sets, introduced in [13] and widely studied in [15].
**Definition 2.8**.: Let \(\alpha\in]0,1]\) and \(\rho\in]0,+\infty]\). Let \(S\) be a nonempty closed subset of \(\mathcal{H}\) with \(S\neq\mathcal{H}\). We say that the Clarke subdifferential of the distance function \(d(\cdot,S)\) keeps the origin \(\alpha\)-far-off on the open tube around \(S\), \(U_{\rho}(S):=\{x\in\mathcal{H}\colon 0<d(x,S)<\rho\}\), provided
\[0<\alpha\leq\inf_{x\in U_{\rho}(S)}d(0,\partial d(\cdot,S)(x)). \tag{7}\]
Moreover, if \(E\) is a given nonempty set, we say that the family \((S(t))_{t\in E}\) is positively \(\alpha\)-far if every \(S(t)\) satisfies (7) with the same \(\alpha\in]0,1]\) and \(\rho>0\).
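For instance, every nonempty, closed and convex set \(S\subsetneq\mathcal{H}\) satisfies (7) with \(\alpha=1\) and \(\rho=+\infty\): for \(x\notin S\) the metric projection \(\operatorname{Proj}_{S}(x)\) is a singleton and
\[\partial d(\cdot,S)(x)=\left\{\frac{x-\operatorname{Proj}_{S}(x)}{d(x,S)}\right\},\quad\text{ so that }\quad d\left(0,\partial d(\cdot,S)(x)\right)=1.\]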
This notion strictly includes the notion of uniformly subsmooth sets and the notion of uniformly prox-regular sets (see [15]).
**Definition 2.9**.: Let \(S\) be a closed subset of \(\mathcal{H}\). We say that \(S\) is _uniformly subsmooth_, if for every \(\varepsilon>0\) there exists \(\delta>0\), such that
\[\langle x_{1}^{*}-x_{2}^{*},x_{1}-x_{2}\rangle\geq-\varepsilon\|x_{1}-x_{2}\| \tag{8}\]
holds for all \(x_{1},x_{2}\in S\), satisfying \(\|x_{1}-x_{2}\|<\delta\) and all \(x_{i}^{*}\in N\left(S;x_{i}\right)\cap\mathbb{B}\) for \(i=1,2\). Also, if \(E\) is a given nonempty set, we say that the family \(\left(S(t)\right)_{t\in E}\) is _equi-uniformly subsmooth_, if for every \(\varepsilon>0\), there exists \(\delta>0\) such that (8) holds for each \(t\in E\) and all \(x_{1},x_{2}\in S(t)\) satisfying \(\|x_{1}-x_{2}\|<\delta\) and all \(x_{i}^{*}\in N\left(S(t);x_{i}\right)\cap\mathbb{B}\) for \(i=1,2\).
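In particular, every nonempty closed convex set \(S\) is uniformly subsmooth: by the monotonicity of the convex normal cone,
\[\left\langle x_{1}^{*}-x_{2}^{*},x_{1}-x_{2}\right\rangle\geq 0\geq-\varepsilon\|x_{1}-x_{2}\|,\]
so (8) holds for every \(\varepsilon>0\) and any \(\delta>0\).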
It is worth emphasizing that the above definitions include the class of convex and uniformly prox-regular sets, which are common in the study of the sweeping process (see, e.g., [15, 17]).
## 3 Technical assumptions and technical lemma
* \((\mathcal{H}_{1}^{F})\) The set-valued map \(F\colon[0,T]\times\mathcal{H}\rightrightarrows\mathcal{H}\) has nonempty, closed and convex values and, for each \(v\in\mathcal{H}\), \(F(\cdot,v)\) is measurable.
* \((\mathcal{H}_{2}^{F})\) For a.e. \(t\in[0,T]\), \(F(t,\cdot)\) is upper semicontinuous from \(\mathcal{H}\) into \(\mathcal{H}_{w}\).
* \((\mathcal{H}_{3}^{F})\) There exist two integrable functions \(c,d\colon[0,T]\to\mathbb{R}_{+}\) such that \[d\left(0,F(t,v)\right):=\inf\{\|w\|\colon w\in F(t,v)\}\leq c(t)\|v\|+d(t),\] for all \(v\in\mathcal{H}\) and a.e. \(t\in[0,T]\).
We write \((\mathcal{H}^{F})\) for the package of assumptions \((\mathcal{H}_{1}^{F})\)-\((\mathcal{H}_{3}^{F})\); the following two conditions are assumed only where explicitly stated.
* \((\mathcal{H}_{4}^{F})\) For a.e. \(t\in[0,T]\) and every bounded set \(A\subset\mathcal{H}\), \[\gamma(F(t,A))\leq k(t)\gamma(A),\] for some integrable function \(k\colon[0,T]\to\mathbb{R}_{+}\), where \(\gamma=\alpha\) or \(\gamma=\beta\) is either the Kuratowski or the Hausdorff measure of noncompactness.
* \((\mathcal{H}_{5}^{F})\) There exists an integrable function \(\tilde{k}\colon[0,T]\to\mathbb{R}\) such that for a.e. \(t\in[0,T]\) and all \(x_{1},x_{2}\in\mathcal{H}\) \[\langle v_{1}-v_{2},x_{1}-x_{2}\rangle\leq\tilde{k}(t)\|x_{1}-x_{2}\|^{2}\text{ for all }v_{1}\in F(t,x_{1})\text{ and }v_{2}\in F(t,x_{2}).\]
* \((\mathcal{H}^{g})\) The function \(g\colon[0,T]\times[0,T]\times\mathcal{H}\to\mathcal{H}\) satisfies
* \((\mathcal{H}_{1}^{g})\) For each \(v\in\mathcal{H}\), the map \((t,s)\mapsto g(t,s,v)\) is measurable.
* \((\mathcal{H}_{2}^{g})\) For all \(r>0\), there exists an integrable function \(\mu_{r}\colon[0,T]\to\mathbb{R}_{+}\) such that for all \((t,s)\in D\) \[\|g(t,s,x)-g(t,s,y)\|\leq\mu_{r}(t)\|x-y\|\text{ for all }x,y\in r\mathbb{B}.\] Here \(D:=\{(t,s)\in[0,T]\times[0,T]\colon s\leq t\}\).
* \((\mathcal{H}_{3}^{g})\) There exists a nonnegative integrable function \(\sigma\colon D\to\mathbb{R}\) such that \[\|g(t,s,x)\|\leq\sigma(t,s)(1+\|x\|)\text{ for all }(t,s)\in D\text{ and }x\in\mathcal{H}.\]
The following lemma will be used in the proof of Theorem 4.1.
**Lemma 3.1**.: _Assume that \((\mathcal{H}_{1}^{F})\), \((\mathcal{H}_{2}^{F})\) and \((\mathcal{H}_{3}^{F})\) hold and let \(r\colon[0,T]\to\mathbb{R}_{+}\) be a continuous function. Then, the set-valued map \(G\colon[0,T]\times\mathcal{H}\rightrightarrows\mathcal{H}\) defined by_
\[G(t,x):=F(t,p_{r(t)}(x))\cap\left(c(t)\|p_{r(t)}(x)\|+d(t)\right)\mathbb{B} \quad(t,x)\in[0,T]\times\mathcal{H}, \tag{9}\]
_where \(p_{r(t)}(x):=\begin{cases}x&\text{ if }\|x\|\leq r(t),\\ r(t)\frac{x}{\|x\|}&\text{ if }\|x\|>r(t),\end{cases}\) satisfies:_
* \(G(t,x)\) _is nonempty, closed and convex for all_ \((t,x)\in[0,T]\times\mathcal{H}\)_;_
* _for each_ \(x\in\mathcal{H}\)_,_ \(G(\cdot,x)\) _is measurable;_
* _for a.e._ \(t\in[0,T]\)_,_ \(G(t,\cdot)\) _is upper semicontinuous from_ \(\mathcal{H}\) _into_ \(\mathcal{H}_{w}\)_;_
* _for all_ \(x\in\mathcal{H}\) _and a.e._ \(t\in[0,T]\)__ \[\|G(t,x)\|:=\sup\{\|w\|\colon w\in G(t,x)\}\leq c(t)r(t)+d(t).\]
Proof.: (i) is direct. (iii) follows from \((\mathcal{H}_{2}^{F})\) and [2, Theorems 17.23 and 17.25]. Also, due to \((\mathcal{H}_{3}^{F})\), we have
\[\|G(t,x)\| =\sup\{\|w\|\colon w\in G(t,x)\}\] \[\leq c(t)\|p_{r(t)}(x)\|+d(t)\] \[\leq c(t)r(t)+d(t)\]
which proves (iv). Thus, by virtue of (i) and (iv), \(G\) takes weakly compact and convex values. Therefore, (ii) follows from [14, Proposition 2.2.37] and \((\mathcal{H}_{1}^{F})\).
## 4 Galerkin-like method for integro-differential inclusions
In this section, we study the existence of solutions to the following integro-differential inclusion:
\[\begin{cases}\dot{x}(t)\in F(t,x(t))+\int_{0}^{t}g(t,s,x(s))ds\quad\text{ a.e. }t\in[0,T];\\ x(0)=x_{0},\end{cases} \tag{10}\]
where \(F\colon[0,T]\times\mathcal{H}\rightrightarrows\mathcal{H}\) is a set-valued map with nonempty, closed, and convex values and \(g\colon[0,T]\times[0,T]\times\mathcal{H}\rightarrow\mathcal{H}\) is a given function. For every \(n\in\mathbb{N}\) let us consider the following integro-differential inclusion:
\[\begin{cases}\dot{x}(t)\in F(t,P_{n}(x(t)))+\int_{0}^{t}g(t,s,P_{n}(x(s)))ds \quad\text{ a.e. }t\in[0,T];\\ x(0)=P_{n}(x_{0}),\end{cases} \tag{11}\]
where \(P_{n}\colon\mathcal{H}\rightarrow\operatorname{span}\left\{e_{1},\ldots,e_{n}\right\}\) is the linear operator defined in (3) (see Lemma 2.3). The next result asserts the existence of solutions for the approximate problem (11). We will call it the _Approximation Principle_, since it provides a methodology to approximate the solutions of integro-differential inclusions. In Subsections 4.1 and 4.2, respectively, we will see that the Approximation Principle, together with a compactness or a monotonicity condition, allows us to obtain the existence of solutions for problem (10).
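For intuition only, the following minimal numerical sketch (not part of the analysis) integrates a single-valued instance of (10) by an explicit Euler scheme, approximating the memory term with a left-rectangle quadrature; the choices of \(F\), \(g\) and the discretization below are illustrative assumptions rather than the method analyzed in this section.

```python
import numpy as np

def solve_integro_ode(F, g, x0, T=1.0, N=200):
    """Explicit Euler scheme for x'(t) = F(t, x(t)) + int_0^t g(t, s, x(s)) ds.

    F and g are single-valued here (a special case of the inclusion); the
    memory integral is approximated by a left-rectangle rule on the grid.
    """
    dt = T / N
    t = np.linspace(0.0, T, N + 1)
    x = np.empty(N + 1)
    x[0] = x0
    for k in range(N):
        memory = dt * sum(g(t[k], t[j], x[j]) for j in range(k))  # ~ int_0^{t_k}
        x[k + 1] = x[k] + dt * (F(t[k], x[k]) + memory)
    return t, x

# Illustrative data: F(t, x) = -x and g(t, s, x) = exp(-(t - s)) * x.
t, x = solve_integro_ode(lambda t, x: -x,
                         lambda t, s, x: np.exp(-(t - s)) * x, x0=1.0)
print(x[-1])
```

In the scalar case the projections \(P_{n}\) are trivial, so the sketch only illustrates how the memory term enters the velocity; the Galerkin-like step matters in the infinite-dimensional setting treated below.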
**Theorem 4.1** (Approximation Principle for integro-differential inclusions).: _Assume that \((\mathcal{H}^{F})\) and \((\mathcal{H}^{g})\) hold. Then, for each \(n\in\mathbb{N}\) the problem (11) admits at least one solution \(x_{n}\in\operatorname{AC}\left([0,T];\mathcal{H}\right)\). Moreover,_
\[\|x_{n}(t)\|\leq r(t)\quad\text{ for all }t\in[0,T], \tag{12}\]
_where for all \(t\in[0,T]\)_
\[r(t):=\|x_{0}\|e(t,0)+\int_{0}^{t}\left(d(s)+\int_{0}^{s}\sigma(s,\tau)d\tau \right)e(t,s)ds,\]
_where_
\[e(t_{2},t_{1}):=\exp\left(\int_{t_{1}}^{t_{2}}(c(s)+\int_{0}^{s}\sigma(s,\tau )d\tau)ds\right)\text{ for }t_{1},t_{2}\in[0,T].\]
_Moreover, for a.e. \(t\in[0,T]\)_
\[\|\dot{x}_{n}(t)\|\leq\psi(t):=c(t)r(t)+d(t)+\int_{0}^{t}\sigma(t,s)ds+\int_{0 }^{t}\sigma(t,s)r(s)ds. \tag{13}\]
Proof.: Let us consider \(G(t,x)\) as the mapping defined in (9). Then, due to Lemma 3.1, \(G\) satisfies \((\mathcal{H}_{1}^{F})\), \((\mathcal{H}_{2}^{F})\) and
\[\|G(t,x)\|:=\sup\{\|w\|\colon w\in G(t,x)\}\leq c(t)r(t)+d(t), \tag{14}\]
for all \(x\in\mathcal{H}\) and a.e. \(t\in[0,T]\).
Consider the following differential inclusion:
\[\left\{\begin{aligned} &\dot{x}(t)\in G(t,P_{n}(x(t)))+\int_{0}^{t}g(t,s,p_{r(s )}(P_{n}(x(s))))ds\quad\text{ a.e. }t\in[0,T];\\ & x(0)=P_{n}(x_{0}).\end{aligned}\right. \tag{15}\]
Let \(K\subset L^{1}\left([0,T];\mathcal{H}\right)\) be defined by
\[K:=\left\{f\in L^{1}\left([0,T];\mathcal{H}\right):\|f(t)\|\leq\psi(t)\text{ a.e. }t\in[0,T]\right\},\]
where \(\psi\) is defined by (13). This set is nonempty, closed, and convex. In addition, since \(\psi\) is integrable, the set \(K\) is bounded and uniformly integrable. Hence, it is compact in \(L^{1}_{w}\left([0,T];\mathcal{H}\right)\) (see [12, Theorem 2.3.24]). Since \(L^{1}\left([0,T];\mathcal{H}\right)\) is separable, we also note that \(K\), endowed with the relative \(L^{1}_{w}\left([0,T];\mathcal{H}\right)\) topology is a metric space (see [11, Theorem V.6.3]). Consider the map \(\mathcal{F}_{n}\colon K\rightrightarrows L^{1}\left([0,T];\mathcal{H}\right)\) defined for \(f\in K\) as those \(v\in L^{1}\left([0,T];\mathcal{H}\right)\) such that for a.e. \(t\in[0,T]\)
\[v(t)\in G(t,P_{n}(x_{0}+\int_{0}^{t}f(s)ds))+\int_{0}^{t}g(t,s,p_{r(s)}(P_{n}(x_{0}+\int_{0}^{s}f(\tau)d\tau)))ds.\]
By \((\mathcal{H}^{F}_{1})\), \((\mathcal{H}^{F}_{2})\), (14) and [1, Lemma 6], we conclude that \(\mathcal{F}_{n}(f)\) has nonempty, closed and convex values. Moreover, \(\mathcal{F}_{n}(K)\subset K\). Indeed, let \(f\in K\) and \(v\in\mathcal{F}_{n}(f)\). Then,
\[\|v(t)\| \leq\sup\left\{\|w\|\colon w\in G(t,P_{n}(x_{0}+\int_{0}^{t}f(s) ds))\right.\] \[\qquad\qquad\qquad\left.+\int_{0}^{t}g(t,s,p_{r(s)}(P_{n}(x_{0}+ \int_{0}^{s}f(\tau)d\tau)))ds\right\}\] \[\leq c(t)r(t)+d(t)+\int_{0}^{t}\sigma(t,s)ds+\int_{0}^{t}\sigma(t,s)r(s)ds=\psi(t).\]
We denote \(K_{w}\) the set \(K\) seen as a compact convex subset of \(L^{1}_{w}\left([0,T];\mathcal{H}\right)\).
_Claim:_\(\mathcal{F}_{n}\) is upper semicontinuous from \(K_{w}\) into \(K_{w}\).
_Proof of Claim:_ By virtue of [14, Proposition 1.2.23], it is sufficient to prove that \(\operatorname{graph}(\mathcal{F}_{n})\) is sequentially closed in \(K_{w}\times K_{w}\). Indeed, let \((f_{m},v_{m})\in\operatorname{graph}(\mathcal{F}_{n})\) with \(f_{m}\to f\) and \(v_{m}\to v\) in \(L^{1}_{w}\left([0,T];\mathcal{H}\right)\) as \(m\to+\infty\). We have to show that \((f,v)\in\operatorname{graph}(\mathcal{F}_{n})\). To do that, let us define
\[u_{m}(t):=P_{n}(x_{0})+\int_{0}^{t}f_{m}(s)ds\quad\text{ for every }t\in[0,T].\]
Thus, for a.e. \(t\in[0,T]\),
\[v_{m}(t)\in G(t,P_{n}(u_{m}(t)))+\int_{0}^{t}g(t,s,p_{r(s)}(P_{n}(u_{m}(s))))ds. \tag{16}\]
Also, since \(f_{m}\in K\), we have that
\[\|\dot{u}_{m}(t)\|\leq\psi(t)\quad\text{ a.e. }t\in[0,T].\]
Hence, due to Lemma 2.2, there exists a subsequence of \((u_{m})_{m}\) (without relabeling) and an absolutely continuous function \(u\colon[0,T]\to\mathcal{H}\) such that
\[u_{m}(t) \to u(t)\text{ weakly \ for all }t\in[0,T];\] \[\dot{u}_{m} \to\dot{u}\text{ in }L^{1}_{w}\left([0,T];\mathcal{H}\right),\]
which implies that \(\dot{u}=f\). Moreover, since \((u_{m}(t))_{m}\) is bounded for every \(t\in[0,T]\), \(P_{n}(u_{m}(t))\to P_{n}(u(t))\) for every \(t\in[0,T]\). Consequently, by virtue of [12, Proposition 2.3.1], (16) and the upper semicontinuity of \(G\) from \(\mathcal{H}\) into \(\mathcal{H}_{w}\), for a.e. \(t\in[0,T]\)
\[\begin{split} v(t)&\in\overline{\mathrm{conv}}\,w \text{-}\limsup_{m\to+\infty}\{v_{m}(t)\}\\ &\subset\overline{\mathrm{conv}}\,\left[G(t,P_{n}(u(t)))+\int_{ 0}^{t}g(t,s,p_{r(s)}(P_{n}(u(s))))ds\right]\\ &=G(t,P_{n}(u(t)))+\int_{0}^{t}g(t,s,p_{r(s)}(P_{n}(u(s))))ds, \end{split} \tag{17}\]
which shows that \((f,v)\in\mathrm{graph}(\mathcal{F}_{n})\), as claimed.
Now, we can apply the Kakutani-Fan-Glicksberg fixed point theorem (see [2, Corollary 17.55]) to the set-valued map \(\mathcal{F}_{n}\colon K_{w}\rightrightarrows K_{w}\) to deduce the existence of \(\widehat{f}_{n}\in K\) such that \(\widehat{f}_{n}\in\mathcal{F}_{n}(\widehat{f}_{n})\). Then, the function \(x_{n}\in\mathrm{AC}\left([0,T];\mathcal{H}\right)\) defined for every \(t\in[0,T]\) as:
\[x_{n}(t)=P_{n}(x_{0})+\int_{0}^{t}\widehat{f}_{n}(s)ds,\]
is a solution for (15). Moreover, \(x_{n}\in\mathrm{AC}\left([0,T];\mathcal{H}\right)\) is a solution of (11). Indeed, for a.e. \(t\in[0,T]\),
\[\begin{split}\|\dot{x}_{n}(t)\|&\leq c(t)\|p_{r(t)} (P_{n}(x_{n}(t)))\|+d(t)+\int_{0}^{t}\sigma(t,s)ds\\ &\quad+\int_{0}^{t}\sigma(t,s)\|p_{r(s)}(P_{n}(x_{n}(s)))\|ds\\ &\leq c(t)\|x_{n}(t)\|+d(t)+\int_{0}^{t}\sigma(t,s)ds+\int_{0}^{ t}\sigma(t,s)\|x_{n}(s)\|ds.\end{split}\]
Therefore, by using the inequality \(\|P_{n}(x_{0})\|\leq\|x_{0}\|\), we get that for all \(t\in[0,T]\)
\[\begin{split}\|x_{n}(t)\|&\leq\ \|x_{0}\|+\int_{0}^{t}d(s)ds+\int_{0}^{t}\int_{0}^{s} \sigma(s,\tau)d\tau ds+\int_{0}^{t}c(s)\|x_{n}(s)\|ds\\ &\quad+\int_{0}^{t}\int_{0}^{s}\sigma(s,\tau)\|x_{n}(\tau)\|d\tau ds.\end{split}\]
Therefore, inequality (12) follows from Lemma 2.1. Finally,
\[\|P_{n}(x_{n}(t))\|\leq r(t)\text{ for all }t\in[0,T].\]
Hence, \(p_{r(t)}(P_{n}(x_{n}(t)))=P_{n}(x_{n}(t))\) for all \(t\in[0,T]\), which finishes the proof.
### Integro-differential inclusions under compactness assumptions
In this subsection, we prove a Compactness Principle for integro-differential inclusions: whenever the sequence \((P_{n}(x_{n}(t)))_{n}\) obtained in Theorem 4.1 is relatively compact for all \(t\in[0,T]\), the problem (10) admits at least one absolutely continuous solution. This principle is used to show the existence of solutions for problem (10) under a compactness assumption (see Theorem 4.3). Subsequently, in Section 5, the Compactness Principle is used to prove the existence of solutions for state-dependent integro-differential sweeping processes.
**Theorem 4.2** (Compactness Principle for integro-differential inclusions).: _Let assumptions \((\mathcal{H}^{F})\) and \((\mathcal{H}^{g})\) hold. Assume that the sequence \((P_{n}(x_{n}(t)))_{n}\) obtained in Theorem 4.1 is relatively compact for all \(t\in[0,T]\). Then, there exists a subsequence \((x_{n_{k}})_{k}\) of \((x_{n})\) converging strongly pointwise to a solution \(x\in AC([0,T];\mathcal{H})\) of (10). Moreover,_
\[\|x(t)\|\leq r(t)\text{ for all }t\in[0,T]\text{ and }\|\dot{x}(t)\|\leq\psi(t) \text{ for a.e. }t\in[0,T],\]
_where \(r\) and \(\psi\) are the functions defined in Theorem 4.1._
Proof.: We will show the existence of the subsequence via Lemma 2.2.
_Claim 1:_ There exists a subsequence \((x_{n_{k}})_{k}\) of \((x_{n})_{n}\) and an absolutely continuous function \(x\) such that (i), (ii), (iii) and (iv) from Lemma 2.2 hold with \(\psi\) defined as in the statement of the theorem.
_Proof of Claim 1:_ Due to Theorem 4.1, \(\|\dot{x}_{n}(t)\|\leq\psi(t)\) for a.e. \(t\in[0,T]\), which shows that (2) holds with the function \(\psi\) defined as above. Also, \(P_{n}(x_{0})\to x_{0}\) as \(n\to+\infty\). Therefore, the claim follows from Lemma 2.2.
For simplicity, we denote \(P_{k}:=P_{n_{k}}\) and \(x_{k}:=x_{n_{k}}\) for \(k\in\mathbb{N}\).
_Claim 2:_\(P_{k}(x_{k}(t))\rightharpoonup x(t)\) as \(k\to+\infty\) for all \(t\in[0,T]\).
_Proof of Claim 2:_ Since \(x_{k}(t)\rightharpoonup x(t)\) as \(k\to+\infty\) for all \(t\in[0,T]\), the result follows from (iv) of Lemma 2.3.
_Claim 3:_\(P_{k}(x_{k}(t))\to x(t)\) as \(k\to+\infty\) for all \(t\in[0,T]\).
_Proof of Claim 3:_ The result follows from Claim 2 and the relative compactness of the sequence \((P_{n}(x_{n}(t)))_{n}\) for all \(t\in[0,T]\).
Summarizing, we have
1. For each \(x\in\mathcal{H}\), \(F(\cdot,x)\) is measurable.
2. For a.e. \(t\in[0,T]\), \(F(t,\cdot)\) is upper semicontinuous from \(\mathcal{H}\) into \(\mathcal{H}_{w}\).
3. \(\dot{x}_{k}\rightharpoonup\dot{x}\) in \(L^{1}\left([0,T];\mathcal{H}\right)\);
4. \(P_{k}(x_{k}(t))\to x(t)\) as \(k\to+\infty\) for a.e. \(t\in[0,T]\);
5. For all \(k\in\mathbb{N}\) and a.e. \(t\in[0,T]\) \[\dot{x}_{k}(t)\in F(t,P_{k}(x_{k}(t)))+\int_{0}^{t}g(t,s,P_{k}(x_{k}(s)))ds.\]
These conditions and the Convergence Theorem (see, e.g., [1, Proposition 5] for more details) imply that \(x\in\operatorname{AC}\left([0,T];\mathcal{H}\right)\) is a solution of (10), which finishes the proof.
The following result establishes the existence of solutions for (10) under a compactness condition on the set-valued map \(F\). Hence, whenever \(g\equiv 0\), we recover classical results on differential inclusions under compactness assumptions (see, e.g., [10, Chapter 9]). Furthermore, when the space \(\mathcal{H}\) is finite dimensional, this condition is trivially satisfied.
**Theorem 4.3**.: _Assume, in addition to the hypotheses of Theorem 4.1, that \((\mathcal{H}_{4}^{F})\) holds. Then, the problem (10) admits an absolutely continuous solution \(x\). Moreover,_
\[\|x(t)\|\leq r(t)\text{ for all }t\in[0,T]\text{ and }\|\dot{x}(t)\|\leq\psi(t) \text{ for a.e. }t\in[0,T],\]
_where \(r\) and \(\psi\) are the functions defined in Theorem 4.1._
Proof.: We show that the sequence \((P_{n}(x_{n}(t)))_{n}\) is relatively compact for all \(t\in[0,T]\). Hence, the result will be a consequence of Theorem 4.2.
Indeed, fix \(t\in[0,T]\) and let us consider the sets
\[A(t)=\{P_{n}(x_{n}(t))\colon n\in\mathbb{N}\}.\]
We proceed to prove that \(\gamma(A(t))=0\) for all \(t\in[0,T]\), where \(\gamma=\alpha\) or \(\gamma=\beta\) is either the Kuratowski or the Hausdorff measure of non-compactness.
Fix \(t\in[0,T]\). By virtue of Lemma 2.7, there exists \(C>0\) such that
\[\gamma(A(t))\leq C\gamma(\{x_{n}(t)\colon n\in\mathbb{N}\}).\]
Moreover, by using Proposition 2.4 and Lemma 2.5, we obtain that
\[\gamma(A(t)) \leq C\gamma(\{x_{n}(t)\colon n\in\mathbb{N}\})\] \[\leq C\gamma(\{P_{n}(x_{0})\colon n\in\mathbb{N}\})+C\gamma(\{ \int_{0}^{t}\dot{x}_{n}(s)ds\colon n\in\mathbb{N}\})\] \[=C\gamma(\{\int_{0}^{t}\dot{x}_{n}(s)ds\colon n\in\mathbb{N}\})\] \[\leq C\int_{0}^{t}\gamma(\{\dot{x}_{n}(s)\colon n\in\mathbb{N}\})ds\] \[\leq C\int_{0}^{t}\gamma(F(s,A(s)))ds+C\int_{0}^{t}\gamma(\int_{0 }^{s}g(s,\tau,A(\tau))d\tau)ds\] \[\leq C\int_{0}^{t}\gamma(F(s,A(s)))ds+C\int_{0}^{t}\int_{0}^{s} \gamma(g(s,\tau,A(\tau)))d\tau ds\] \[\leq C\int_{0}^{t}k(s)\gamma(A(s))ds+C\int_{0}^{t}\int_{0}^{s} \mu_{r(s)}(\tau)\gamma(A(\tau))d\tau ds\] \[\leq C\int_{0}^{t}[k(s)+\int_{0}^{s}\mu_{r(T)}(\tau)d\tau]\gamma( A(s))ds,\]
where we have used that the set \(\{P_{n}(x_{0})\colon n\in\mathbb{N}\}\) is relatively compact and \((\mathcal{H}_{4}^{F})\) holds. Therefore, for all \(t\in[0,T]\)
\[\gamma(A(t))\leq C\int_{0}^{t}[k(s)+\int_{0}^{s}\mu_{r(T)}(\tau)d\tau]\gamma( A(s))ds,\]
which, due to Gronwall's Lemma, implies that \(\gamma(A(t))=0\) for all \(t\in[0,T]\).
Therefore, the sequence \((P_{n}(x_{n}(t)))_{n}\) is relatively compact for all \(t\in[0,T]\). The proof is finished.
### Integro-differential inclusions under monotonicity assumptions
The following result establishes the existence of solutions for (10) under a monotonicity condition on the set-valued map \(F\). Hence, whenever \(g\equiv 0\), we recover classical results on differential inclusions under monotonicity conditions (see, e.g., [10, Chapter 9]). Furthermore, this result does not require any compactness assumption on the problem data.
**Theorem 4.4**.: _Assume, in addition to the hypotheses of Theorem 4.1, that \((\mathcal{H}_{5}^{F})\) holds. Then, the problem (10) has a unique absolutely continuous solution \(x\). Moreover,_
\[\|x(t)\|\leq r(t)\text{ for all }t\in[0,T]\text{ and }\|\dot{x}(t)\|\leq\psi(t) \text{ for a.e. }t\in[0,T],\]
_where \(r\) and \(\psi\) are the functions defined in Theorem 4.1._
Proof.: We will prove that \((P_{n}x_{n}(t))_{n}\) is a Cauchy sequence for all \(t\in[0,T]\). Indeed, fix \(n,m\in\mathbb{N}\) with \(n\geq m+1\). Let us consider the absolutely continuous function:
\[\Theta(t):=\frac{1}{2}\|P_{n}x_{n}(t)-P_{m}x_{m}(t)\|^{2}.\]
Then, for a.e. \(t\in[0,T]\),
\[\dot{\Theta}(t)=\langle P_{n}x_{n}(t)-P_{m}x_{m}(t),\dot{x}_{n}(t)-\dot{x}_{m} (t)\rangle+\sum_{k=m+1}^{n}\langle x_{n}(t),e_{k}\rangle\langle\dot{x}_{m}(t), e_{k}\rangle.\]
Set \(\delta_{n,m}(t):=\sum_{k=m+1}^{n}\langle x_{n}(t),e_{k}\rangle\langle\dot{x}_{m }(t),e_{k}\rangle\). Then, for a.e. \(t\in[0,T]\)
\[|\delta_{n,m}(t)| \leq\left(\sum_{k=m+1}^{n}\langle x_{n}(t),e_{k}\rangle^{2} \right)^{1/2}\cdot\left(\sum_{k=m+1}^{n}\langle\dot{x}_{m}(t),e_{k}\rangle^{2 }\right)^{1/2}\] \[\leq\|x_{n}(t)\|\cdot\|\dot{x}_{m}(t)\|\leq r(t)\cdot\psi(t),\]
where \(r\) and \(\psi\) are the functions defined in Theorem 4.1. Therefore, \(\delta_{n,m}\) is uniformly bounded in \(L^{1}([0,T])\) and \(\delta_{n,m}(t)\) converges to \(0\), as \(n,m\to+\infty\), for all \(t\in[0,T]\).
Besides, by using \((\mathcal{H}_{5}^{F})\), we obtain that for a.e. \(t\in[0,T]\)
\[\dot{\Theta}(t)\leq\tilde{k}(t)\|P_{n}x_{n}(t)-P_{m}x_{m}(t)\|^{2}\] \[+\langle P_{n}x_{n}(t)-P_{m}x_{m}(t),\int_{0}^{t}[g(t,s,P_{n}x_{n }(s))-g(t,s,P_{m}x_{m}(s))]ds\rangle+|\delta_{n,m}(t)|\] \[\leq\tilde{k}(t)\|P_{n}x_{n}(t)-P_{m}x_{m}(t)\|^{2}\] \[+\|P_{n}x_{n}(t)-P_{m}x_{m}(t)\|\int_{0}^{t}\mu_{r(T)}(s)\|P_{n}x _{n}(s)-P_{m}x_{m}(s)\|ds+|\delta_{n,m}(t)|.\]
Hence, for all \(t\in[0,T]\)
\[\dot{\Theta}(t)\leq 2\tilde{k}(t)\Theta(t)+\sqrt{2\Theta(t)}\int_{0}^{t}\mu_{r (T)}(s)\sqrt{2\Theta(s)}ds+|\delta_{n,m}(t)|.\]
Therefore, by arguments similar to those given in the proof of Lemma 2.1, we get that
\[\Theta(t)\leq\frac{1}{2}\|P_{n}x_{0}-P_{m}x_{0}\|^{2}\pi(t,0)+\int_{0}^{t}| \delta_{n,m}(s)|\pi(t,s)ds\text{ for all }t\in[0,T], \tag{18}\]
where
\[\pi(t_{2},t_{1}):=\exp\left(\int_{t_{1}}^{t_{2}}\left(2\tilde{k}(s)+2\int_{0} ^{s}\mu_{r(T)}(\tau)d\tau\right)ds\right)\text{ for }t_{1},t_{2}\in[0,T].\]
Then, by taking the limit in (18), we obtain that \((P_{n}x_{n}(t))_{n}\) is a Cauchy sequence for all \(t\in[0,T]\). Hence, for some \(x\colon[0,T]\to\mathcal{H}\), the following convergence holds
\[P_{n}x_{n}(t)\to x(t)\text{ for all }t\in[0,T].\]
To prove that \(x\) is indeed a solution of (10), we can proceed similarly to the proof of Theorem 4.3. Finally, to prove the uniqueness, let \(x_{1}\) and \(x_{2}\) be two solutions of (10) and define \(\vartheta(t):=\frac{1}{2}\|x_{1}(t)-x_{2}(t)\|^{2}\). Hence, for a.e. \(t\in[0,T]\),
\[\dot{\vartheta}(t) \leq\tilde{k}(t)\|x_{1}(t)-x_{2}(t)\|^{2}+\langle x_{1}(t)-x_{2} (t),\int_{0}^{t}[g(t,s,x_{1}(s))-g(t,s,x_{2}(s))]ds\rangle\] \[\leq\tilde{k}(t)\|x_{1}(t)-x_{2}(t)\|^{2}+\|x_{1}(t)-x_{2}(t)\| \int_{0}^{t}\mu_{r(T)}(s)\|x_{1}(s)-x_{2}(s)\|ds,\]
which, by arguments similar to those given in the proof of Lemma 2.1, implies that \(\vartheta(t)=0\) for all \(t\in[0,T]\).
## 5 State-dependent sweeping processes
In this section, by using the Approximation Principle (Theorem 4.1) and the Compactness Principle (Theorem 4.2), we obtain the existence of solutions for the state-dependent sweeping process with integral perturbation:
\[\begin{cases}\dot{x}(t)\in-N(C(t,x(t));x(t))+F(t,x(t))+\int_{0}^{t}g(t,s,x(s))ds\text{ a.e. }t\in[0,T],\\ x(0)=x_{0},\end{cases} \tag{19}\]
where \(x_{0}\in C(0,x_{0})\) and \(C\colon[0,T]\times\mathcal{H}\rightrightarrows\mathcal{H}\) is a set-valued map with nonempty and closed values satisfying the following assumptions:
* \((\mathcal{H}_{1})\) There exist \(\zeta\in\mathrm{AC}\left([0,T];\mathbb{R}_{+}\right)\) and \(L\in[0,1[\) such that for all \(s,t\in[0,T]\) and all \(x,y,z\in\mathcal{H}\) \[\left|d(z,C(t,x))-d(z,C(s,y))\right|\leq\left|\zeta(t)-\zeta(s)\right|+L\|x-y\|.\]
* \((\mathcal{H}_{2})\) The family \(\{C(t,x)\colon(t,x)\in[0,T]\times\mathcal{H}\}\) is equi-uniformly subsmooth.
* \((\mathcal{H}_{3})\) For every \(t\in[0,T]\), every \(r>0\) and every bounded set \(A\subset\mathcal{H}\), the set \(C(t,A)\cap r\mathbb{B}\) is relatively compact.
* \((\mathcal{H}_{4})\) There exist \(\alpha_{0}\in]0,1]\) and \(\rho\in]0,+\infty]\) such that for every \(x\in\mathcal{H}\) \[0<\alpha_{0}\leq\inf_{z\in U_{\rho}(C(t,x))}d\left(0,\partial d(\cdot,C(t,x))(z)\right)\quad\text{a.e. }t\in[0,T],\] where \(U_{\rho}\left(C(t,x)\right)=\{z\in\mathcal{H}\colon 0<d(z,C(t,x))<\rho\}\).
**Remark 5.1**.: According to [15, Proposition 3.9], hypothesis \((\mathcal{H}_{2})\) implies that for every \(\alpha_{0}\in]\sqrt{L},1]\) there exists \(\rho>0\) such that \((\mathcal{H}_{4})\) holds.
The next theorem is the main result of this section. When \(g\equiv 0\), it extends previous results on state-dependent sweeping processes (see, e.g., [15, 17, 16]). Moreover, it complements the results from [6], where the authors prove the existence of solutions for integrally perturbed sweeping processes with uniformly prox-regular sets. The next result is the first existence result for integrally perturbed state-dependent sweeping processes. We emphasize that the existence is obtained within the framework of equi-uniformly subsmooth sets, a class which strictly contains the uniformly prox-regular sets.
**Theorem 5.2**.: _Assume, in addition to \((\mathcal{H}^{F})\), \((\mathcal{H}_{4}^{F})\) and \((\mathcal{H}^{g})\), that \((\mathcal{H}_{1})\), \((\mathcal{H}_{2})\) and \((\mathcal{H}_{3})\) hold. Then, the problem (19) admits at least one solution \(x\in\mathrm{AC}([0,T];\mathcal{H})\)._
Proof.: Fix \(\alpha_{0}\in]\sqrt{L},1]\) and let \(\rho>0\) be such that \((\mathcal{H}_{4})\) holds (see Remark 5.1).
Let \(m\in L^{1}([0,T];\mathbb{R}_{+})\) be the unique solution for the following first-order integral equation:
\[m(t):=\frac{|\dot{\zeta}(t)|}{\alpha_{0}^{2}-L}+\frac{(1+L)}{\alpha_{0}^{2}-L }(c(t)r(t)+d(t)+\int_{0}^{t}\sigma(t,s)ds+\int_{0}^{t}\sigma(t,s)r(s)ds), \tag{20}\]
where \(r\) is the function defined by:
\[r(t):=\|x_{0}\|\varepsilon(t,0)+\int_{0}^{t}\left(d(s)+m(s)+\int_{0}^{s} \sigma(s,\tau)d\tau\right)\varepsilon(t,s)ds, \tag{21}\]
with \(\varepsilon(t_{2},t_{1})=\exp\left(\int_{t_{1}}^{t_{2}}\left(c(s)+\int_{0}^{s} \sigma(s,\tau)d\tau\right)ds\right)\).
Let us define
\[\psi(t):=c(t)r(t)+d(t)+m(t)+\int_{0}^{t}\sigma(t,s)ds+\int_{0}^{t}\sigma(t,s)r( s)ds.\]
We prove the theorem under the additional assumption:
\[\int_{0}^{T}\left(|\dot{\zeta}(s)|+(1+L)\psi(s)\right)ds<\rho, \tag{22}\]
where \(\rho>0\) is defined at the beginning of the proof. The general case, without the above assumption on the length of \(T\), can be obtained in a similar way as [16, Step 2].
Let \(G\colon[0,T]\times\mathcal{H}\rightrightarrows\mathcal{H}\) be the set-valued map defined as
\[G(t,x):=-m(t)\partial d_{C(t,x)}(x)+F(t,x)\text{ for }(t,x)\in[0,T]\times \mathcal{H},\]
where \(m\) is defined in (20). We will show, by using the results from Section 4, that the following differential inclusion has at least one solution:
\[\begin{cases}\dot{x}(t)\in G(t,x(t))+\int_{0}^{t}g(t,s,x(s))ds\text{ a.e. }t\in[0,T],\\ x(0)=x_{0}.\end{cases}\]
Moreover, any solution of the above problem solves (19).
The proof is divided into several claims.
_Claim 1_: \(G\) satisfies the assumptions of Theorem 4.1, that is,
* For each \(x\in\mathcal{H}\), \(G(\cdot,x)\) is measurable.
* For a.e. \(t\in[0,T]\), \(G(t,\cdot)\) is upper semicontinuous from \(\mathcal{H}\) into \(\mathcal{H}_{w}\).
* For all \(x\in\mathcal{H}\) and a.e. \(t\in[0,T]\) \[d(0,G(t,x))\leq c(t)\|x\|+d(t)+m(t),\]
where \(c,d\) and \(m\) are defined by \((\mathcal{H}_{3}^{F})\) and (20), respectively.
_Proof of Claim 1:_ It was proved in [16, Claim 6.2]. \(\Box\)
For each \(n\in\mathbb{N}\), let us consider the following integro-differential inclusion:
\[\begin{split}\dot{x}(t)&\in G(t,P_{n}(x(t)))+\int_{0 }^{t}g(t,s,P_{n}(x(s)))ds\text{ a.e. }t\in[0,T],\\ x(0)&=P_{n}(x_{0}),\end{split} \tag{23}\]
where \((P_{n})_{n}\) is the sequence of projections defined in (3), associated with an orthonormal basis of \(\mathcal{H}\). By virtue of Theorem 4.1, the above differential inclusion has at least one solution \(x_{n}\in\operatorname{AC}([0,T];\mathcal{H})\). Moreover,
\[\|x_{n}(t)\|\leq r(t)\text{ for all }t\in[0,T],\]
and for a.e. \(t\in[0,T]\)
\[\|\dot{x}_{n}(t)\|\leq\psi(t)=c(t)r(t)+d(t)+m(t)+\int_{0}^{t}\sigma(t,s)ds+\int_{0}^{t}\sigma(t,s)r(s)ds,\]
where \(r\) is defined in (21). To simplify the notation, we write
\[\Gamma_{n}(t):=\partial d_{C(t,P_{n}(x_{n}(t)))}(P_{n}(x_{n}(t))).\]
Without loss of generality (see inequality (22)), we assume that \(n\in\mathbb{N}\) is large enough so that
\[\int_{0}^{T}\left(|\dot{\zeta}(s)|+(1+L)\psi(s)\right)ds+(1+L)\|x_{0}-P_{n}(x_{0})\|<\rho. \tag{24}\]
We observe that there exist \(f_{n}(t)\in F(t,P_{n}(x_{n}(t)))\) and \(d_{n}(t)\in\Gamma_{n}(t)\) such that
\[\dot{x}_{n}(t)=-m(t)d_{n}(t)+f_{n}(t)+\int_{0}^{t}g(t,s,P_{n}(x_{n}(s)))ds\text{ a.e. }t\in[0,T].\]
Define \(\varphi_{n}(t):=d_{C(t,P_{n}(x_{n}(t)))}(P_{n}(x_{n}(t)))\) for \(t\in[0,T]\).
_Claim 2:_ The distance function satisfies the following inequality:
\[\varphi_{n}(t)\leq(1+L)\|x_{0}-P_{n}(x_{0})\|\text{ for all }t\in[0,T].\]
_Proof of Claim 2:_ The idea is to estimate the derivative of the distance function \(\varphi_{n}(t)\). To do that, we proceed to show first that \(\varphi_{n}(t)<\rho\) for all \(t\in[0,T]\). Indeed, let \(t\in[0,T]\) where \(\dot{x}_{n}(t)\) exists. Then, due to [16, Lemma 4.4] and (20),
\[\dot{\varphi}_{n}(t) \leq|\dot{\zeta}(t)|+L\|P_{n}(\dot{x}_{n}(t))\|+\max_{y^{*}\in \Gamma_{n}(t)}\langle y^{*},P_{n}(\dot{x}_{n}(t))\rangle\] \[\leq|\dot{\zeta}(t)|+(1+L)\|\dot{x}_{n}(t)\|\] \[\leq|\dot{\zeta}(t)|+(1+L)\psi(t),\]
which, by (24), implies that \(\varphi_{n}(t)<\rho\) for all \(t\in[0,T]\).
Let \(t\in\Omega_{n}:=\{t\in[0,T]\colon P_{n}(x_{n}(t))\notin C(t,P_{n}(x_{n}(t)))\}\), where \(\dot{x}_{n}(t)\) exists. Then, due to [16, Lemma 4.4],
\[\dot{\varphi}_{n}(t) \leq|\dot{\zeta}(t)|+L\|P_{n}(\dot{x}_{n}(t))\|+\min_{y^{*}\in \Gamma_{n}(t)}\langle y^{*},P_{n}(\dot{x}_{n}(t))\rangle\] \[\leq|\dot{\zeta}(t)|+L\psi(t)+\langle d_{n}(t),P_{n}(\dot{x}_{n}( t))\rangle\] \[=|\dot{\zeta}(t)|+L\psi(t)+m(t)\langle d_{n}(t),-P_{n}(d_{n}(t))\rangle\] \[+\langle d_{n}(t),P_{n}(f_{n}(t)+\int_{0}^{t}g(t,s,P_{n}(x_{n}(s) ))ds)\rangle\] \[\leq|\dot{\zeta}(t)|+L\psi(t)+\|f_{n}(t)\|+\int_{0}^{t}\|g(t,s,P_ {n}(x_{n}(s)))\|ds\] \[+m(t)\langle d_{n}(t),-P_{n}(d_{n}(t))\rangle\] \[\leq|\dot{\zeta}(t)|+L\psi(t)+c(t)r(t)+d(t)+\int_{0}^{t}\sigma(t, s)ds+\int_{0}^{t}\sigma(t,s)r(s)ds\] \[+m(t)\langle d_{n}(t),-P_{n}(d_{n}(t))\rangle.\]
where we have used \((\mathcal{H}^{F})\), \((\mathcal{H}^{g})\) and the fact that \(d_{n}(t)\in\Gamma_{n}(t)\) for a.e. \(t\in[0,T]\). Moreover, due to \((\mathcal{H}_{4})\),
\[\langle d_{n}(t),-P_{n}(d_{n}(t))\rangle =\langle d_{n}(t),d_{n}(t)-P_{n}(d_{n}(t))\rangle+\langle d_{n}( t),-d_{n}(t)\rangle\] \[\leq\langle d_{n}(t),d_{n}(t)-P_{n}(d_{n}(t))\rangle-\alpha_{0}^{2}\] \[=-\alpha_{0}^{2}.\]
Thus, by using the above inequalities and the definition of \(m\), we obtain that
\[\dot{\varphi}_{n}(t)\leq m(t)(\alpha_{0}^{2}+\langle d_{n}(t),-P_{n}(d_{n}(t) )\rangle)\leq 0,\]
which, by virtue of \((\mathcal{H}_{1})\), implies that
\[\varphi_{n}(t) \leq\varphi_{n}(0)=d_{C(0,P_{n}(x_{0}))}(P_{n}(x_{0}))\] \[=d_{C(0,P_{n}(x_{0}))}(P_{n}(x_{0}))-d_{C(0,x_{0})}(x_{0})\] \[\leq(1+L)\|x_{0}-P_{n}(x_{0})\|,\]
where we used that \(x_{0}\in C(0,x_{0})\). \(\Box\)
_Claim 3:_ For all \(t\in[0,T]\), \(\lim_{n\to+\infty}\varphi_{n}(t)=0\).
_Proof of Claim 3:_ It follows directly from Claim 2 and Lemma 2.3. \(\Box\)
_Claim 4:_ The sequence \((P_{n}(x_{n}(t)))_{n}\) is relatively compact for all \(t\in[0,T]\).
_Proof of Claim 4:_ Let \(\gamma=\alpha\) or \(\gamma=\beta\) be either the Kuratowski or the Hausdorff measure of noncompactness. Fix \(t\in[0,T]\) and let
\[s_{n}(t)\in\mathrm{Proj}_{C(t,P_{n}(x_{n}(t)))}(P_{n}(x_{n}(t))).\]
Then, \(s_{n}(t)\in(\rho+r(t))\mathbb{B}\) and, due to Claim 3 and \((\mathcal{H}_{3})\),
\[\gamma(\{P_{n}(x_{n}(t))\colon n\in\mathbb{N}\})=\gamma(\{s_{n}(t)\colon n\in \mathbb{N}\})\leq\gamma\left(C(t,r(t)\mathbb{B})\cap(\rho+r(t))\mathbb{B} \right)=0,\]
which proves the claim. \(\Box\)
Hence, we have verified all the hypotheses of Theorem 4.2, so there exists at least one solution \(x\in\mathrm{AC}([0,T];\mathcal{H})\) of the integro-differential inclusion governed by \(G\).
_Claim 5:_ For all \(t\in[0,T]\), \(x(t)\in C(t,x(t))\).
_Proof of Claim 5:_ Fix \(t\in[0,T]\). Then, as in the proof of Theorem 4.2, \(P_{k}(x_{k}(t))\to x(t)\) for some subsequence of \((x_{n})_{n}\). Thus, due to Claim 3,
\[d_{C(t,x(t))}(x(t)) =\limsup_{k\to+\infty}\left(d_{C(t,x(t))}(x(t))-\varphi_{k}(t)+ \varphi_{k}(t)\right)\] \[\leq\limsup_{k\to+\infty}((1+L)\|x(t)-P_{k}(x_{k}(t))\|+\varphi_{ k}(t))=0,\]
as claimed. \(\Box\)
Finally, by virtue of (6) and Claim 5, \(x\) is also a solution of (19). \(\Box\)
We end this section with an existence result for the sweeping process with integral perturbation.
\[\begin{cases}\dot{x}(t)\in-N(C(t);x(t))+F(t,x(t))+\int_{0}^{t}g(t,s,x(s))ds\text{ a.e. }t\in[0,T],\\ x(0)=x_{0},\end{cases} \tag{25}\]
where \(C\colon[0,T]\rightrightarrows\mathcal{H}\) is set-valued with nonempty and closed values satisfying the following assumptions:
* \((\mathcal{H}_{5})\) There exists \(\zeta\in\mathrm{AC}\left([0,T];\mathbb{R}_{+}\right)\) such that for all \(s,t\in[0,T]\) \[\sup_{x\in\mathcal{H}}|d(x,C(t))-d(x,C(s))|\leq|\zeta(t)-\zeta(s)|.\]
* \((\mathcal{H}_{6})\) There exist two constants \(\alpha_{0}\in]0,1]\) and \(\rho\in]0,+\infty]\) such that \[0<\alpha_{0}\leq\inf_{x\in U_{\rho}(C(t))}d\left(0,\partial d(x,C(t))\right)\quad\text{ a.e. }t\in[0,T],\] where \(U_{\rho}\left(C(t)\right)=\{x\in\mathcal{H}\colon 0<d(x,C(t))<\rho\}\) for all \(t\in[0,T]\).
* \((\mathcal{H}_{7})\) For all \(t\in[0,T]\) the set \(C(t)\) is ball-compact, that is, for every \(r>0\) the set \(C(t)\cap r\mathbb{B}\) is compact in \(\mathcal{H}\).
The following result can be proved similarly to Theorem 5.2. When \(\mathcal{H}\) is a finite dimensional space, it extends the result from [6] to positively \(\alpha\)-far sets, a class which strictly includes prox-regular sets.
**Theorem 5.3**.: _Assume, in addition to \((\mathcal{H}^{F})\), \((\mathcal{H}^{g})\) and \((\mathcal{H}^{F}_{4})\), that \((\mathcal{H}_{5})\), \((\mathcal{H}_{6})\) and \((\mathcal{H}_{7})\) hold. Then, the differential inclusion (25) admits at least one solution \(x\in\mathrm{AC}([0,T];\mathcal{H})\)._
## 6 Optimal control problem
In this section, we study the existence of solutions for an optimal control problem governed by an integro-differential inclusion in finite-dimensional spaces. We prove the existence of solutions for the controlled problem without convexity assumptions on the dynamics.
Given a running cost function \(\varphi\colon[0,T]\times\mathbb{R}^{2s}\times\mathbb{R}^{m}\to\overline{ \mathbb{R}}\) and a terminal cost function \(\ell\colon\mathbb{R}^{2s}\to\overline{\mathbb{R}}\), we consider the following optimal control problem:
\[\begin{split}\min&\quad\ell(x(0),x(T))+\int_{0}^{T}\varphi(s,x(s),\dot{x}(s),u(s))ds,\\ \text{s.t.}&\quad\dot{x}(t)\in F(t,x(t))+\int_{0}^{t}g(t,s,x(s),u(s))ds\text{ a.e. }t\in[0,T],\\ &\quad u(t)\in U\text{ a.e. }t\in[0,T],\quad(x(0),x(T))\in C,\end{split} \tag{26}\]
where \(u\colon[0,T]\to\mathbb{R}^{m}\) is a measurable control function, \(U\subset\mathbb{R}^{m}\) is a bounded, closed and convex set and \(C\) is a compact set of \(\mathbb{R}^{2s}\). Here \(\varphi\) is a normal integrand (see, e.g., [24, Definition 14.27]).
In order to obtain the existence of solutions to the optimal control problem, we strengthen the condition \((\mathcal{H}^{F}_{3})\) to the following one:
* \((\mathcal{H}^{F_{*}}_{3})\) There exist two positive integrable functions \(c,d\) such that \(\|F(t,x)\|\leq c(t)\|x\|+d(t)\) for all \(x\in\mathcal{H}\) and a.e. \(t\in[0,T]\).
The latter condition, together with assumptions of Theorem 4.3, ensures that any solution of (10) satisfies the bounds provided in (2).
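To indicate how such a priori bounds arise, here is a minimal sketch, assuming the linear-growth estimate \(\|g(t,s,x)\|\leq\sigma(t,s)(1+\|x\|)\) implicit in \((\mathcal{H}^{g})\); the decomposition is ours, and the exact constants appearing in (2) may differ. Any solution satisfies
\[\|x(t)\|\leq\|x_{0}\|+\int_{0}^{t}\Big(c(s)\|x(s)\|+d(s)+\int_{0}^{s}\sigma(s,\tau)\big(1+\|x(\tau)\|\big)d\tau\Big)ds,\]
so Gronwall's inequality yields a bound on \(\sup_{t\in[0,T]}\|x(t)\|\) that depends only on \(\|x_{0}\|\), \(c\), \(d\) and \(\sigma\).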
The following result asserts the existence of optimal controls for the optimization problem (26). Its main feature is that convexity in the dynamics is not required for the existence of solutions.
**Theorem 6.1**.: _Suppose, in addition to \((\mathcal{H}^{F}_{1})\), \((\mathcal{H}^{F}_{2})\), and \((\mathcal{H}^{F_{*}}_{3})\), that the following assumptions hold:_
* _For every_ \(u\in U\)_, the map_ \((t,s,x)\mapsto g(t,s,x,u)\) _satisfies_ \((\mathcal{H}^{g})\)_._
* _For all_ \(t\in[0,T]\) _the integral mapping_ \[\mathcal{G}(x,u)(t):=\int_{0}^{t}g(t,s,x(s),u(s))ds \tag{27}\] _is norm-weak continuous, i.e.,_ \(\lim_{n\to+\infty}\mathcal{G}(x_{n},u_{n})(t)=\mathcal{G}(x,u)(t)\) _for all_ \(t\in[0,T]\)_, whenever_ \(x_{n}\to x\) _in_ \(C([0,T];\mathbb{R}^{s})\) _and_ \(u_{n}\rightharpoonup u\) _in_ \(L^{2}([0,T];\mathbb{R}^{m})\)_._
* _The terminal cost function_ \(\ell\) _is lsc._
* _The running cost function_ \(\varphi\) _is a normal integrand such that_ \(\varphi(s,x,\cdot,\cdot)\) _is convex for all_ \((s,x)\in[0,T]\times\mathbb{R}^{s}\)_. Moreover, there exists_ \(\eta\in L^{1}([0,T],\mathbb{R})\) _such that_ \[\varphi(s,x,y,u)\geq\eta(s),\text{ for all }(s,x,y,u)\in[0,T]\times\mathbb{R}^{2s}\times U.\]
_Then, if the value of the optimal control problem (26) is finite, it admits at least one optimal solution._
Proof.: Let us consider a minimizing sequence \((x_{k},u_{k})\) of the optimization problem (26). Then, by virtue of \((\mathcal{H}_{3}^{F_{*}})\) and Gronwall's inequality, we can show that the sequence \((x_{k})\) satisfies condition (2). Moreover, by the compactness of \(C\), we can assume that \((x_{k}(0),x_{k}(T))\to(x_{0},x_{T})\in C\). Then, we can suppose (up to a subsequence) that \(x_{k}\) satisfies the conclusion of Lemma 2.2 for some absolutely continuous function \(x\) with \((x(0),x(T))=(x_{0},x_{T})\). Furthermore, since \(U\) is bounded, the sequence \((u_{k})\) is bounded in \(L^{2}([0,T];\mathbb{R}^{m})\). Thus, we can assume that \(u_{k}\rightharpoonup u\) in \(L^{2}([0,T];\mathbb{R}^{m})\). Hence, by using arguments similar to those given in (17), we obtain that \((x,u)\) satisfies the differential inclusion in (26). Finally, by the lower semicontinuity of \(\ell\) and the lower semicontinuity result from [4, Theorem 2.1], we conclude that \((x,u)\) attains the minimum of the control problem (26), which ends the proof.
The following result provides an example where the integral mapping (27) is norm-weak continuous.
**Example 6.2** (Linear control variable).: _Let us consider the map \(g(s,x,u):=A(s,x)u+b(s,x)\), where \(A\colon[0,T]\times\mathbb{R}^{s}\to\mathcal{M}_{s\times m}\) and \(b\colon[0,T]\times\mathbb{R}^{s}\to\mathbb{R}^{s}\) are continuous functions. Then, the integral mapping (27) is norm-weak continuous._
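To see why, here is a minimal sketch of the verification, under the stated continuity of \(A\) and \(b\); the decomposition below is our own addition. For \(x_{n}\to x\) in \(C([0,T];\mathbb{R}^{s})\) and \(u_{n}\rightharpoonup u\) in \(L^{2}([0,T];\mathbb{R}^{m})\),
\[\mathcal{G}(x_{n},u_{n})(t)-\mathcal{G}(x,u)(t)=\int_{0}^{t}\big(A(s,x_{n}(s))-A(s,x(s))\big)u_{n}(s)ds+\int_{0}^{t}A(s,x(s))\big(u_{n}(s)-u(s)\big)ds+\int_{0}^{t}\big(b(s,x_{n}(s))-b(s,x(s))\big)ds.\]
The first and third integrals tend to \(0\) because \(x_{n}\to x\) uniformly, \(A\) and \(b\) are uniformly continuous on compact sets, and \((u_{n})_{n}\) is bounded in \(L^{2}\) (Cauchy-Schwarz for the first integral); the second integral tends to \(0\) because each row of \(s\mapsto A(s,x(s))\mathbb{1}_{[0,t]}(s)\) belongs to \(L^{2}\) and \(u_{n}\rightharpoonup u\).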
|
2308.09378 | High-precision Electric Dipole Polarizabilities of the Clock States in
$^{133}$Cs | We have calculated static and dynamic electric dipole (E1) polarizabilities
($\alpha_F$) of the hyperfine levels of the clock transition precisely in
$^{133}$Cs. The scalar, vector, and tensor components of $\alpha_F$ are
estimated by expressing as sum of valence, core, core-core, core-valence, and
valence-core contributions that are arising from the virtual and core
intermediate states. The dominant valence contributions are estimated by
combining a large number of matrix elements of the E1 and magnetic dipole
hyperfine interaction operators from the relativistic coupled-cluster method
and measurements. For an insightful understanding of their accurate
determination, we explicitly give intermediate contributions in different forms
to the above quantities. Very good agreement of the static values for the
scalar and tensor components with their experimental results suggest that our
estimated dynamic $\alpha_F$ values can be used reliably to estimate the Stark
shifts while conducting high-precision measurements at the respective laser
frequency using the clock states of $^{133}$Cs. | A. Chakraborty, B. K. Sahoo | 2023-08-18T08:13:56Z | http://arxiv.org/abs/2308.09378v2 | # High-precision Electric Dipole Polarizabilities of the Clock States in \({}^{133}\)Cs
###### Abstract
We have calculated static and dynamic electric dipole (E1) polarizabilities (\(\alpha_{F}\)) of the hyperfine levels of the clock transition precisely in \({}^{133}\)Cs. The scalar, vector, and tensor components of \(\alpha_{F}\) are estimated by expressing as sum of valence, core, core-core, core-valence, and valence-core contributions that are arising from the virtual and core intermediate states. The dominant valence contributions are estimated by combining a large number of matrix elements of the E1 and magnetic dipole hyperfine interaction operators from the relativistic coupled-cluster method and measurements. For an insightful understanding of their accurate determination, we explicitly give intermediate contributions in different forms to the above quantities. Very good agreement of the static values for the scalar and tensor components with their experimental results suggest that our estimated dynamic \(\alpha_{F}\) values can be used reliably to estimate the Stark shifts while conducting high-precision measurements at the respective laser frequency using the clock states of \({}^{133}\)Cs.
## I Introduction
Precise estimations of electric dipole polarizabilities (\(\alpha_{d}\)) are useful for various high-precision experiments including atom trapping, atomic clocks, and quantum computers [1; 2; 3; 4; 5]. Among all atoms in the periodic table, alkali atoms are special, as they are used in many laboratories to carry out high-precision experiments [6; 7]. Atomic clocks based on the Rb and Cs atoms are frequently used for both laboratory and space applications [8]. It is also a well-known fact that the \({}^{133}\)Cs atomic clock serves as the primary time and frequency standard [9; 10]. In this clock, the microwave transition frequency between the hyperfine levels \(F=3\) and \(F=4\) of the ground state of \({}^{133}\)Cs is used. Since the accuracy of a \({}^{133}\)Cs microwave clock is limited by large systematic effects [11; 12], precise determination of the electric dipole (E1) polarizabilities of the clock states is quite useful for estimating the Stark shifts.
The other promising application of the transition between the \(F=3\) and \(F=4\) ground-state hyperfine levels (\(|FM_{F}\rangle\)) of \({}^{133}\)Cs is to use them as qubits for quantum computers. To realize reliable quantum control and ensure high fidelity for these applications in quantum science and technology, it is imperative to minimize decoherence in the single trapped atoms [13]. When an atomic qubit is encoded as a superposition of two hyperfine levels within the ground states of an alkali-metal atom, it encounters imbalanced light shifts induced by the trapping laser field [14; 15; 16; 17]. Consequently, a thorough analysis of systematic effects is required to understand the influence of the trapping laser beam's wavelength, polarization, and intensity on the energy levels.
From the point of view of studying parity violation (PV) effects in atomic systems, \({}^{133}\)Cs is also unique, as it is the only atom in which the PV-induced electric dipole amplitude between the \(|FM_{F}\rangle\) levels of the ground and 7S states has been measured to sub-one-percent accuracy [18]. This has implications for probing physics beyond the Standard Model of particle physics. In fact, measuring the PV amplitude of the transition between the \(F=3\) and \(F=4\) hyperfine levels of the ground state in \({}^{133}\)Cs would be of particular interest for probing the spin-dependent PV effect. Such an experiment would also require precise values of the E1 polarizabilities of the involved hyperfine levels to estimate the systematic effects.
In this paper, we focus on the accurate determination of the E1 polarizabilities (\(\alpha_{F,M_{F}}\)) of the \(|FM_{F}\rangle\) levels of the ground state in \({}^{133}\)Cs. The differential shift in the clock transition between these hyperfine levels due to background blackbody radiation (BBR) has recently sparked interest in estimating the \(\alpha_{F,M_{F}}\) values accurately [12]. Several research groups have extensively investigated the impact of a static electric field on the hyperfine levels of the ground state of the \({}^{133}\)Cs atom [19; 20; 21; 22; 23]. However, there are discrepancies of about 10% among the differential scalar E1 polarizability values calculated with various methods. This discrepancy is further compounded by variations observed in two different experimental results [24; 25]. Subsequently, it was claimed that these inconsistencies could be attributed to the neglected contributions of intermediate continuum states in certain calculations [23]. A similar discrepancy was also seen between the theoretical and experimental findings for the tensor component of \(\alpha_{F,M_{F}}\) [16]. However, it was later discovered that there was a sign mistake in the theoretical formulation [26; 27]. Later, Dzuba et al. utilized the time-dependent Hartree-Fock (TDHF) method (equivalent to the random phase approximation (RPA)) in conjunction with Brueckner orbitals (BO) to estimate the tensor polarizability, incorporating the corrected formula for the hyperfine levels [28]. Even then, the obtained TDHF result for the \(F=4\) level deviated from the experimental value by approximately 30% [29]. Such substantial discrepancies in both the scalar and tensor components of the static \(\alpha_{F,M_{F}}\) values in the ground state of \({}^{133}\)Cs demand further investigation of these quantities.
We carry out analyses of both the static and dynamic \(\alpha_{F,M_{F}}\) values of the hyperfine levels of the ground state of the \({}^{133}\)Cs atom. In particular, we have determined the dynamic \(\alpha_{F,M_{F}}\) values at two wavelengths (\(\lambda=2\pi c/\omega\), with the speed of light \(c\) and angular frequency \(\omega\)), namely 936 nm and 1064 nm, for two specific reasons. The \(\lambda=936\) nm value aligns closely with the magic wavelength for the \(6S_{1/2}\) - \(6P_{3/2}\) transition, which is widely employed for effective laser cooling of \({}^{133}\)Cs atoms [30; 31]. However, the available powers of lasers around 936 nm are limited to a few watts (W). Conversely, the ytterbium-doped fiber laser at \(\lambda=1064\) nm offers more than 50 W of power and is frequently used in laboratories. First, we verify the accuracy of the static \(\alpha_{F,M_{F}}\) values by comparing with the available experimental and other theoretical results. Based on these analyses, the accuracy of the dynamic \(\alpha_{F,M_{F}}\) values is gauged.
## II Theory
A uniform oscillating electric field with angular frequency \(\omega\) at a given time \(t\) is given by
\[\vec{\mathcal{E}}_{L}(\omega,t)=\frac{1}{2}|\mathcal{E}_{0}|\vec{ \varepsilon}e^{-i\omega t}+\text{c.c.}, \tag{1}\]
where \(|\mathcal{E}_{0}|\) is the strength of the field, \(\vec{\varepsilon}\) is the polarization vector, and c.c. denotes the complex conjugate of the preceding term. The interaction of \(\vec{\mathcal{E}}_{L}(\omega,t)\) with an atom can be described by the interaction Hamiltonian
\[H_{int} = -\vec{\mathcal{E}}_{L}(\omega,t)\cdot\vec{D} \tag{2}\] \[= -\frac{|\mathcal{E}_{0}|}{2}\left[\vec{\varepsilon}\cdot\vec{D}e ^{-i\omega t}+\vec{\varepsilon}^{*}\cdot\vec{D}e^{i\omega t}\right],\]
where \(\vec{D}\) is the E1 operator. Since \(H_{int}\) is an odd-parity operator, the first-order shift to the energy levels of atomic states vanishes, and the leading second-order energy shift in powers of \(|\mathcal{E}_{0}|\) of a hyperfine level \(|FM_{F}\rangle\) can be given by
\[\Delta E_{\text{light}}=-\frac{1}{2}\alpha_{F,M_{F}}(\omega) \mathcal{E}_{L}^{2}(\omega), \tag{3}\]
where \(\alpha_{F,M_{F}}(\omega)\) is known as the dynamic E1 polarizability; it reduces to the static E1 polarizability when \(\omega=0\). Knowledge of \(\alpha_{F,M_{F}}(\omega)\) is imperative for estimating \(\Delta E_{\text{light}}\) at arbitrary values of \(|\mathcal{E}_{0}|\) and \(\omega\). \(\alpha_{F,M_{F}}(\omega)\) can be evaluated as the expectation value of an effective operator
\[D_{eff}^{(2)} = \left[\vec{\varepsilon}^{*}\cdot\vec{D}R_{F}^{+}\vec{\varepsilon} \cdot\vec{D}+\vec{\varepsilon}\cdot\vec{D}R_{F}^{-}\vec{\varepsilon}^{*}\cdot \vec{D}\right], \tag{4}\]
where \(R_{F}^{\pm}\) are the resolvent operators, given by
\[R_{F}^{\pm} = \sum_{F^{\prime},M_{F^{\prime}}}\frac{|F^{\prime}M_{F^{\prime}} \rangle\langle F^{\prime}M_{F^{\prime}}|}{E_{F}-E_{F^{\prime}}\pm\omega}. \tag{5}\]
It is possible to separate the polarization vectors from the electronic operators in Eq. (4) by expressing
\[\vec{\varepsilon}^{*}\cdot\vec{D}R_{F}^{\pm}\vec{\varepsilon}\cdot\vec{D}=\sum_{L=0,1,2}(-1)^{L}\left(\vec{\varepsilon}^{*}\otimes\vec{\varepsilon}\right)^{L}\cdot\left(\vec{D}\otimes R_{F}^{\pm}\vec{D}\right)^{L}. \tag{6}\]
Thus, the effective operator is given by
\[D_{eff}^{(2)} = \sum_{L=0,1,2}(-1)^{L}\left(\vec{\varepsilon}^{*}\otimes\vec{ \varepsilon}\right)^{L}\cdot\left[\left(\vec{D}\otimes R_{F}^{+}\vec{D}\right) ^{L}\right. \tag{7}\] \[\left.+(-1)^{L}\left(\vec{D}\otimes R_{F}^{-}\vec{D}\right)^{L} \right].\]
Figure 2: Goldstone diagrams representing the top contribution to the third-order hyperfine interaction induced E1 polarizability. Each diagram contains a hyperfine interaction \(\mathbf{T}_{\mathbf{J}}^{(1)}\) (shown by curly line) in addition to two interactions by the E1 operator \(D\) (shown by horizontal line).
Figure 1: Goldstone diagrams representing the DHF contributions to the second-order E1 polarizability of the ground state of \({}^{133}\)Cs. Here, double arrows represent valence orbital (v), single arrows going down mean occupied orbitals (a), and single arrows going up mean virtual orbitals (p). The E1 operator \(D\) is represented by the horizontal line.
Using this, we get
\[\alpha_{F,M_{F}} = -\langle FM_{F}|D^{(2)}_{eff}|FM_{F}\rangle = -\sum_{L=0,1,2}\sum_{Q=-L}^{L}(-1)^{L-Q}\left(\vec{\varepsilon}^{\ast}\otimes\vec{\varepsilon}\right)^{L}_{Q}\left[\langle FM_{F}|\left(\vec{D}\otimes R^{+}_{F}\vec{D}\right)^{L}_{Q}|FM_{F}\rangle+(-1)^{L}\langle FM_{F}|\left(\vec{D}\otimes R^{-}_{F}\vec{D}\right)^{L}_{Q}|FM_{F}\rangle\right]. \tag{8}\]
Using the polarization-dependent factors, we can rewrite the above expression as
\[\alpha_{F,M_{F}} = \alpha^{S}_{F}+{\cal A}\frac{M_{F}}{2F}\cos\theta_{k}\alpha^{A}_{F} \tag{9}\] \[+\frac{3M_{F}^{2}-F(F+1)}{F(2F-1)}\frac{3\cos^{2}\theta_{p}-1}{2 }\alpha^{T}_{F},\]
where \(\theta_{k}\) is the angle between the wave vector and the quantization axis, \(\theta_{p}\) is the polarization angle, and \({\cal A}\) denotes the degree of polarization. Again, \(\alpha^{S}_{F}\), \(\alpha^{A}_{F}\), and \(\alpha^{T}_{F}\) are known as the scalar, axial-vector, and tensor components of \(\alpha_{F,M_{F}}\), which are \(M_{F}\)-independent and are given by
\[\alpha^{S}_{F}(\omega) = -\frac{1}{3(2F+1)}\sum_{F^{\prime}}|\langle F||{\bf D}||F^{ \prime}\rangle|^{2} \tag{10}\] \[\times\left[\frac{1}{E_{F}-E_{F^{\prime}}+\omega}+\frac{1}{E_{F} -E_{F^{\prime}}-\omega}\right],\] \[\alpha^{A}_{F}(\omega) = -\sqrt{\frac{6F}{(F+1)(2F+1)}}\sum_{F^{\prime}}(-1)^{F+F^{\prime }+1}\] (11) \[\times\left\{\begin{array}{ccc}F&1&F\\ 1&F^{\prime}&1\end{array}\right\}|\langle F||{\bf D}||F^{\prime}\rangle|^{2}\] \[\times\left[\frac{1}{E_{F}-E_{F^{\prime}}+\omega}-\frac{1}{E_{F} -E_{F^{\prime}}-\omega}\right],\]
and
\[\alpha^{T}_{F}(\omega) = 2\sqrt{\frac{5F(2F-1)}{6(F+1)(2F+3)(2F+1)}}\sum_{F^{\prime}}(-1)^{F+F^{\prime}+1}\left\{\begin{array}{ccc}F&2&F\\ 1&F^{\prime}&1\end{array}\right\}|\langle F||{\bf D}||F^{\prime}\rangle|^{2}\left[\frac{1}{E_{F}-E_{F^{\prime}}+\omega}+\frac{1}{E_{F}-E_{F^{\prime}}-\omega}\right]. \tag{12}\]
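For readers who wish to combine the three components numerically, the angular decomposition of Eq. (9) is straightforward to code. The sketch below is our own illustration with placeholder component values (all three components must be supplied in the same units); it is not part of the original analysis:

```python
import numpy as np

def alpha_total(alpha_s, alpha_a, alpha_t, F, M_F, A, theta_k, theta_p):
    """Total polarizability of |F, M_F> following the decomposition of Eq. (9).

    alpha_s, alpha_a, alpha_t : scalar, axial-vector, and tensor components
                                (any consistent unit system);
    A                         : degree of polarization of the light;
    theta_k, theta_p          : wave-vector and polarization angles (rad).
    """
    vector_term = A * M_F / (2.0 * F) * np.cos(theta_k) * alpha_a
    tensor_term = ((3.0 * M_F**2 - F * (F + 1.0)) / (F * (2.0 * F - 1.0))
                   * (3.0 * np.cos(theta_p)**2 - 1.0) / 2.0 * alpha_t)
    return alpha_s + vector_term + tensor_term

# Example: for purely linear polarization (A = 0) aligned with the
# quantization axis (theta_p = 0), only the scalar and tensor terms survive.
print(alpha_total(alpha_s=1.0, alpha_a=0.5, alpha_t=0.1,
                  F=4, M_F=4, A=0.0, theta_k=0.0, theta_p=0.0))
```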
It is cumbersome to deal with the wave functions in the hyperfine coordinate system when evaluating the above quantities. To address this, we can express the \(|FM_{F}\rangle\) levels, to a good approximation that retains up to the first-order perturbation, as
\[|FM_{F}\rangle = |IM_{I};JM_{J}\rangle+\sum_{J^{\prime},M_{J^{\prime}}}|IM_{I};J^{ \prime}M_{J^{\prime}}\rangle \tag{13}\] \[\times\frac{\langle IM_{I};J^{\prime}M_{J^{\prime}}|H_{hf}|IM_{I} ;JM_{J}\rangle}{E_{J}-E_{J^{\prime}}},\]
where \(I\) is the nuclear spin with azimuthal component \(M_{I}\) and \(J\) is the total angular momentum of the atomic state with azimuthal component \(M_{J}\). In the above expression, \(H_{hf}\) denotes the scalar hyperfine interaction Hamiltonian, which can be defined as
\[H_{hf}=\sum_{k}T^{(k)}_{J}\cdot T^{(k)}_{I}, \tag{14}\]
where \(T^{(k)}_{J}\) and \(T^{(k)}_{I}\) are defined as the electronic and nuclear components, respectively, of \(H_{hf}\) with rank \(k\) of the multipole expansion with \(k=1,3,5\cdots\) denoting contributions from the magnetic multipoles while \(k=2,4,6\cdots\) give contributions from the electric multipoles. For the present interest, we consider only the dominant \(k=1\) term in the calculation corresponding to magnetic dipole (M1) hyperfine interaction as contributions from the other multipoles to these quantities are negligibly small [27; 28]. The \(\langle IM_{I};J^{\prime}M_{J^{\prime}}|H_{hf}|IM_{I};JM_{J}\rangle\)
Figure 3: Goldstone diagrams representing the center part of the third-order hyperfine interaction induced E1 polarizability. All notations are same with the previous two figures.
Figure 4: Goldstone diagrams representing the normalization part of the third-order hyperfine interaction induced E1 polarizability. This has similarity with the diagrams representing the second-order E1 polarizability.
The \(\langle IM_{I};J^{\prime}M_{J^{\prime}}|H_{hf}|IM_{I};JM_{J}\rangle\) matrix element can then be evaluated using the relation
\[\langle IM_{I};J^{\prime}M_{J^{\prime}}|T^{(1)}_{J}\cdot T^{(1)}_{I}| IM_{I};JM_{J}\rangle=(-1)^{I+J+F}\] \[\times\left\{\begin{array}{ccc}J^{\prime}&J&1\\ I&I&F\end{array}\right\}\langle J^{\prime}||\mathbf{T}^{(1)}_{\mathbf{J}}||J \rangle\langle I||\mathbf{T}^{(1)}_{\mathbf{I}}||I\rangle, \tag{15}\]
in which the nuclear coordinate part is converted to a factor as
\[\langle I||\mathbf{T}^{(1)}_{\mathbf{I}}||I\rangle=\sqrt{I(I+1)(2I+1)}g_{I}\mu_ {N}, \tag{16}\]
with \(g_{I}=\mu_{I}/I\) for the nuclear M1 moment \(\mu_{I}\) and the nuclear magneton \(\mu_{N}\).
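As a quick numerical illustration (our own arithmetic, not part of the derivation), the nuclear factor of Eq. (16) for \({}^{133}\)Cs with \(I=7/2\) and the \(g_{I}\) value quoted later in Sec. IV evaluates to roughly \(8.28\,\mu_{N}\):

```python
from math import sqrt

I_spin = 3.5            # nuclear spin I = 7/2 of 133Cs
g_I = 0.737885714       # g_I = mu_I / I, value from Ref. [41] used in Sec. IV

# <I||T_I^(1)||I> = sqrt(I(I+1)(2I+1)) g_I mu_N, Eq. (16), in units of mu_N
print(sqrt(I_spin * (I_spin + 1) * (2 * I_spin + 1)) * g_I)  # ~8.283
```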
After substituting all the relations, we can express \(\alpha_{F}^{S}\), \(\alpha_{F}^{A}\) and \(\alpha_{F}^{T}\) components as
\[\alpha_{F}^{S} = \alpha_{F}^{S(2,0)}+\alpha_{F}^{S(2,1)}, \tag{17}\]
\[\alpha_{F}^{A} = \alpha_{F}^{A(2,0)}+\alpha_{F}^{A(2,1)}, \tag{18}\]
and
\[\alpha_{F}^{T} = \alpha_{F}^{T(2,0)}+\alpha_{F}^{T(2,1)}, \tag{19}\]
where the superscript \((m,n)\) on \(\alpha_{F}^{S/A/T}\) indicates that a component includes \(m\) orders of the E1 interaction and \(n\) orders of the M1 hyperfine interaction. The hyperfine-interaction-independent components can now be evaluated conveniently by using the relations
\[\alpha_{F}^{S(2,0)}(\omega) = -\frac{1}{3(2J+1)}\sum_{J^{\prime}}|\langle J||\mathbf{D}||J^{ \prime}\rangle|^{2} \tag{20}\] \[\times\left[\frac{1}{E_{J}-E_{J^{\prime}}+\omega}+\frac{1}{E_{J} -E_{J^{\prime}}-\omega}\right]\] \[\equiv \alpha_{J}^{S}(\omega),\]
\[\alpha_{F}^{A(2,0)}(\omega) = -\sqrt{\frac{6F(2F+1)}{(F+1)}}\left\{\begin{array}{ccc}J&F&I\\ F&J&1\end{array}\right\} \tag{21}\] \[\times\sum_{J^{\prime}}(-1)^{F+J^{\prime}+I+2J}\left\{\begin{array} []{ccc}1&1&1\\ J&J&J^{\prime}\end{array}\right\}\] \[\times\left[\frac{|\langle J||\mathbf{D}||J^{\prime}\rangle|^{2}}{ E_{J}-E_{J^{\prime}}+\omega}-\frac{|\langle J||\mathbf{D}||J^{\prime}\rangle|^{2}}{ E_{J}-E_{J^{\prime}}-\omega}\right]\]
\[= \sqrt{\frac{F(2F+1)(J+1)(2J+1)}{J(F+1)}}\] \[\times(-1)^{I+J+F+1}\left\{\begin{array}{ccc}J&F&I\\ F&J&1\end{array}\right\}\alpha_{J}^{A}(\omega),\]
and
\[\alpha_{F}^{T(2,0)}(\omega) = -\sqrt{\frac{20F(2F-1)(2F+1)}{6(F+1)(2F+3)}}\left\{\begin{array}[] {ccc}J&F&I\\ F&J&2\end{array}\right\} \tag{22}\] \[\times\sum_{J^{\prime}}(-1)^{I+F+J^{\prime}+2J}\left\{\begin{array} []{ccc}1&1&2\\ J&J&J^{\prime}\end{array}\right\}\] \[\times\left[\frac{|\langle J||\mathbf{D}||J^{\prime}\rangle|^{2}}{ E_{J}-E_{J^{\prime}}+\omega}+\frac{|\langle J||\mathbf{D}||J^{\prime}\rangle|^{2}}{ E_{J}-E_{J^{\prime}}-\omega}\right]\]
\[= -\sqrt{\frac{(J+1)(2J+3)(2J+1)F(2F-1)}{J(2J-1)(F+1)(2F+3)(2F+1)}}\] \[\times(2F+1)(-1)^{I+J+F+1}\left\{\begin{array}{ccc}J&F&I\\ F&J&2\end{array}\right\}\] \[\times\quad\alpha_{J}^{T}(\omega),\]
where \(\alpha_{J}^{S}\), \(\alpha_{J}^{A}\) and \(\alpha_{J}^{T}\) are nothing but the components of the atomic-state E1 polarizabilities, whose evaluation depends only on the electronic wave functions and energies. It follows from the angular momentum selection rules that \(\alpha_{J}^{T}\) does not contribute for states with \(J<3/2\).
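The purely geometric prefactors that map \(\alpha_{J}^{S/A/T}\) onto the hyperfine components in Eqs. (20)-(22) can be checked independently with a computer algebra system. A small sketch using SymPy follows; the script is our own and is not taken from the paper:

```python
from sympy import Rational, sqrt, simplify
from sympy.physics.wigner import wigner_6j

I_nuc = Rational(7, 2)   # nuclear spin of 133Cs
J_el = Rational(1, 2)    # electronic angular momentum of the 6S_1/2 state

for F in (3, 4):
    # Geometric prefactor relating alpha_F^{A(2,0)} to alpha_J^A, Eq. (21)
    prefactor = (sqrt(F * (2 * F + 1) * (J_el + 1) * (2 * J_el + 1)
                      / (J_el * (F + 1)))
                 * (-1) ** (I_nuc + J_el + F + 1)
                 * wigner_6j(J_el, F, I_nuc, F, J_el, 1))
    print(F, simplify(prefactor))

# The scalar component (Eq. (20)) carries no extra geometric factor, and the
# tensor prefactor is not needed here since alpha_J^T vanishes for J = 1/2.
```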
\begin{table}
\begin{tabular}{l l l l l l} \hline \hline Method & \multicolumn{3}{c}{\(\alpha_{GS}^{S}\) values} & \multicolumn{2}{c}{\(\alpha_{GS}^{A}\) values} \\ \cline{2-4} \cline{5-6} & \(\lambda=\infty\) & \(\lambda=936\) nm & \(\lambda=1064\) nm & \(\lambda=936\) nm & \(\lambda=1064\) nm \\ \hline This work & & & & & \\ DHF & 662.6 & \(-2303.2\) & 7945.1 & \(-459.7\) & 20772.2 \\ RCCSD & 404.8 & 2684.0 & 1138.7 & \(-1300.8\) & \(-196.8\) \\ RCCSDT & 400.0 & 3094.3 & 1164.4 & \(-1819.3\) & \(-206.3\) \\ Final & \(\mathbf{401.0(6)}\) & \(\mathbf{3022.1(40)}\) & \(\mathbf{1170.8(16)}\) & \(\mathbf{-1599.5(59)}\) & \(\mathbf{-201.8(18)}\) \\ \hline Others & & & & & \\ Theory [42] & 399.8 & & & & \\ Theory [43] & 403.9 & & & & \\ Theory [44] & 399.9(1.9) & & & & \\ Experiment [45] & 401.00(6) & & & & \\ \hline \hline \end{tabular}
\end{table}
Table 1: Calculated values of the second-order static and dynamic E1 polarizabilities (in a.u.) of the ground state of the Cs atom.
Proceeding in a similar manner, we can express [32; 33; 23]
\[\alpha_{F}^{\mathcal{K}(2,1)}(\omega)=W_{F}^{\mathcal{K}}\left[2T_{F}^{\mathcal{K} }(\omega)+C_{F}^{\mathcal{K}}(\omega)+R_{F}^{\mathcal{K}}(\omega)\right], \tag{23}\]
where the symbol \(\mathcal{K}\) denotes scalar, axial-vector, and tensor components for the integer values \(K=0\), 1 and 2, respectively, as used below. Here, each component is divided into contributions from three different terms defined as top (\(T_{F}^{\mathcal{K}}\)), center (\(C_{F}^{\mathcal{K}}\)), and residual (or normalization) (\(R_{F}^{\mathcal{K}}\)) that are given by
\[T_{F}^{\mathcal{K}}(\omega) = \sqrt{(2K+1)I(I+1)(2I+1)}g_{I}\mu_{N} \tag{24}\] \[\times \sum_{J^{\prime},J^{\prime\prime}}\left\{\begin{array}{ccc}I&I &1\\ J&J^{\prime\prime}&F\end{array}\right\}\Biggl{\{}\begin{array}{ccc}K&J^{\prime \prime}&J\\ I&F&F\end{array}\Biggr{\}}\] \[\times \left\{\begin{array}{ccc}K&J^{\prime\prime}&J\\ J^{\prime}&1&1\end{array}\right\}\] \[\times (-1)^{J+J^{\prime}}\frac{\langle J||\mathbf{T}_{J}^{(1)}||J^{ \prime\prime}\rangle\langle J^{\prime\prime}||\mathbf{D}||J^{\prime}\rangle \langle J^{\prime}||\mathbf{D}||J\rangle}{(E_{J}-E_{J^{\prime\prime}})}\] \[\times \left[\frac{1}{(E_{J}-E_{J^{\prime}}+\omega)}+\frac{(-1)^{K}}{( E_{J}-E_{J^{\prime}}-\omega)}\right],\]
\[C_{F}^{\mathcal{K}}(\omega) = \sqrt{(2K+1)I(I+1)(2I+1)}g_{I}\mu_{N} \tag{25}\] \[\times \sum_{J^{\prime},J^{\prime\prime}}\sum_{L}\left\{\begin{array}[] {ccc}F&K&F\\ J&1&J^{\prime\prime}\\ I&1&L\end{array}\right\}\Biggl{\{}\begin{array}{ccc}I&J&F\\ I&J^{\prime}&J^{\prime\prime}\\ I&1&L\end{array}\right\}\] \[\times (-1)^{I+K-F+J}\] \[\times \langle J||\mathbf{D}||J^{\prime\prime}\rangle\langle J^{\prime \prime}||\mathbf{T}_{J}^{(1)}||J^{\prime}\rangle\langle J^{\prime}||\mathbf{D }||J\rangle\] \[\times \left[\frac{1}{(E_{J}-E_{J^{\prime}}+\omega)^{2}}+\frac{(-1)^{K} }{(E_{J}-E_{J^{\prime}}-\omega)^{2}}\right].\]
The expression for the residual term \(R_{F}^{\mathcal{K}}\) has a form similar to that of the second-order E1 polarizability, as discussed below. Also, the pre-angular factors are given by
\[W_{F}^{S} = \sqrt{\frac{(2F+1)}{3}}, \tag{27}\] \[W_{F}^{A} = -\sqrt{\frac{2F(2F+1)}{(F+1)}}, \tag{28}\]
\begin{table}
\begin{tabular}{l l l l l l l l} & & \multicolumn{3}{c}{\(F=3\)} & \multicolumn{3}{c}{\(F=4\)} \\ \cline{3-5} \cline{6-8} Quantity & Method & \(\lambda=\infty\) & \(\lambda=936\) nm & \(\lambda=1064\) nm & \(\lambda=\infty\) & \(\lambda=936\) nm & \(\lambda=1064\) nm \\ \hline \(\alpha_{F}^{S(2,1)}\) & DHF & \(-3.1420\) & \(-49.5027\) & \(-2381.2965\) & \(2.4423\) & \(38.5007\) & \(1852.1969\) \\ & RCCSD & \(-2.5706\) & \(-153.5968\) & \(-26.5174\) & \(1.9993\) & \(119.4880\) & \(8.3956\) \\ & RCCSDT & \(-2.5586\) & \(-225.2741\) & \(-25.3313\) & \(1.9898\) & \(175.2118\) & \(19.6881\) \\ & Final & \(\mathbf{-2.559(11)}\) & \(\mathbf{-201.1(17)}\) & \(\mathbf{-25.3(13)}\) & \(\mathbf{1.990(10)}\) & \(\mathbf{156.4(14)}\) & \(\mathbf{19.7(10)}\) \\ & TDHF+BO [28] & \(-2.5419\) & & & \(1.9770\) & & \\ \hline \(\alpha_{F}^{A(2,1)}\) & DHF & \(0.0\) & \(8.6958\) & \(561.5658\) & \(0.0\) & \(9.0179\) & \(582.5113\) \\ & RCCSD & \(0.0\) & \(-132.2379\) & \(-9.0495\) & \(0.0\) & \(-137.1366\) & \(-9.2136\) \\ & RCCSDT & \(0.0\) & \(-238.6758\) & \(-11.0932\) & \(0.0\) & \(-247.5169\) & \(-11.5043\) \\ & Final & \(0.0\) & \(\mathbf{-185.59(51)}\) & \(\mathbf{-9.70(7)}\) & \(0.0\) & \(\mathbf{-192.47(53)}\) & \(\mathbf{-10.06(7)}\) \\ \hline \(\alpha_{F}^{T(2,1)}\) & DHF & \(0.0344\) & \(0.4310\) & \(25.8040\) & \(-0.0639\) & \(-0.8044\) & \(-48.1693\) \\ & RCCSD & \(0.0183\) & \(6.0153\) & \(0.4561\) & \(-0.0339\) & \(-11.2287\) & \(-0.8888\) \\ & RCCSDT & \(0.0188\) & \(10.4966\) & \(0.5508\) & \(-0.0350\) & \(-19.5937\) & \(-1.0279\) \\ & Final & \(\mathbf{0.0185(8)}\) & \(\mathbf{8.482(16)}\) & \(\mathbf{0.5084(21)}\) & \(\mathbf{-0.0342(15)}\) & \(\mathbf{-15.834(30)}\) & \(\mathbf{-0.9487(39)}\) \\ & TDHF+BO [28] & \(0.0141\) & & & \(-0.0262\) & & \\ & Semi-empirical [26] & & & & \(-0.0372(25)\) & & \\ & Experiment [29] & & & & \(-0.0334(2)_{stat}(25)_{syst}\) & & \\ \hline \hline \end{tabular}
\end{table}
Table 2: Magnetic dipole hyperfine interaction induced E1 polarizabilities (in \(10^{-10}\) Hz/(V/m)\({}^{2}\)) of the hyperfine levels of the ground state of \({}^{133}\)Cs at various \(\lambda\) values.
and
\[W_{F}^{T} = -\sqrt{\frac{2F(2F-1)(2F+1)}{3(F+1)(2F+3)}}. \tag{29}\]
## III Approaches for evaluation
As can be inferred from the above discussion, we need a large set of matrix elements of the \(D\) and \(T_{J}^{(1)}\) operators for a precise estimate of the \(\alpha_{F}\) values in \({}^{133}\)Cs. Since the wave functions of the atomic states of \({}^{133}\)Cs cannot be obtained exactly, we determine these matrix elements starting from a mean-field approximation. We use the Dirac-Hartree-Fock (DHF) approach to obtain the mean-field wave functions of the Dirac-Coulomb (DC) Hamiltonian, which in atomic units (a.u.) is given by
\[H_{DC} = \sum_{i=1}^{N_{e}}\left[c\vec{\alpha}_{D}\cdot\vec{p}_{i}+(\beta- 1)c^{2}+V_{n}(r_{i})\right]+\sum_{i>j}\frac{1}{r_{ij}},\]
where \(N_{e}\) is the number of electrons in the atom, \(\vec{\alpha}_{D}\) and \(\beta\) are the Dirac matrices, \(V_{n}(r)\) is the nuclear potential, and \(r_{ij}\) is the inter-electronic distance between the electrons located at \(r_{i}\) and \(r_{j}\). We have also included corrections due to the Breit interaction and lower-order quantum electrodynamics effects to improve the accuracy of the calculations.
To produce as many bound states as possible that have the common core \([5p^{6}]\) but differ in the valence orbital \(v\) in \({}^{133}\)Cs, we consider the \(V^{N-1}\) potential in the DHF method. In this approach, the DHF wave functions of the states of interest are denoted by
\[|\Phi_{v}\rangle=a_{v}^{\dagger}|\Phi_{0}\rangle, \tag{30}\]
where \(|\Phi_{0}\rangle\) is the DHF wave function of the closed core \([5p^{6}]\). Using these wave functions, we can determine the dominant part of the \(\alpha_{J}^{S}(\omega)\) and \(\alpha_{J}^{A}(\omega)\) values of the ground state of \({}^{133}\)Cs. In Fig. 1, we show Goldstone diagram representations of the DHF contributions to \(\alpha_{J}^{S}(\omega)\) and \(\alpha_{J}^{A}(\omega)\). Since \(D\) is a one-body operator, the DHF diagrams include contributions only from the intermediate states that are represented by single orbital excitations. Thus, we can classify these diagrams as core, core-valence, and valence orbital contributions, corresponding to Figs. 1(i), (ii), and (iii) respectively. In order to improve these calculations for precise estimations of the E1 polarizabilities, it is imperative to include electron correlation effects arising from configurations that are neglected in the DHF method. It is possible to adopt a linear response approach [34; 35] to include the electron correlation effects for carrying out _ab initio_ calculations of the above quantities. However, the accuracy of such first-principles results will be restricted by the uncertainties associated with both the calculated energies and the E1 matrix elements. To minimize uncertainties in the calculations, we use the experimental energies from the National Institute of Standards and Technology (NIST) database [36], which are known with very high accuracy. Similarly, we want to use very precise values of the E1 matrix elements, either from theory or from experiments, wherever available. First, we attempt to evaluate these E1 matrix elements using the relativistic coupled-cluster (RCC) method. Wherever we find experimental E1 values that are more accurate than our RCC results, we use the experimental results. However, it should be noted that the extracted experimental E1 values do not determine the signs of the matrix elements, which are essential in the determination of the hyperfine interaction induced E1 polarizabilities. So, we take the help of our calculated E1 matrix elements for assigning signs to the precisely known experimental E1 values. Again, contributions from the high-lying continuum orbitals to the valence part are estimated using lower-order methods and are quoted as "tail" contributions, while we list the valence contributions from the low-lying bound states as "main" contributions to distinguish them in the analyses.
In the RCC theory ansatz, the wave function of an atomic state with a closed-shell electronic configuration and a valence orbital can be expressed as [37]
\[|\Psi_{v}\rangle = e^{T}\left\{1+S_{v}\right\}|\Phi_{v}\rangle, \tag{31}\]
where \(T\) is the RCC operator that accounts for the excitations of core electrons to virtual orbitals, and \(S_{v}\) is the RCC operator that excites the valence and core orbitals together to virtual orbitals due to the correlation effects. Amplitudes of the \(T\) and \(S_{v}\) excitation operators are obtained by
\[\langle\Phi_{0}^{*}|(He^{T})_{c}|\Phi_{0}\rangle=0 \tag{32}\]
and
\[\langle\Phi_{v}^{*}|[(He^{T})_{c}-E_{v}]S_{v}|\Phi_{v}\rangle=- \langle\Phi_{v}^{*}|(He^{T})_{c}|\Phi_{v}\rangle, \tag{33}\]
where the subscript \(c\) denotes the connected terms, and the projected states with superscript \(*\) stand for the excited-state Slater determinants with respect to the respective DHF states. The energy of the exact state is given by
\[E_{v} = \langle\Phi_{v}|H_{eff}|\Phi_{v}\rangle=\langle\Phi_{v}|(He^{T}) _{c}\left\{1+S_{v}\right\}|\Phi_{v}\rangle. \tag{34}\]
We have considered singles, doubles, and triples excitations in the RCC method (RCCSDT method) by defining
\[T=T_{1}+T_{2}+T_{3} \tag{35}\]
and
\[S_{v}=S_{1v}+S_{2v}+S_{3v}, \tag{36}\]
where subscripts 1, 2, and 3 denote levels of single, double, and triple excitations respectively. Since it is challenging to include triple excitations from a large set of basis functions, we first considered only the singles and doubles excitations in the RCC method (RCCSD method)
using a sufficiently large basis set. From the analysis of the RCCSD results, we identify the most active orbitals that contribute predominantly in \({}^{133}\)Cs. Then, we allow triple excitations only from those selected orbitals in the RCCSDT method.
After obtaining the amplitudes of the RCC operators, the matrix element of a physical operator \(O\) between the \(|\Psi_{f}\rangle\) and \(|\Psi_{i}\rangle\) states is evaluated as
\[\langle O\rangle_{fi} = \frac{\langle\Psi_{f}|O|\Psi_{i}\rangle}{\sqrt{\langle\Psi_{f}| \Psi_{f}\rangle\langle\Psi_{i}|\Psi_{i}\rangle}} \tag{37}\] \[= \frac{\langle\Phi_{f}|\{S_{f}^{\dagger}+1\}\overline{O}\{1+S_{i }\}|\Phi_{i}\rangle}{\langle\Phi_{f}|\{S_{f}^{\dagger}+1\}\overline{N}\{1+S_{i }\}|\Phi_{i}\rangle},\]
where \(\overline{O}=e^{T^{\dagger}}Oe^{T}\) and \(\overline{N}=e^{T^{\dagger}}e^{T}\). Both \(\overline{O}\) and \(\overline{N}\) are non-terminating series, which are evaluated by adopting iterative procedures [38; 39; 40].
Only the valence contributions to \(\alpha_{J}^{S}(\omega)\) and \(\alpha_{J}^{A}(\omega)\) can be improved in the aforementioned approach, as only the E1 matrix elements involving the bound excited states can be evaluated using the RCC method. However, the correlation contributions involving bound core excitations to the core and core-valence Goldstone diagrams, shown as Figs. 1(i) and (ii), have to be obtained from first-principles calculations. We have employed RPA to evaluate the core and core-valence contributions to \(\alpha_{J}^{S}(\omega)\) and \(\alpha_{J}^{A}(\omega)\). In both cases, we rewrite the expressions for \(\alpha_{J}^{S}(\omega)\) and \(\alpha_{J}^{A}(\omega)\) in a general form as
\[\alpha_{J}^{K=S/A} = \langle\Phi_{0}|D|\Phi_{0}^{(\infty,1)+}\rangle+\langle\Phi_{0}| D|\Phi_{0}^{(\infty,1)-}\rangle, \tag{38}\]
where \(|\Phi_{0}^{(\infty,1)\pm}\rangle\) are the first-order perturbed wave functions for the \(\pm\omega\) values that contain core-polarization effects to all orders and one order of the dipole interaction. It should be noted that for the scalar and axial-vector components, the corresponding angular factors are to be multiplied; they are not shown explicitly in the above expression.
\begin{table}
\begin{tabular}{l l l} Transition & RCCSDT method & Experiment \\ \hline \(6S_{1/2}\)-\(6S_{1/2}\) & \(5.817[-7]\) & \(5.797[-7]\)[48] \\ \(6S_{1/2}\)-\(7S_{1/2}\) & \(2.859[-7]\) & \(2.825[-7]\)[48; 49] \\ \(6S_{1/2}\)-\(8S_{1/2}\) & \(1.795[-7]\) & \(1.790[-7]\)[48; 50] \\ \(6S_{1/2}\)-\(5D_{3/2}\) & \(-1.674[-8]\) & \\ \(6S_{1/2}\)-\(6D_{3/2}\) & \(8.770[-9]\) & \\ \(6P_{1/2}\)-\(6P_{1/2}\) & \(7.341[-8]\) & \(7.364[-8]\)[51] \\ \(6P_{1/2}\)-\(7P_{1/2}\) & \(4.143[-8]\) & \(4.187[-8]\)[51; 52] \\ \(6P_{1/2}\)-\(8P_{1/2}\) & \(2.759[-8]\) & \(2.821[-8]\)[51; 53] \\ \(6P_{1/2}\)-\(7P_{1/2}\) & \(4.143[-8]\) & \\ \(6P_{1/2}\)-\(9P_{1/2}\) & \(-1.968[-8]\) & \\ \(6P_{1/2}\)-\(6P_{3/2}\) & \(-4.394[-9]\) & \\ \(6P_{1/2}\)-\(7P_{3/2}\) & \(-2.572[-9]\) & \\ \(7P_{1/2}\)-\(7P_{1/2}\) & \(2.371[-8]\) & \(2.381[-8]\)[52] \\ \(7P_{1/2}\)-\(8P_{1/2}\) & \(1.567[-8]\) & \(1.606[-8]\)[52; 53] \\ \(7P_{1/2}\)-\(9P_{1/2}\) & \(-11.177[-9]\) & \\ \(7P_{1/2}\)-\(6P_{3/2}\) & \(-2.402[-9]\) & \\ \(7P_{1/2}\)-\(7P_{3/2}\) & \(-1.417[-9]\) & \\ \(8P_{1/2}\)-\(8P_{1/2}\) & \(10.595[-9]\) & \(10.840[-9]\)[53] \\ \(8P_{1/2}\)-\(9P_{1/2}\) & \(-7.446[-9]\) & \\ \(8P_{1/2}\)-\(6P_{3/2}\) & \(-1.610[-9]\) & \\ \(8P_{1/2}\)-\(7P_{3/2}\) & \(-9.460[-10]\) & \\ \(9P_{1/2}\)-\(9P_{1/2}\) & \(5.313[-9]\) & \\ \(6P_{3/2}\)-\(6P_{3/2}\) & \(3.874[-8]\) & \\ \(6P_{3/2}\)-\(7P_{3/2}\) & \(2.214[-8]\) & \\ \(6P_{3/2}\)-\(7P_{3/2}\) & \(1.500[-8]\) & \\ \(7P_{3/2}\)-\(7P_{3/2}\) & \(12.648[-9]\) & \\ \end{tabular}
\end{table}
Table 4: Some of the important matrix elements (in a.u.) of the \(\bf{T}_{J}^{(1)}\) operator of \({}^{133}\)Cs. Numbers appearing as \(a[b]\) should be read as \(a\times 10^{b}\). See the text for details of how the experimental values for the off-diagonal matrix elements are inferred.
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline \multirow{2}{*}{Transition} & \multirow{2}{*}{E1 matrix element} & \multicolumn{3}{c}{\(\alpha_{GS}^{S}\) values} & \multicolumn{2}{c}{\(\alpha_{GS}^{A}\) values} \\ \cline{3-7} & & \(\lambda=\infty\) & \(\lambda=936\) nm & \(\lambda=1064\) nm & \(\lambda=936\) nm & \(\lambda=1064\) nm \\ \hline Main & & & & & & \\ \(6S_{1/2}-6P_{1/2}\) & \(4.5067(40)^{a}\) & \(132.93\) & \(1536.35\) & \(453.54\) & \(-2936.77\) & \(-762.65\) \\ \(6S_{1/2}-6P_{3/2}\) & \(6.3403(64)\)[46] & \(250.67\) & \(1467.97\) & \(699.66\) & \(1336.78\) & \(560.48\) \\ \(6S_{1/2}-7P_{1/2}\) & \(0.27810(45)\)[47] & \(0.26\) & \(0.34\) & \(0.32\) & \(-0.34\) & \(-0.28\) \\ \(6S_{1/2}-7P_{3/2}\) & \(0.57417(57)\)[48] & \(1.10\) & \(1.44\) & \(1.35\) & \(0.70\) & \(0.58\) \\ \(6S_{1/2}-8P_{1/2}\) & \(0.0824(10)^{a}\) & \(0.02\) & \(0.02\) & \(0.02\) & \(-0.02\) & \(-0.02\) \\ \(6S_{1/2}-8P_{3/2}\) & \(0.2294(15)^{a}\) & \(0.15\) & \(0.18\) & \(0.17\) & \(0.08\) & \(0.06\) \\ \(6S_{1/2}-9P_{1/2}\) & \(0.0424(15)^{a}\) & \(0.01\) & \(0.01\) & \(0.01\) & \(-0.01\) & \(\sim 0.0\) \\ \(6S_{1/2}-9P_{3/2}\) & \(0.1268(11)^{a}\) & \(0.04\) & \(0.05\) & \(0.05\) & \(0.02\) & \(0.02\) \\ \hline Total & & \(385.2(6)\) & \(3006.4(40)\) & \(1155.1(16)\) & \(-1599.5(59)\) & \(-201.8(18)\) \\ Tail & & & \(0.20\) & \(0.14\) & \(0.14\) & \\ \hline \hline \end{tabular}
\end{table}
Table 3: Contributions of individual transitions to the second-order static and dynamic E1 polarizabilities (in a.u.) of the ground state of the Cs atom.
Since the experimental value of \(\alpha_{J}^{S}(0)\) for the ground state of \({}^{133}\)Cs is known very precisely, a comparison of our calculation with the experimental result will help to validate our calculations of the dynamic \(\alpha_{J}^{S}(\omega)\) and \(\alpha_{J}^{A}(\omega)\) values. This test is also useful for determining the hyperfine-induced third-order polarizabilities. In Figs. 2, 3 and 4, we show the Goldstone diagram representations of all possible contributions to the DHF values of \(\alpha_{F}^{S/A/T(2,1)}\) for the top, center, and normalization contributions respectively. Though these contributions are much smaller than the second-order contributions to \(\alpha_{F,M_{F}}\), their accurate evaluation is more challenging. For an easy understanding of the various contributions to these quantities, we denote the contributions from Figs. 2(i) and (ii) together as core, (iii) as core-core, (iv) as core-valence, (v) as valence-core, and (vi) as valence contributions. An analogous division is followed for the diagrams shown in Fig. 3, which closely parallel those in Fig. 2. In Fig. 4, diagram (i) is denoted as the core, diagram (ii) as the valence-core, and diagram (iii) as the valence contribution, as in the case of the second-order E1 polarizabilities.
We adopt procedures similar to those used for the second-order E1 polarizabilities to estimate the valence contributions to \(T^{\mathcal{K}}\), \(C^{\mathcal{K}}\), and \(R^{\mathcal{K}}\). As can be seen in Fig. 2, the estimation of the valence contribution to \(T^{\mathcal{K}}\) requires a large number of matrix elements involving the \(S_{1/2}\), \(P_{1/2;3/2}\), and \(D_{3/2}\) states. Unlike for the second-order polarizabilities, knowing the correct signs of the E1 and \(T_{J}^{(1)}\) matrix elements is essential for the evaluation of \(T^{\mathcal{K}}\). The evaluation of the valence contribution to \(C^{\mathcal{K}}\) requires E1 matrix elements between the ground state and the \(P_{1/2;3/2}\) states and \(T_{J}^{(1)}\) matrix elements between the \(P_{1/2;3/2}\) states, as per the parity and angular momentum selection rules. Since the expressions for \(R^{\mathcal{K}}\) and the second-order polarizability have similar forms, the evaluation of its valence contribution requires the same E1 matrix elements as in the case of the second-order E1 polarizabilities, along with the expectation value of \(T_{J}^{(1)}\) in the ground state.
It is important to treat the core, core-core, core-valence, and valence-core contributions to \(T^{\mathcal{K}}\) and \(C^{\mathcal{K}}\) judiciously in order to assess the accuracy of the third-order E1 polarizability calculations. The core and valence-core contributions to \(R^{\mathcal{K}}\) are determined by adopting the same approaches as mentioned earlier in the case of the second-order E1 polarizabilities. Unlike for \(R^{\mathcal{K}}\), the core, core-core, core-valence, and valence-core contributions to \(T^{\mathcal{K}}\) and \(C^{\mathcal{K}}\) have to be estimated very carefully. As can be seen from Figs. 2 and 3, the core contributions to these quantities require matrix elements involving the core-core, core-virtual, and virtual-virtual orbitals. It is evident that evaluations of the core-valence and valence-core contributions require a similar set of matrix elements. However, different sets of core and virtual orbitals are involved in the determination of the core and valence contributions to \(T^{\mathcal{K}}\) and \(C^{\mathcal{K}}\), owing to the different angular momentum selection rules in the two expressions. Matrix elements between the bound states are taken from the RCC theory or from experiments, as appropriate depending on their accuracy.
\begin{table}
\begin{tabular}{c c c c c c c c} \hline \hline & & \multicolumn{3}{c}{\(F=3\)} & \multicolumn{3}{c}{\(F=4\)} \\ \cline{3-8} Polarizability & Contribution & \(\lambda=\infty\) & \(\lambda=936\) nm & \(\lambda=1064\) nm & \(\lambda=\infty\) & \(\lambda=936\) nm & \(\lambda=1064\) nm \\ \hline \(\alpha_{F}^{S(2,1)}\) & Valence & \(-2.5584\) & \(-201.0945\) & \(-25.3858\) & \(1.9904\) & \(156.4064\) & \(19.7445\) \\ & Valence-Core & \(-0.0016\) & \(-0.0032\) & \(0.0601\) & \(0.0013\) & \(0.0025\) & \(-0.0467\) \\ & Core-Valence & \(0.0010\) & \(-0.0040\) & \(0.0402\) & \(-0.0008\) & \(0.0031\) & \(-0.0313\) \\ & Core-Core & \(-0.0009\) & \(-0.0009\) & \(-0.0009\) & \(0.0007\) & \(\sim 0.0\) & \(0.0007\) \\ & Core & \(0.0010\) & \(0.0010\) & \(0.0010\) & \(-0.0015\) & \(-0.0015\) & \(-0.0015\) \\ \hline \(\alpha_{F}^{A(2,1)}\) & Valence & \(0.0\) & \(-185.6502\) & \(-9.6217\) & \(0.0\) & \(-192.5270\) & \(-9.9781\) \\ & Valence-Core & \(0.0\) & \(0.0317\) & \(-0.0258\) & \(0.0\) & \(0.0329\) & \(-0.0268\) \\ & Core-Valence & \(0.0\) & \(0.0265\) & \(-0.0548\) & \(0.0\) & \(0.0275\) & \(-0.0569\) \\ & Core-Core & \(0.0\) & \(\sim 0.0\) & \(\sim 0.0\) & \(0.0\) & \(\sim 0.0\) & \(\sim 0.0\) \\ & Core & \(0.0\) & \(\sim 0.0\) & \(\sim 0.0\) & \(0.0\) & \(\sim 0.0\) & \(\sim 0.0\) \\ \hline \(\alpha_{F}^{T(2,1)}\) & Valence & \(0.0165\) & \(8.4872\) & \(0.5794\) & \(-0.0308\) & \(-15.8428\) & \(-1.0815\) \\ & Valence-Core & \(0.0010\) & \(-0.0024\) & \(-0.0355\) & \(-0.0017\) & \(0.0045\) & \(0.0664\) \\ & Core-Valence & \(0.0010\) & \(-0.0024\) & \(-0.0355\) & \(-0.0017\) & \(0.0045\) & \(0.0664\) \\ & Core-Core & \(\sim 0.0\) & \(\sim 0.0\) & \(\sim 0.0\) & \(\sim 0.0\) & \(\sim 0.0\) & \(\sim 0.0\) \\ & Core & \(\sim 0.0\) & \(\sim 0.0\) & \(\sim 0.0\) & \(\sim 0.0\) & \(\sim 0.0\) & \(\sim 0.0\) \\ \hline \hline \end{tabular}
\end{table}
Table 5: Breakdown of our calculated \(\alpha_{F}^{S(2,1)}\), \(\alpha_{F}^{A(2,1)}\) and \(\alpha_{F}^{T(2,1)}\) values for the \(F=3\) and \(F=4\) levels of \({}^{133}\)Cs in terms of the valence, valence-core, core-valence, core-core and core contributions. Results are given for both the static and dynamic E1 polarizabilities (in \(10^{-10}\) Hz/(V/m)\({}^{2}\)).
We also use the experimental energies in the denominators wherever possible; otherwise, the calculated energies are used. The E1 matrix elements between the core orbitals are taken from the DHF method, while those between the core and virtual orbitals are taken from RPA, as required.
## IV Results and discussion
In Tables 1 and 2, we present the \(\alpha_{J}^{S}\), \(\alpha_{J}^{A}\), \(\alpha_{F}^{S(2,1)}\), \(\alpha_{F}^{A(2,1)}\) and \(\alpha_{F}^{T(2,1)}\) values of the \(6S\) state of \({}^{133}\)Cs at different wavelengths. We have used \(g_{I}=0.737885714\) with \(I=7/2\) from Ref. [41] for carrying out these evaluations. To understand the importance of the correlation effects and the sensitivity of the results to the use of calculated versus experimental energies, we have given _ab initio_ results from the DHF, RCCSD, and RCCSDT methods in the tables. However, we give our final recommended values from the semi-empirical approach, utilizing experimental energies and E1 matrix elements as discussed in the previous section. These recommended results, shown in bold fonts in the above tables, are compared with the available experimental results and previous calculations from the literature. As can be seen from these tables, there are significant differences between the DHF values and the RCCSD results. This suggests that the electron correlations play significant roles in the accurate determination of both the second-order and third-order E1 polarizabilities. These differences are more prominent in the dynamic E1 polarizabilities. In fact, there are sign differences between the DHF and RCCSD values of the atomic polarizabilities, indicating that the correlation contributions are unusually large in these quantities. By analysing the DHF and RCCSD results carefully, we observe that the large differences in these results are mostly due to the energy denominators. This explains why the results improve significantly when experimental energies are used. Though the differences between the _ab initio_ results and the semi-empirical values reduce when correlation effects through triple excitations are included in the calculations, significant differences still remain between the RCCSDT and semi-empirical values for the dynamic polarizabilities. Since our objective is to offer precise values of the E1 polarizabilities of the hyperfine levels of the ground state in \({}^{133}\)Cs, the semi-empirical results are recommended for future applications. At this stage, we would like to clarify that only the valence contributions are improved through the semi-empirical approach, while the core, core-valence, and valence-core contributions are taken from our calculations. Thus, there is still scope to improve the accuracy of the calculated results by including higher-order correlation effects in the determination of the core, core-core, core-valence, and valence-core contributions.
Figure 5: Demonstration of contributions from two different combinations of intermediate states (\(J^{\prime}\) and \(J^{\prime\prime}\)) to the (a) top, (b) center and (c) normalization parts of the static \(\alpha_{F}^{S(2,1)}\) value of the \(F=3\) level of \({}^{133}\)Cs. States with a subscript \(-\) symbol in the figure represent the lower angular momentum state of a fine-structure partner; i.e. \(P_{-}\) means \(P_{1/2}\) and \(D_{-}\) denotes \(D_{3/2}\), while \(P\) and \(D\) stand for the \(P_{3/2}\) and \(D_{5/2}\) states respectively.
Figure 6: Contributions from different combinations of intermediate states (\(J^{\prime}\) and \(J^{\prime\prime}\)) to the (a) top and (b) center parts of the static \(\alpha_{F}^{T(2,1)}\) value of the \(F=3\) level of \({}^{133}\)Cs. The meanings of notations are same as in the previous figure.
Nonetheless, the uncertainties quoted for our semi-empirical values in Tables 1 and 2 include typical magnitudes of these neglected contributions.
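When comparing the atomic-unit entries of Table 1 with the \(10^{-10}\) Hz/(V/m)\(^{2}\) entries of Table 2, the standard polarizability unit conversion is useful. The sketch below uses CODATA constants, which are our inputs rather than quantities tabulated in this paper:

```python
H_PLANCK = 6.62607015e-34      # Planck constant in J s
ALPHA_AU = 1.64877727436e-41   # 1 a.u. of polarizability in C^2 m^2 J^-1

au_to_hz = ALPHA_AU / H_PLANCK   # ~2.48832e-8 Hz/(V/m)^2 per a.u.
print(401.0 * au_to_hz)          # static alpha_J^S of Table 1 -> ~9.98e-6 Hz/(V/m)^2
```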
Comparison of the static \(\alpha_{J}^{S}\) and \(\alpha_{F}^{T(2,1)}\) values with their experimental results shows that our recommended values agree very well with the measurements [29; 45]. Compared to the previous calculations of the static \(\alpha_{J}^{S}\) values reported in Refs. [42; 43; 44], our value is very close to the experimental result. This is owing to the fact that we have used many precisely estimated E1 matrix elements from the latest measurements [46; 47], as discussed later. From this, we expect that our other calculated values, including the dynamic polarizabilities at wavelengths 936 nm and 1064 nm, are equally accurate. We could not find experimental results for \(\alpha_{F}^{S(2,1)}\) and \(\alpha_{F}^{A(2,1)}\) for either the \(F=3\) or the \(F=4\) level to make a direct comparison with our estimated values. However, comparison with another calculation reported in Ref. [28] shows that the results for \(\alpha_{F}^{S(2,1)}\) agree reasonably well, but they differ significantly for \(\alpha_{F}^{T(2,1)}\). In Ref. [28], the authors employed the combined TDHF and BO (TDHF+BO) method, which accounts for core-polarization effects to all orders, while pair-correlation contributions are estimated using the Brueckner orbitals. The RCC method includes all the RPA effects and pair correlations to all orders implicitly. We also found another semi-empirical result for \(\alpha_{F}^{T(2,1)}\) of the \(F=4\) level [26], in which the calculation was performed using the statistical Thomas-Fermi potential approach and by scaling some of the matrix elements with the experimental data. It overestimates the \(\alpha_{F}^{T(2,1)}\) value compared to the experimental result and also differs from our calculation.
After discussing the final results, we now analyze the individual contributions in order to understand their roles in the accurate determination of both the second-order and third-order E1 polarizabilities. We have given the intermediate contributions to \(\alpha_{J}^{S}(\omega)\) and \(\alpha_{J}^{A}(\omega)\) at different \(\omega\) (rather, \(\lambda\)) values in Table 3. We have listed the E1 matrix elements of many important transitions that give dominant contributions to the valence part, referred to as "main" contributions. As mentioned before, many of these E1 matrix elements are borrowed from the precise measurements of lifetimes or E1 polarizabilities in different atomic states reported in Refs. [46; 47]; otherwise, they are taken from the RCCSDT method. The "tail" contributions to the valence part from the high-lying virtual states are estimated by using the E1 matrix elements from the DHF method and energies from the NIST database. The core and core-valence contributions are estimated using RPA. Table 3 shows that a precise estimate of the second-order E1 polarizabilities depends mainly on accurate E1 matrix elements of the \(6s\ ^{2}S_{1/2}\to 6p\ ^{2}P_{1/2;3/2}\) transitions and on the core contribution. However, contributions from the E1 matrix elements of the \(6s\ ^{2}S_{1/2}\to 7p\ ^{2}P_{1/2;3/2}\) transitions are also important for improving the precision of the results.
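The dominant "main" entries of Table 3 can be reproduced directly from Eq. (20). The short sketch below uses the two leading E1 matrix elements from Table 3 together with rounded NIST excitation energies for the \(6P\) levels; the energy values and unit constants are our own inputs, not quantities tabulated in this paper:

```python
HARTREE_CM = 219474.63   # 1 hartree in cm^-1
HARTREE_NM = 45.5634     # wavelength (nm) of a photon carrying 1 hartree

# |<6S_1/2||D||J'>| in a.u. and excitation energies in cm^-1 (rounded NIST values)
lines = {
    "6P1/2": (4.5067, 11178.27),
    "6P3/2": (6.3403, 11732.31),
}

def alpha_scalar(omega_au):
    """Valence part of alpha_J^S for the 6S_1/2 state, Eq. (20) with J = 1/2."""
    total = 0.0
    for d, e_cm in lines.values():
        de = e_cm / HARTREE_CM                  # excitation energy in a.u.
        total += d**2 / 6.0 * (1.0 / (de - omega_au) + 1.0 / (de + omega_au))
    return total

print(alpha_scalar(0.0))                  # ~383.6 a.u. (132.93 + 250.67 in Table 3)
print(alpha_scalar(HARTREE_NM / 1064.0))  # ~1153 a.u.  (453.54 + 699.66 in Table 3)
```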
We then discuss the \(\alpha_{F}^{S(2,1)}\), \(\alpha_{F}^{A(2,1)}\) and \(\alpha_{F}^{T(2,1)}\) contributions to both the \(F=3\) and \(F=4\) hyperfine levels at different wavelengths. As mentioned in the previous section, these calculations require a large set of E1 and \(T_{J}^{(1)}\) matrix elements. Some of the dominantly contributing E1 matrix elements used in these calculations are already given in Table 3. In Table 4, we list many \(T_{J}^{(1)}\) matrix elements that are important for the evaluation of \(\alpha_{F}^{S(2,1)}\), \(\alpha_{F}^{A(2,1)}\) and \(\alpha_{F}^{T(2,1)}\). Most of these results are obtained using the RCCSDT method, except in a few cases for which we use the precise values from the experiments
[48; 49; 50; 51; 52; 53]. Some of the off-diagonal matrix elements from this list are inferred from the experimental M1 hyperfine structure constants by using the relation
\[\langle J_{f}||T_{J}^{(1)}||J_{i}\rangle\simeq\sqrt{\langle J_{f}||T_{J}^{(1)}||J _{f}\rangle\langle J_{i}||T_{J}^{(1)}||J_{i}\rangle}. \tag{39}\]
We have also used the experimental energies [36] wherever possible in order to reduce uncertainties in the calculations.
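Equation (39) can be sanity-checked against the experimental entries of Table 4; for instance, the geometric mean of the measured \(6P_{1/2}\) and \(7P_{1/2}\) diagonal elements reproduces the quoted \(6P_{1/2}\)-\(7P_{1/2}\) off-diagonal value. The check itself is our own addition:

```python
from math import sqrt, isclose

t_6p12 = 7.364e-8   # <6P_1/2||T_J^(1)||6P_1/2> (a.u.), Table 4, Ref. [51]
t_7p12 = 2.381e-8   # <7P_1/2||T_J^(1)||7P_1/2> (a.u.), Table 4, Ref. [52]

t_off = sqrt(t_6p12 * t_7p12)   # Eq. (39)
print(t_off)                    # ~4.187e-8 a.u., matching the Table 4 entry
assert isclose(t_off, 4.187e-8, rel_tol=1e-3)
```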
Following the discussion of the previous section, these quantities are estimated by dividing their contributions into \(T^{\mathcal{K}}\), \(C^{\mathcal{K}}\), and \(R^{\mathcal{K}}\). Further, each of these has core, core-core, core-valence, valence-core, and valence contributions. In Table 5, we have given the individual contributions from the core, core-core, core-valence, valence-core, and valence parts to the \(\alpha_{F}^{S(2,1)}\), \(\alpha_{F}^{A(2,1)}\) and \(\alpha_{F}^{T(2,1)}\) values by adding them from \(T^{\mathcal{K}}\), \(C^{\mathcal{K}}\) and \(R^{\mathcal{K}}\) separately. It is evident from the above table that the valence contributions are dominant and account for almost all of the final values; in \(\alpha_{F}^{S(2,1)}\) and \(\alpha_{F}^{A(2,1)}\), the contributions from the core, core-core, core-valence and valence-core parts are negligibly small. One should also note that the contributions from the valence-core and core-valence correlations to the tensor polarizabilities are non-negligible. Since an experimental result for the static \(\alpha_{F}^{T(2,1)}\) value of the \(F=4\) level in \({}^{133}\)Cs is available, we analyze it in terms of the different correlation contributions. It is evident from Table 5 that the valence contribution to this quantity from our calculation is \(-3.08\times 10^{-8}\) Hz/(V/cm)\({}^{2}\), whereas the central value of the experimental result is \(-3.34\times 10^{-8}\) Hz/(V/cm)\({}^{2}\) [29]. Thus, there is about an 8% difference between the two values after neglecting their uncertainties. Reducing the uncertainty due to systematic effects in the measurement of \(\alpha_{F}^{T(2,1)}\) would be extremely difficult, so it is important to figure out the roles of the other physical contributions to the theoretical result in order to help future experiments measure this quantity more precisely. Our analysis shows that the core and core-core contributions to the static \(\alpha_{F}^{T(2,1)}\) value of the \(F=4\) level are negligibly small, while the valence-core and core-valence contributions are quite significant. As can be seen from the table, the difference between the theoretical and experimental values reduces drastically, to about 2%, after taking these contributions into account. Interestingly, these valence-core and core-valence contributions to the dynamic \(\alpha_{F}^{T(2,1)}\) values at \(\lambda=936\) nm and \(\lambda=1064\) nm are found to be extremely small compared to their valence contributions.
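The percentage comparison quoted above can be verified with a few lines of arithmetic; the unit conversion between Table 5 (\(10^{-10}\) Hz/(V/m)\(^{2}\)) and the text (Hz/(V/cm)\(^{2}\)) is a factor of \(10^{4}\), since 1 V/cm \(=\) 100 V/m enters squared. The check below is our own:

```python
valence = -0.0308e-10        # Hz/(V/m)^2, valence part (Table 5, F=4, static)
cross = 2 * (-0.0017e-10)    # valence-core plus core-valence parts
expt = -3.34e-8              # Hz/(V/cm)^2, central experimental value [29]

to_per_v_cm = 1e4            # (100 V/m per V/cm) squared
valence_cm = valence * to_per_v_cm
total_cm = (valence + cross) * to_per_v_cm

print(valence_cm, abs((valence_cm - expt) / expt))  # -3.08e-8, ~8% deviation
print(total_cm, abs((total_cm - expt) / expt))      # -3.42e-8, ~2% deviation
```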
Unlike for the second-order E1 polarizabilities, it is not possible to demonstrate the contributions from the intermediate states easily, as the formulas possess two summations (see Eqs. (24) and (25)). However, we adopted a different approach to show the importance of the contributions from various intermediate states. In Figs. 5 and 6, we show three-dimensional plots depicting the contributions from two different sets of intermediate states to the valence parts of \(T^{\mathcal{K}}\), \(C^{\mathcal{K}}\), and \(R^{\mathcal{K}}\) for the static \(\alpha_{F}^{S(2,1)}\) and \(\alpha_{F}^{T(2,1)}\) values, respectively. They are shown only for the \(F=3\) level as a representative case. As can be seen from these figures, matrix elements of a few select transitions, involving combinations of a few select intermediate states, contribute predominantly to the third-order E1 polarizabilities. Gaining this knowledge is quite important in order to further improve the precision of these quantities. It is evident from Fig. 5 that the \(6P_{1/2,3/2}\) and \(7S_{1/2}\) intermediate states make the largest contributions to the top, center and normalization parts of \(\alpha_{F}^{S(2,1)}\). However, a significant portion of the contribution to the top and center parts of \(\alpha_{F}^{T(2,1)}\) comes from the \(6P_{1/2,3/2}\) and \(5D_{3/2}\) states, as seen in Fig. 6. After understanding the roles of the different intermediate states in the determination of the third-order E1 polarizabilities, we present in Table 6 the main contributions to both the static and dynamic \(T^{\mathcal{K}}\), \(C^{\mathcal{K}}\), and \(R^{\mathcal{K}}\) values of \(\alpha_{F}^{S(2,1)}\), \(\alpha_{F}^{A(2,1)}\) and \(\alpha_{F}^{T(2,1)}\), obtained by summing the total contributions from all possible intermediate states. As can be seen from the table, the \(R^{\mathcal{K}}\) component exhibits the dominant contribution to \(\alpha_{F}^{S(2,1)}\), followed by \(T^{\mathcal{K}}\) and then the \(C^{\mathcal{K}}\) component. For \(\alpha_{F}^{A(2,1)}\), the \(R^{\mathcal{K}}\) contribution also dominates, followed by the \(C^{\mathcal{K}}\) part. In the case of \(\alpha_{F}^{T(2,1)}\), the leading contribution comes from the \(C^{\mathcal{K}}\) part, while the \(R^{\mathcal{K}}\) component is zero.
In Table 7, we compare our calculated Stark shift coefficient, \(k_{s}=-\frac{1}{2}\big{(}\alpha_{F=4}^{S(2,1)}-\alpha_{F=3}^{S(2,1)}\big{)}\), with previously reported values. As can be seen from the table, our value \(-2.274(10)\times 10^{-10}\) Hz/(V/m)\({}^{2}\) agrees quite well with the earlier reported experimental result \(-2.271(4)\times 10^{-10}\) Hz/(V/m)\({}^{2}\)[24], while it differs substantially from another measurement reported later [25]. We also find that our result is as precise as the calculated value reported in Ref. [23], compared to other theoretical works [19; 20; 21; 28]. This may be due to our semi-empirical treatment of the various contributions in the estimation of the \(\alpha_{F=3}^{S(2,1)}\) and \(\alpha_{F=4}^{S(2,1)}\) values. Also, our DHF value \(-2.792\times 10^{-10}\) Hz/(V/m)\({}^{2}\) of \(k_{s}\) agrees with the DHF value \(-2.799\times 10^{-10}\) Hz/(V/m)\({}^{2}\) of Ref. [23]. The authors of Ref. [23] also argued that important contributions to the \(k_{s}\) value arise from the continuum (tail).
\begin{table}
\begin{tabular}{l c} \hline \hline Reference & \(k_{s}\) value \\ \hline This work & \(-2.274(10)\) \\ Theory [19] & \(-1.97(9)\) \\ Theory [20] & \(-2.06(1)\) \\ Theory [21] & \(-2.28\) \\ Theory [23] & \(-2.271(8)\) \\ Theory [28] & \(-2.26(2)\) \\ Experiment [24] & \(-2.271(4)\) \\ Experiment [25] & \(-2.05(4)\) \\ \hline \hline \end{tabular}
\end{table}
Table 7: Summary of the \(k_{s}\) value from different theoretical and experimental works in units of \(10^{-10}\) Hz/(V/m)\({}^{2}\).
In this work, we independently verify this fact and affirm that, without the tail contribution, the \(k_{s}\) value comes out to be \(-2.085\times 10^{-10}\) Hz/(V/m)\({}^{2}\). These tail contributions to the hyperfine-interaction-induced E1 polarizabilities can be inferred explicitly from our calculations by analyzing the various contributions listed in Tables 5 and 6. It can be seen from these tables that the tail contribution to \(k_{s}\) amounts to about 8% of the total contribution, and the largest uncertainty in our final \(k_{s}\) value arises mainly from this part.
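The quoted tail fraction follows directly from the two \(k_{s}\) values above; a short illustrative check (ours, not part of the original analysis):

```python
# Illustrative check that the tail part of k_s is roughly 8% of the
# total, using the values quoted above in units of 1e-10 Hz/(V/m)^2.

k_s_total = -2.274    # final value, including the tail contribution
k_s_no_tail = -2.085  # value without the tail contribution

tail = k_s_total - k_s_no_tail
print(f"tail fraction: {100 * abs(tail / k_s_total):.1f}%")  # -> 8.3%, i.e. about 8%
```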
## V Summary
We have conducted comprehensive analyses of the second-order and magnetic dipole hyperfine interaction induced third-order electric dipole polarizabilities of the hyperfine levels of the ground state of the \({}^{133}\)Cs isotope. Results are presented for a DC electric field and for AC electric fields at two different wavelengths. One of them corresponds to the magic wavelength of the cooling line of the \({}^{133}\)Cs atom, although the power of the lasers available at this wavelength is usually very low. High-power lasers are available at the other chosen wavelength and are often used in the laboratory to carry out high-precision measurements. First, we presented the second-order electric dipole polarizabilities and compared them with the precisely reported experimental value and other theoretical results. After validating our calculations against these results, we proceeded with the determination of the magnetic dipole hyperfine interaction induced third-order electric dipole polarizabilities. In order to understand these results thoroughly, we have given a breakdown of the results in terms of contributions from intermediate states involving both the core and valence orbitals. Our static values for both the second-order and third-order electric dipole polarizabilities match the available experimental results quite nicely and clarify the roles of the various contributions in the accurate evaluation of these quantities. The reported static and dynamic electric dipole polarizability results for both hyperfine levels of the ground state of \({}^{133}\)Cs can be immensely useful to experimentalists for estimating Stark effects precisely when carrying out high-precision measurements in the laboratory.
## Acknowledgment
The computations reported in the present work were carried out using the ParamVikram-1000 HPC cluster of the Physical Research Laboratory (PRL), Ahmedabad, Gujarat, India.
|
2301.05049 | On Voronoi visibility maps of 1.5D terrains with multiple viewpoints | Given an $n$-vertex 1.5D terrain $\T$ and a set $\A$ of $m<n$ viewpoints, the
Voronoi visibility map $\vorvis(\T,\A)$ is a partitioning of $\T$ into regions
such that each region is assigned to the closest (in Euclidean distance)
visible viewpoint. The colored visibility map $\colvis(\T,\A)$ is a
partitioning of $\T$ into regions that have the same set of visible viewpoints.
In this paper, we propose an algorithm to compute $\vorvis(\T,\A)$ that runs in
$O(n+(m^2+k_c)\log n)$ time, where $k_c$ and $k_v$ denote the total complexity
of $\colvis(\T,\A)$ and $\vorvis(\T,\A)$, respectively. This improves upon a
previous algorithm for this problem. We also generalize our algorithm to higher
order Voronoi visibility maps, and to Voronoi visibility maps with respect to
other distances. Finally, we prove bounds relating $k_v$ to $k_c$, and we show
an application of our algorithm to a problem on limited range of sight. | Vahideh Keikha, Maria Saumell | 2023-01-12T14:29:34Z | http://arxiv.org/abs/2301.05049v1 | # On Voronoi visibility maps of 1.5D terrains with multiple viewpoints
###### Abstract
Given an \(n\)-vertex 1.5D terrain \(\mathcal{T}\) and a set \(\mathcal{P}\) of \(m<n\) viewpoints, the Voronoi visibility map \(\mathrm{VorVis}(\mathcal{T},\mathcal{P})\) is a partitioning of \(\mathcal{T}\) into regions such that each region is assigned to the closest (in Euclidean distance) visible viewpoint. The colored visibility map \(\mathrm{ColVis}(\mathcal{T},\mathcal{P})\) is a partitioning of \(\mathcal{T}\) into regions that have the same set of visible viewpoints. In this paper, we propose an algorithm to compute \(\mathrm{VorVis}(\mathcal{T},\mathcal{P})\) that runs in \(O(n+(m^{2}+k_{c})\log n)\) time, where \(k_{c}\) and \(k_{v}\) denote the total complexity of \(\mathrm{ColVis}(\mathcal{T},\mathcal{P})\) and \(\mathrm{VorVis}(\mathcal{T},\mathcal{P})\), respectively. This improves upon a previous algorithm for this problem. We also generalize our algorithm to higher order Voronoi visibility maps, and to Voronoi visibility maps with respect to other distances. Finally, we prove bounds relating \(k_{v}\) to \(k_{c}\), and we show an application of our algorithm to a problem on limited range of sight.
keywords: Visibility, 1.5D terrains, Voronoi diagrams, multiple viewpoints.
## 1 Introduction
A 1.5D terrain \(\mathcal{T}\) is an \(x\)-monotone polygonal chain of \(n\) vertices in \(\mathbb{R}^{2}\). Two points on \(\mathcal{T}\) are _visible_ if the segment connecting them does not contain any point strictly below \(\mathcal{T}\).
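For a piecewise-linear terrain, this visibility predicate only needs to be checked at the terrain vertices strictly between the two points, since the vertical gap between the segment and the terrain is piecewise linear and attains its minimum at a vertex. A small illustrative Python sketch of this test follows (our names and input format, not code from the paper):

```python
from bisect import bisect_left, bisect_right

def visible(terrain, a, b):
    """Return True if points a and b (both on the terrain) see each other.

    terrain: list of (x, y) vertices sorted by x; a, b: (x, y) points on it.
    Two points are visible iff segment ab never goes strictly below the
    terrain, which reduces to checking the vertices between them.
    """
    (ax, ay), (bx, by) = sorted([a, b])
    xs = [x for x, _ in terrain]
    lo = bisect_right(xs, ax)          # first vertex strictly right of a
    hi = bisect_left(xs, bx)           # first vertex at or right of b
    for x, y in terrain[lo:hi]:
        # Height of segment ab at abscissa x (here ax < x < bx).
        seg_y = ay + (by - ay) * (x - ax) / (bx - ax)
        if seg_y < y:                  # segment dips strictly below the terrain
            return False
    return True
```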
Visibility problems in terrains are fundamental in geographical information science and have many applications, such as placing fireguard or telecommunication towers [4], identifying areas that are not visible from sensitive sites [15], or solving problems related to sensor networks [17]. Although 2.5D terrains are more interesting for modelling and forecasting, 1.5D terrains are easier to visualize and to analyze. They give insights into the difficulties of 2.5D terrains in terrain analysis, and their proper understanding is seen as an essential step towards the ultimate goal of settling the 2.5D case. For this reason, visibility problems in 1.5D terrains have been intensively studied by the computational geometry community during the last 15 years.
In this paper, we focus on the variant where a set \(\mathcal{P}\) of \(m<n\) viewpoints are located on vertices of \(\mathcal{T}\) (we refer to the end of this section for a discussion on the assumption \(m<n\)). For each viewpoint \(p\in\mathcal{P}\), the _viewshed_ of \(p\) is the set of points of \(\mathcal{T}\) that are visible from \(p\) (see Fig. 1 for an example). Our goal is to efficiently extract information about the visibility of \(\mathcal{T}\) with respect to \(\mathcal{P}\). We continue the work initiated in [12], where the following structures are introduced.
The _visibility map_\(\mathrm{Vis}(\mathcal{T},\mathcal{P})\) is a partitioning of \(\mathcal{T}\) into a _visible_ region (containing all portions of \(\mathcal{T}\) that are visible by at least one element in \(\mathcal{P}\)) and an _invisible_ region (containing the portions that are not visible by any element in \(\mathcal{P}\)). See Fig. 2a for an example. The visible region of the visibility map is equal to the union of the viewsheds of all viewpoints in \(\mathcal{P}\).
The _colored visibility map_\(\mathrm{ColVis}(\mathcal{T},\mathcal{P})\) is a partitioning of \(\mathcal{T}\) into regions that have the same set of visible viewpoints. See Fig. 2b for an example.
Finally, the _Voronoi visibility map_\(\mathrm{VorVis}(\mathcal{T},\mathcal{P})\) is a partitioning of \(\mathcal{T}\) into regions that have the same closest visible viewpoint, where the distance used is the Euclidean distance (not the distance along the terrain). See Fig. 2c for an example.
Algorithms to compute these structures for both 1.5D and 2.5D terrains are proposed in [12]. The algorithm to obtain \(\mathrm{VorVis}(\mathcal{T},\mathcal{P})\) of a 1.5D terrain runs in \(O(n+(m^{2}+k_{c})\log n+k_{v}(m+\log n\log m))\) time, where \(k_{c}\) and \(k_{v}\) denote the total complexity of \(\mathrm{ColVis}(\mathcal{T},\mathcal{P})\) and \(\mathrm{VorVis}(\mathcal{T},\mathcal{P})\), respectively. Both \(k_{c}\) and \(k_{v}\) have size \(O(mn)\), and this bound is asymptotically tight [12]. The algorithm first computes \(\mathrm{ColVis}(\mathcal{T},\mathcal{P})\), and then it spends \(\Theta(m)\) time to find each single region of \(\mathrm{VorVis}(\mathcal{T},\mathcal{P})\). In this paper, we show that \(\mathrm{VorVis}(\mathcal{T},\mathcal{P})\) can be extracted from \(\mathrm{ColVis}(\mathcal{T},\mathcal{P})\) in a 1.5D terrain much more efficiently, resulting in an \(O(n+(m^{2}+k_{c})\log n)\)-time algorithm. We use an observation related to intersections of the terrain with bisectors of pairs of viewpoints that also allows us to prove a relationship between \(k_{c}\) and \(k_{v}\).
Let us point out that, apart from the mentioned output-sensitive algorithm for \(\mathrm{VorVis}(\mathcal{T},\mathcal{P})\) of a 1.5D terrain, the authors of [12] also propose a divide-and-conquer algorithm running in \(O(mn\log m)\) time, which is worst-case nearly optimal (recall that the maximum complexity of \(\mathrm{VorVis}(\mathcal{T},\mathcal{P})\) is \(\Theta(mn)\)). Therefore, our new algorithm does not represent an improvement in the worst-case instances, but in instances where the original output-sensitive algorithm is faster than the divide-and-conquer one, and \(k_{v}m\) is the dominant term in the running time. An example of such an instance is \(m=\Theta(\sqrt{n})\), \(k_{c}=\Theta(n^{3/4})\) and \(k_{v}=\Theta(n^{3/4})\).
In this paper, we also provide generalizations of our algorithm to compute Voronoi visibility maps of higher order (that is, containing the information about the \(k\) closest visible viewpoints, for some \(k>1\)), and Voronoi visibility maps with respect to two other distances: the Euclidean distance along the terrain and the link distance. All of these generalizations have the same running time as the original algorithm.
Finally, the new algorithm for \(\mathrm{VorVis}(\mathcal{T},\mathcal{P})\) also allows us to solve efficiently a problem related to limited range of sight. These problems are motivated by the fact that, even though many visibility problems assume an infinite range of visibility, the intensity of light, signals and other phenomena modelled with viewpoints decreases over distance in realistic environments. In this spirit, the problem of illuminating a polygonal area with the minimum total energy was introduced by O'Rourke [16], and studied in [7; 8]. We consider a related problem on terrains, namely, computing the minimum value \(r^{*}\) such that, if the viewpoints can only see objects within distance \(r^{*}\), the obtained visibility map is the same as \(\mathrm{Vis}(\mathcal{T},\mathcal{P})\). We show that this problem can also be solved in \(O(n+(m^{2}+k_{c})\log n)\) time.
**Related Work.** When there is only one viewpoint, computing the visibility map of a 1.5D terrain can be done in \(O(n)\) time by converting the terrain into a simple polygon and applying the algorithm from [13]. One of the first results on the variant with more than one viewpoint is an \(O((n+m)\log m)\) time algorithm to detect if there are any visible pairs of viewpoints above a 1.5D terrain [2]. Later, a systematic study of Vis(\(\mathcal{T},\mathcal{P}\)), VorVis(\(\mathcal{T},\mathcal{P}\)) and ColVis(\(\mathcal{T},\mathcal{P}\)) was carried out in [12] for both 1.5D and 2.5D terrains. A problem that is very related to the construction of Vis(\(\mathcal{T},\mathcal{P}\)) is that of computing the total visibility index of the terrain, that is, the number of viewpoints that are visible from each of the viewpoints. This problem can be solved in \(O(n\log^{2}n)\) time [1].

Figure 1: The viewshed of \(p\).
The situation where the locations of the viewpoints are unknown has been thoroughly studied. It is well-known that computing the minimum number of viewpoints to keep a 1.5D terrain illuminated is NP-complete [9; 14], but the problem admits a PTAS [9; 10; 11]. If the viewpoints are restricted to lie on a line, the same problem can be solved in linear time [6].
**Assumptions.** As in [12], we assume that no three vertices of \(\mathcal{T}\) are aligned. For the sake of simplicity, we also assume that no edge of \(\mathcal{T}\) is contained in the bisector of two viewpoints in \(\mathcal{P}\), and that no point on \(\mathcal{T}\) is at the same distance from three or more viewpoints in \(\mathcal{P}\).
As mentioned earlier, we restrict to the case where the viewpoints lie on terrain vertices; the same assumption is made in [12], and it has the implication that \(m\leq n\). Notice that no generality is lost because, if viewpoints are located in the interior of terrain edges, we can simply add vertices to the terrain and apply our algorithms. Furthermore, placing a superlinear number of viewpoints on the terrain does not seem to make much sense: If more than two viewpoints lie on the same edge, it is easy to see that the union of the viewsheds of the leftmost and rightmost viewpoints contains the viewshed of any other viewpoint on the edge. Therefore, all the intermediate viewpoints are somewhat irrelevant for visibility purposes.
Finally let us mention that, in [12], \(k_{v}\) and \(k_{c}\) do not only include the number of points of \(\mathcal{T}\) that are on the boundary of two distinct regions of the respective diagrams, but also the total number of vertices of \(\mathcal{T}\), that is, \(n\). For the sake of consistency, we follow the same convention in this paper.
## 2 Complexity of the Voronoi visibility map
In [12], it is stated that the complexity of VorVis(\(\mathcal{T},\mathcal{P}\)) can be higher than, lower than, or equal to that of ColVis(\(\mathcal{T},\mathcal{P}\)). In this section, we refine this statement. Recall that in both cases the complexity is \(O(mn)\), and this bound is asymptotically tight [12].
Let us introduce some terminology. The _Voronoi viewshed_ \(\mathcal{W}_{\mathcal{T}}(p,\mathcal{P})\) of \(p\) is the set of points in the viewshed of \(p\) that are closer to \(p\) than to any other viewpoint that is visible from them.
Since we have assumed that no edge of \(\mathcal{T}\) is contained in the bisector of two viewpoints, the shared boundary between two consecutive regions of VorVis(\(\mathcal{T},\mathcal{P}\)) is always a single point of \(\mathcal{T}\). We call such points _event_ points of VorVis(\(\mathcal{T},\mathcal{P}\)). _Event_ points of ColVis(\(\mathcal{T},\mathcal{P}\)) are defined analogously, that is, as points on the boundary of two consecutive regions of the map.
We denote by \(b_{i,j}\) the perpendicular bisector of two viewpoints \(p_{i},p_{j}\). Additionally, we denote by \(q_{i,j}\) an event point of \(\mathrm{VorVis}(\mathcal{T},\mathcal{P})\) such that a point infinitesimally to the left and right of \(q_{i,j}\) belongs to \(\mathcal{W}_{\mathcal{T}}(p_{i},\mathcal{P})\) and \(\mathcal{W}_{\mathcal{T}}(p_{j},\mathcal{P})\), respectively (notice that an event \(q_{i,j}\) is different from an event \(q_{j,i}\)). There are three (not mutually exclusive) possibilities: (i) \(p_{i}\) becomes invisible at \(q_{i,j}\); (ii) \(p_{j}\) becomes visible at \(q_{i,j}\); (iii) \(p_{i}\) and \(p_{j}\) are visible at \(q_{i,j}\), and \(q_{i,j}\) is an intersection point between \(b_{i,j}\) and \(\mathcal{T}\).
In the following lemma, we prove the key observation of this paper: Even though a bisector \(b_{i,j}\) might intersect the terrain \(\Theta(n)\) times, only two such intersections are relevant and might produce events of type (iii).
**Lemma 1**.: _Let \(p_{i}\in\mathcal{P}\) be lower2 than \(p_{j}\in\mathcal{P}\). Let \(q\) be an intersection point between \(b_{i,j}\) and \(\mathcal{T}\) to the left (respectively, right) of \(p_{i}\). Then any point to the left (respectively, right) of \(q\) that is visible from \(p_{i}\) is closer to \(p_{j}\) than to \(p_{i}\). Hence, there is no event \(q_{i,j}\) or \(q_{j,i}\) of type (iii) that lies to the left (respectively, right) of \(q\)._
Footnote 2: We say that \(p\) is _lower_ (respectively, _higher_) than \(q\) when it has a smaller (respectively, greater) \(y\)-coordinate than that of \(q\).
Proof.: Since \(p_{i}\) is assumed to have a smaller \(y\)-coordinate than that of \(p_{j}\), the region of the plane closer to \(p_{i}\) than to \(p_{j}\) is the one below \(b_{i,j}\). But any point \(r\) that is on \(b_{i,j}\) or below it and to the left (respectively, right) of \(q\) is not visible from \(p_{i}\) because the line segment \(\overline{p_{i}r}\) contains a point (specifically, the point vertically aligned with \(q\)) that lies strictly below the terrain surface; see Fig. 3 for an illustration.
The second part of the statement follows because visibility from \(p_{i}\) is one of the conditions of events of type (iii).
To prove our bounds, we also use this well-known property of visibility in 1.5D terrains, known as _order claim_:
**Lemma 2** (Claim 2.1 in [3]).: _Let \(a,b,c\), and \(d\) be four points on \(\mathcal{T}\) such that \(x(a)<x(b)<x(c)<x(d)\). If a sees \(c\) and \(b\) sees \(d\), then \(a\) sees \(d\)._
We denote by \(\mathrm{VorVis}(\mathcal{T},\mathcal{P}_{\ell})\) and \(\mathrm{ColVis}(\mathcal{T},\mathcal{P}_{\ell})\) the Voronoi and colored visibility maps of \(\mathcal{T}\) assuming that viewpoints can only see themselves and to their left. Further, we denote by \(\mathcal{W}_{\mathcal{T}}(p,\mathcal{P}_{\ell})\) the Voronoi viewshed of \(p\) under the same assumption. \(\mathrm{VorVis}(\mathcal{T},\mathcal{P}_{r})\), \(\mathrm{ColVis}(\mathcal{T},\mathcal{P}_{r})\) and \(\mathcal{W}_{\mathcal{T}}(p,\mathcal{P}_{r})\) are defined analogously using visibility to the right. We can now prove the following:
**Theorem 1**.: _Given a terrain \(\mathcal{T}\) with \(n\) vertices and a set \(\mathcal{P}\) of \(m\) viewpoints placed on vertices of \(\mathcal{T}\), the following bound holds:_
\[k_{v}\leq\min\{k_{c}+m^{2},2k_{c}+8m-4\}.\]
Proof.: Since the vertices of \(\mathcal{T}\) are counted in both \(k_{v}\) and \(k_{c}\), we exclude them from our analysis.
We start by proving that \(k_{v}\leq k_{c}+m^{2}\). Notice that events of type (i) and (ii) are also events of \(\mathrm{ColVis}(\mathcal{T},\mathcal{P})\). Let us prove that there are at most \(m^{2}\) events of type (iii).
Let \(p_{i}\), \(p_{j}\) be a pair of viewpoints. If \(p_{i}\) and \(p_{j}\) are at the same height, \(b_{i,j}\) is vertical and only intersects \(\mathcal{T}\) once, so there is at most one event of \(\mathrm{VorVis}(\mathcal{T},\mathcal{P})\) on \(b_{i,j}\cap\mathcal{T}\). Otherwise, we assume without loss of generality that \(p_{i}\) is lower than \(p_{j}\). By Lemma 1 the only candidates for events \(q_{i,j}\) or \(q_{j,i}\) of type (iii) are the left-most intersection point of type \(b_{i,j}\cap\mathcal{T}\) among all such points to the right of \(p_{i}\) and the right-most one among all points to the left. Thus, every pair of viewpoints creates at most two events of type (iii).

Figure 4: (a) Illustration of the charging scheme of events of \(\mathrm{VorVis}(\mathcal{T},\mathcal{P}_{r})\) at the intersection of a bisector and \(\mathcal{T}\) (proof of Theorem 1): The event \(q\) is charged to \(p_{i}\) because no point \(t\) to the right of \(q\) belongs to \(\mathcal{W}_{\mathcal{T}}(p_{i},\mathcal{P}_{r})\). (b) An instance where \(k_{v}=k_{c}+2m-2\): \(\mathrm{ColVis}(\mathcal{T},\mathcal{P})\) consists of three portions (a portion visible by all viewpoints surrounded by two portions not visible by any viewpoint), while in \(\mathrm{VorVis}(\mathcal{T},\mathcal{P})\) (illustrated in the figure) the visible portion is subdivided into \(2m-1\) parts.
We next prove the second upper bound for \(k_{v}\). We denote by \(k_{v}^{\ell}\), \(k_{c}^{\ell}\), \(k_{v}^{r}\) and \(k_{c}^{r}\) the total complexity of all the regions of \(\mathrm{VorVis}(\mathcal{T},\mathcal{P}_{\ell})\), \(\mathrm{ColVis}(\mathcal{T},\mathcal{P}_{\ell})\), \(\mathrm{VorVis}(\mathcal{T},\mathcal{P}_{r})\) and \(\mathrm{ColVis}(\mathcal{T},\mathcal{P}_{r})\), respectively.
Each event of \(\mathrm{ColVis}(\mathcal{T},\mathcal{P})\) can be uniquely assigned to an event of either \(\mathrm{ColVis}(\mathcal{T},\mathcal{P}_{\ell})\) or \(\mathrm{ColVis}(\mathcal{T},\mathcal{P}_{r})\): If the event concerns viewpoint \(p_{i}\) (becoming visible or invisible) and it is to the left of \(p_{i}\), the same event appears in \(\mathrm{ColVis}(\mathcal{T},\mathcal{P}_{\ell})\) and it is assigned to it. If the event is to the right of \(p_{i}\), it is assigned to the same event in \(\mathrm{ColVis}(\mathcal{T},\mathcal{P}_{r})\). If the event is on \(p_{i}\), it is easy to see that \(p_{i}\) is either the left-most point of \(\mathcal{T}\), in which case we assign it to the same event in \(\mathrm{ColVis}(\mathcal{T},\mathcal{P}_{r})\), or the right-most point of \(\mathcal{T}\), in which case we assign it to the same event in \(\mathrm{ColVis}(\mathcal{T},\mathcal{P}_{\ell})\).
Each event of \(\mathrm{ColVis}(\mathcal{T},\mathcal{P}_{\ell})\) or \(\mathrm{ColVis}(\mathcal{T},\mathcal{P}_{r})\) that did not get any event of \(\mathrm{ColVis}(\mathcal{T},\mathcal{P})\) assigned to it lies at the same position as some viewpoint that is not the left-most or the right-most point of \(\mathcal{T}\). Indeed, if \(p_{i}\in\mathcal{P}\) is such a viewpoint, then, at the position where it lies, \(p_{i}\) becomes visible in \(\mathrm{ColVis}(\mathcal{T},\mathcal{P}_{r})\) and invisible in \(\mathrm{ColVis}(\mathcal{T},\mathcal{P}_{\ell})\)3, but there are no such events in \(\mathrm{ColVis}(\mathcal{T},\mathcal{P})\) (where there is a portion of \(\mathcal{T}\) visible from \(p_{i}\) that contains \(p_{i}\) in its interior rather than on its boundary). This proves that \(k_{c}^{\ell}+k_{c}^{r}\leq k_{c}+2m\).
Footnote 3: Strictly speaking, in \(\mathrm{ColVis}(\mathcal{T},\mathcal{P}_{\ell})\)\(p_{i}\) becomes invisible immediately to its right.
Next, we show a relationship between \(k_{v}^{r}\) and \(k_{c}^{r}\). Suppose that we traverse \(\mathrm{VorVis}(\mathcal{T},\mathcal{P}_{r})\) from left to right, and we stop at every event that is not an event of \(\mathrm{ColVis}(\mathcal{T},\mathcal{P}_{r})\), that is, the event is at the intersection of a bisector \(b_{i,j}\) and \(\mathcal{T}\). Since in \(\mathrm{VorVis}(\mathcal{T},\mathcal{P}_{r})\) viewpoints can only see themselves and to their right, \(p_{i}\) and \(p_{j}\) are to the left of the event \(q\). Without loss of generality, suppose that \(p_{j}\) is to the left of \(p_{i}\). As in the proof of Lemma 1, if \(p_{i}\) were higher than \(p_{j}\), no point on \(b_{i,j}\) to the right of \(p_{i}\) would be visible from \(p_{j}\), contradicting the existence of the event at the intersection of \(b_{i,j}\) and \(\mathcal{T}\). Further, \(p_{i}\) and \(p_{j}\) are not at the same height because both are to the left of \(q\). Hence, \(p_{i}\) is lower than \(p_{j}\). Let \(t\) be a point to the right of \(q\) that is visible from \(p_{i}\) (see Fig. 4(a)). By Lemma 2 with \(a=p_{j}\), \(b=p_{i}\), \(c=q\) and \(d=t\), \(t\) is also visible from \(p_{j}\). By Lemma 1, \(t\) is closer to \(p_{j}\) than to \(p_{i}\). Therefore, \(t\notin\mathcal{W}_{\mathcal{T}}(p_{i},\mathcal{P}_{r})\). This implies that there is no portion of \(\mathcal{W}_{\mathcal{T}}(p_{i},\mathcal{P}_{r})\) to the right of \(q\) and, in particular, no further events caused by the intersection of a bisector of \(p_{i}\) and another viewpoint. We charge the event \(q\) to \(p_{i}\), and obtain that there are at most \(m-1\) events of this type. Hence, \(k_{v}^{r}\leq k_{c}^{r}+m-1\).
Finally, we derive a bound for \(k_{v}\) based on \(k_{v}^{r}\) and \(k_{v}^{\ell}\).
Let us take a continuous portion \(\mathcal{T}^{\prime}\) of \(\mathcal{T}\) that belongs to the Voronoi viewshed of some viewpoint \(p_{i}\) in \(\mathrm{VorVis}(\mathcal{T},\mathcal{P}_{r})\), and to the Voronoi viewshed of some viewpoint \(p_{j}\) in \(\mathrm{VorVis}(\mathcal{T},\mathcal{P}_{\ell})\). Let \(\mathcal{T}^{\prime}\) be maximal with this property. Notice that \(p_{i}\) is to the left of \(\mathcal{T}^{\prime}\), while \(p_{j}\) is to its right. Furthermore, in \(\mathrm{VorVis}(\mathcal{T},\mathcal{P})\), every point of \(\mathcal{T}^{\prime}\) belongs to the Voronoi viewshed of \(p_{i}\) or \(p_{j}\). We next show that \(b_{i,j}\) intersects \(\mathcal{T}^{\prime}\) at most once. The claim is clear when \(y(p_{i})=y(p_{j})\). Otherwise, we assume without loss of generality that \(p_{i}\) is lower than \(p_{j}\). Let \(q\) be the left-most intersection point (if any) between \(b_{i,j}\) and \(\mathcal{T}^{\prime}\). We have that \(q\) is to the right of \(p_{i}\). Additionally, all points of \(\mathcal{T}^{\prime}\) to the right of \(q\) are visible from \(p_{i}\) because they belong to the Voronoi viewshed of \(p_{i}\) in \(\mathrm{VorVis}(\mathcal{T},\mathcal{P}_{r})\). By Lemma 1, all points of \(\mathcal{T}^{\prime}\) to the right of \(q\) are closer to \(p_{j}\) than to \(p_{i}\). In consequence, there is no intersection point between \(b_{i,j}\) and \(\mathcal{T}^{\prime}\) to the right of \(q\), and \(b_{i,j}\) intersects \(\mathcal{T}^{\prime}\) at most once. This implies that \(\mathcal{T}^{\prime}\) gets split into at most two portions of the final diagram.
The situation where in at least one of \(\mathrm{VorVis}(\mathcal{T},\mathcal{P}_{r})\) or \(\mathrm{VorVis}(\mathcal{T},\mathcal{P}_{\ell})\) a portion does not have any visible viewpoint is trivial.
Consequently, \(k_{v}\leq 2(k_{v}^{r}+k_{v}^{\ell})\).
Putting everything together,
\[k_{v} \leq 2(k_{v}^{r}+k_{v}^{\ell})\] \[\leq 2(k_{c}^{r}+k_{c}^{\ell}+2m-2)\] \[\leq 2(k_{c}+4m-2)=2k_{c}+8m-4.\qed\]
Regarding lower bounds, we show the following:
**Example 1**.: _There exists a terrain with \(n\) vertices and a set of \(m\) viewpoints placed on vertices of the terrain such that \(k_{v}=k_{c}+2m-2\). The construction is illustrated in Fig. 4(b)._
## 3 Computation of the Voronoi visibility map
The algorithm we propose is simple: We sweep the terrain from left to right, and maintain the set of visible viewpoints in a balanced binary search tree, where the key of every viewpoint is the Euclidean distance to the point of the terrain currently swept by the sweep line; the relevant viewpoint is always the closest visible one. The algorithm is based on the observation that maintaining the whole set of viewpoints sorted by distance to \(\mathcal{T}\) might be expensive (since a bisector of two viewpoints might intersect the terrain \(\Theta(n)\) times), while, by Lemma 1, maintaining the set of the _visible_ ones is not (since, out of the potential \(\Theta(n)\) intersections, both viewpoints are visible at no more than two of them). Thanks to this observation, new events of \(\mathrm{VorVis}(\mathcal{T},\mathcal{P})\) are found in \(O(\log m)\) time rather than \(O(m)\). We next present the details.
The algorithm sweeps the terrain from left to right and stops at points that are candidates for event points. The candidates for events of type (i) and (ii) are the events of \(\mathrm{ColVis}(\mathcal{T},\mathcal{P})\). We explain in Section 3.2 which are the candidates for events of type (iii).
### Events of \(\mathrm{ColVis}(\mathcal{T},\mathcal{P})\)
We compute \(\mathrm{ColVis}(\mathcal{T},\mathcal{P})\) using the version of the algorithm from [12] that returns a doubly-linked list with the vertices of \(\mathrm{ColVis}(\mathcal{T},\mathcal{P})\) sorted from left to right, together with the visibility information provided as follows: The visible viewpoints are specified for the first component of \(\mathrm{ColVis}(\mathcal{T},\mathcal{P})\) and, for the other components, the algorithm outputs the changes in the set of visible viewpoints with respect to the component immediately to the left.
### Candidates for events of type (iii)
We next describe the candidates for events of type (iii) associated with a pair of viewpoints \(p_{i},p_{j}\in\mathcal{P}\).
If \(p_{i}\) and \(p_{j}\) are at the same height, the only intersection point of \(b_{i,j}\) with \(\mathcal{T}\) lies between both viewpoints. If such point is visible from both \(p_{i}\) and \(p_{j}\), we add it as a candidate for event of type (iii).
Otherwise, we may assume, without loss of generality, that \(p_{i}\) is lower than \(p_{j}\). By Lemma 1 the only candidates for events of type (iii) involving \(p_{i}\) and \(p_{j}\) are the left-most intersection point of type \(b_{i,j}\cap\mathcal{T}\) among all such points to the right of \(p_{i}\) and the right-most one among all points to the left. For the sake of simplicity, we first assume that \(b_{i,j}\) is not tangent to \(\mathcal{T}\) at any of these intersection points. Then each of these intersection points is added to the list of candidates for events swept by the line if and only if it is visible from both \(p_{i}\) and \(p_{j}\).
Finally, let \(q\) be one of the two candidates for events of type (iii) involving \(p_{i}\) and \(p_{j}\). Suppose that \(q\) is to the right of \(p_{i}\) (the other case is symmetric). If \(b_{i,j}\) is tangent to \(\mathcal{T}\) at \(q\), points of \(\mathcal{T}\) infinitesimally to the left or right of \(q\) are closer to \(p_{i}\) than to \(p_{j}\) (while \(q\) is equidistant). Additionally, \(p_{i}\) becomes invisible right after \(q\). In consequence, it is not needed to add \(q\) to the list of candidates for events of type (iii): Right before \(q\), the algorithm knows that \(p_{i}\) is closer to the terrain than \(p_{j}\). At \(q\), the algorithm processes that \(p_{i}\) becomes invisible, and \(p_{j}\) (if it is visible) automatically gets higher priority than \(p_{i}\) in the list of candidates for the "owner" of the current Voronoi visibility region. The key argument (there are no more candidates for events of type (iii) to the right of \(q\)) also holds in this case.
### Data structures
The algorithm uses the following data structures.
We maintain a balanced binary search tree \(H\) that contains the viewpoints that are visible at the current point of the sweep. These viewpoints are sorted in the tree according to their corresponding key, which is the distance from the viewpoint to the current intersection point between the sweep line and the terrain. The keys are not stored in the tree because they change as the sweep line moves, but each of them can be computed when needed in constant time. The algorithm always chooses as the "owner" of the current Voronoi visibility region the viewpoint of \(H\) with the minimum key.
In \(H\), we perform insertions and deletions when viewpoints become visible and invisible, respectively. During these operations, when at some node of the tree we need to decide whether we move to its left or right subtree, we simply compute the key associated to the viewpoint in that node, and compare it with the key of the viewpoint that we want to insert or delete. Therefore, insertions and deletions can be performed in the standard way in \(O(\log m)\) time.
When the sweep line encounters a candidate for an event of type (iii) (let us call it \(q\)), the relative order of two visible viewpoints with respect to their current distance to the terrain changes (formally speaking, it changes right after \(q\)). A possible way to reflect this in \(H\) is to delete from the tree one of the two viewpoints associated with \(q\), and then insert it again using as keys the distances from the viewpoints to a point of \(\mathcal{T}\) infinitesimally to the right of \(q\) (and still to the left of the next event in the list). Thus, candidates for events of type (iii) can be processed in \(H\) in \(O(\log m)\) time.
Additionally, we use a data structure that allows us to answer ray-shooting queries in \(\mathcal{T}\) in \(O(\log n)\) time [5]. Such queries are used to decide whether a given pair of points are mutually visible, and to find the relevant intersections between \(\mathcal{T}\) and the bisector of a pair of viewpoints.
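To make the comparison-without-stored-keys idea concrete, the following is a minimal Python sketch (ours, not the paper's implementation): an unbalanced, insert-only tree in which every comparison recomputes the distance key at the current sweep point. This is valid because the relative order of the stored viewpoints can only change at a precomputed type-(iii) event; the paper's \(H\) is a balanced tree with deletions, giving the \(O(\log m)\) bound, and balancing and deletion are omitted here for brevity.

```python
import math

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

class VisibleSet:
    """Toy BST over the currently visible viewpoints, ordered by their
    distance to the sweep point. Keys are never stored: every comparison
    recomputes dist(viewpoint, t) for the sweep point t passed in."""

    def __init__(self):
        self.root = None               # node layout: [viewpoint, left, right]

    def insert(self, p, t):
        key, node = dist(p, t), self.root
        if node is None:
            self.root = [p, None, None]
            return
        while True:
            side = 1 if key < dist(node[0], t) else 2   # 1 = left, 2 = right
            if node[side] is None:
                node[side] = [p, None, None]
                return
            node = node[side]

    def closest(self):
        node = self.root               # minimum key = left-most node
        if node is None:
            return None
        while node[1] is not None:
            node = node[1]
        return node[0]
```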
### Description of the algorithm
Given \(q,r\) on \(\mathcal{T}\) with \(x(q)<x(r)\), we denote by \(\mathcal{T}(q,r)\) and \(\mathcal{T}[q,r]\) the open and closed portion of the terrain between \(q\) and \(r\), respectively.
Our algorithm, outlined in Fig. 5, takes as input \(\mathcal{T}\), \(\mathcal{P}\) and a list \(E\) of potential events sorted from left to right containing all events of \(\mathrm{ColVis}(\mathcal{T},\mathcal{P})\) together with the \(O(m^{2})\) candidates for events of type (iii). The list \(E\) also contains an event at the right-most point of the terrain.
The algorithm outputs \(\mathrm{VorVis}(\mathcal{T},\mathcal{P})\) as a list of pairs \(((q,r),p_{i})\) such that \(p_{i}\) is the closest visible viewpoint in \(\mathcal{T}(q,r)\) (if \(\mathcal{T}(q,r)\) is not visible from any viewpoint, we output \(((q,r),\bot)\)). The variables \(t_{\ell}\) and \(p_{*}\) in the algorithm refer to the left endpoint of the portion of \(\mathcal{T}\) currently analyzed by the algorithm and the closest visible viewpoint in that portion, respectively. The variable \(p_{\min}\) refers to the viewpoint in \(H\) with the minimum key (if \(H\) is empty, \(p_{\min}=\bot\)).
Initially, \(H:=\emptyset\), \(t_{\ell}:=\) left-most point of \(\mathcal{T}\), and \(p_{*}:=\bot\).
We repeat the following procedure until \(E\) is empty: We extract the next element \(q\) from \(E\), and proceed according to four cases, corresponding to lines 4, 7, 9, and 11 of the pseudocode in Fig. 5. For the sake of simplicity, in the description in Fig. 5 we deliberately ignore the situation where several events of distinct types occur at the same point of \(\mathcal{T}\), which we tackle in the next paragraph. The cases in lines 4, 7 and 9 are clear. Regarding the case starting at line 11, in line 12 we update the positions of \(p_{i}\) and \(p_{j}\) in \(H\) as explained in Section 3.3 (see the paragraph where we discuss the case where the sweep line encounters a candidate for an event of type (iii)). We also point out that, if \(q\) is an intersection point between \(\mathcal{T}\) and more than one bisector of type \(b_{i,j}\), the bisectors can be processed in any order.4
Footnote 4: By our general position assumptions, \(q\) is not equidistant from three or more viewpoints, so no viewpoint is involved in more than one of the bisectors through \(q\), and the processing order is irrelevant.
Figure 5: Computation of \(\mathrm{VorVis}(\mathcal{T},\mathcal{P})\). \(E\) is the list of potential events, \(H\) is the tree containing the viewpoints that are currently visible, \(t_{\ell}\) is the left endpoint of the current portion of \(\mathcal{T}\), \(p_{*}\) is the closest visible viewpoint in that portion, and \(p_{\min}\) is the viewpoint in \(H\) with the minimum key.
It remains to explain how to deal with the situation where several events of distinct types occur at the same point of \(\mathcal{T}\). In this case, we first perform the modifications in \(H\) triggered by _all_ the events at that point (insertions of viewpoints becoming visible, deletions of viewpoints becoming invisible and updates of the positions of pairs of viewpoints). After updating \(H\) in this way, we update \(p_{\min}\); if \(p_{\min}\neq p_{*}\), we output \(((t_{\ell},q),p_{*})\), set \(t_{\ell}:=q\), and set \(p_{*}:=p_{\min}\).
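Putting the pieces together, the following hedged Python sketch mirrors the sweep described above under simplifying assumptions: the event list \(E\) is assumed precomputed, and a linear scan over a plain set replaces the balanced tree \(H\) (so each event costs \(O(m)\) instead of \(O(\log m)\)); all names are ours. A degenerate zero-length output pair at the right-most point, if any, would simply be discarded.

```python
import math

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def sweep_vorvis(leftmost, events):
    """Simplified left-to-right sweep over a precomputed event list E.

    events: list of (point, appear, disappear) sorted from left to right,
      ending with an event at the right-most terrain point; `appear` and
      `disappear` are the sets of viewpoints changing visibility status
      there (both empty for pure type-(iii) events, where only the
      distance order of two visible viewpoints changes).
    Returns a list of ((q, r), owner) pairs; owner is None where no
    viewpoint is visible.
    """
    visible = set()                 # stand-in for the balanced tree H
    t_left, owner = leftmost, None
    output = []
    for q, appear, disappear in events:
        visible |= appear           # viewpoints becoming visible at q
        visible -= disappear        # viewpoints becoming invisible at q
        # Closest visible viewpoint just after q (the paper evaluates keys
        # infinitesimally to the right of q); a linear scan replaces the
        # O(log m) minimum query on H.
        p_min = min(visible, key=lambda p: dist(p, q), default=None)
        if p_min != owner:
            output.append(((t_left, q), owner))
            t_left, owner = q, p_min
    output.append(((t_left, events[-1][0]), owner))
    return output
```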
### Correctness and running time
We first show that the algorithm for \(\mathrm{VorVis}(\mathcal{T},\mathcal{P})\) always selects the closest visible viewpoint. Changes in the visibility status of the viewpoints correspond to events of \(\mathrm{ColVis}(\mathcal{T},\mathcal{P})\), which are added to \(E\), so the set of visible viewpoints contained in \(H\) is correct at any time of the sweep. Regarding the distances from the viewpoints to the terrain, every time that a viewpoint is swept or becomes visible, it is inserted in \(H\) correctly (according to its current distance to the terrain). Changes in the order of the visible viewpoints with respect to their distances to \(\mathcal{T}\) coincide with intersections of \(\mathcal{T}\) with the bisectors among them. As argued in the proof of Theorem 1, for every pair of viewpoints it happens at most twice that both viewpoints are visible at an intersection point between \(\mathcal{T}\) and their bisector. Such an event is precomputed and stored in \(E\), and later processed by the algorithm.
We next analyze the complexity of the algorithm.
The map \(\mathrm{ColVis}(\mathcal{T},\mathcal{P})\) can be computed in \(O(n+(m^{2}+k_{c})\log n)\) time using the algorithm in [12]. This map has at most \(k_{c}\) regions; however, due to the fact that several viewpoints might become visible or invisible at the same time, when sweeping \(\mathrm{ColVis}(\mathcal{T},\mathcal{P})\) from left to right, the number of times that a viewpoint becomes visible or invisible, added over all viewpoints, can be higher; an upper bound of \(k_{c}+m^{2}\) is given in [12]. Each time that a viewpoint changes its visibility status, we perform an insertion or a deletion in \(H\), which takes \(O(\log m)\) time. The algorithm processes at most \(m^{2}\) intersections between the terrain and bisectors of pairs of viewpoints in \(O(\log m)\) time each. Consequently, \(\mathrm{VorVis}(\mathcal{T},\mathcal{P})\) can be extracted from \(\mathrm{ColVis}(\mathcal{T},\mathcal{P})\) in \(O((m^{2}+k_{c})\log m)\) time. The space complexity of the algorithm is the space required to store the terrain, the events and the data structures, that is, \(O(n+m^{2}+k_{c})\).
We conclude with the following:
**Theorem 2**.: _The Voronoi visibility map of a 1.5D terrain can be constructed in \(O(n+(m^{2}+k_{c})\log n)\) time and \(O(n+m^{2}+k_{c})\) space._
## 4 Extensions
In this section, we present adaptations of the previous algorithm to compute related maps.
### Higher order Voronoi visibility maps
We define the \(k\)th-order Voronoi visibility map \(\mathrm{VorVis}_{k}(\mathcal{T},\mathcal{P})\) as a partitioning of \(\mathcal{T}\) into regions that have the same set of \(\ell\) closest visible viewpoints, where \(\ell\) is the minimum of \(k\) and the number of visible viewpoints in the region. Observe that the \(m\)th-order Voronoi visibility map is equal to \(\mathrm{ColVis}(\mathcal{T},\mathcal{P})\).
We can easily compute \(\mathrm{VorVis}_{k}(\mathcal{T},\mathcal{P})\) by adapting the algorithm from Section 3. In this case, we need to maintain two additional variables: the total number \(b\) of viewpoints that are visible at the point currently swept by the line, and, from the current set of \(\ell\) closest visible viewpoints, the furthest one, denoted \(p_{\max}\). Analogously to the algorithm for \(\mathrm{ColVis}(\mathcal{T},\mathcal{P})\), for space reasons our algorithm for \(\mathrm{VorVis}_{k}(\mathcal{T},\mathcal{P})\) returns a doubly-linked list with the vertices of \(\mathrm{VorVis}_{k}(\mathcal{T},\mathcal{P})\) sorted from left to right, together with the following information: The set of \(\ell\) closest visible viewpoints is specified for the first component of \(\mathrm{VorVis}_{k}(\mathcal{T},\mathcal{P})\) and, for the other components, the algorithm outputs the changes in the set of \(\ell\) closest visible viewpoints with respect to the component immediately to the left.
Let \(q\) be the next element from the list of events \(E\), computed as in the previous section. We explain in detail the case where one or more viewpoints become visible at \(q\), and leave the remaining cases to the interested reader. Let \(\mathcal{P}^{\prime}\) denote the set of viewpoints becoming visible at \(q\). We update \(b\). If, after this update, \(b\leq k\), we report vertex \(q\) together with the set \(\mathcal{P}^{\prime}\) (containing the new viewpoints in the set of \(\ell\) closest visible viewpoints). We also insert the viewpoints of \(\mathcal{P}^{\prime}\) in \(H\). Otherwise, let \(b^{\prime}\) and \(b\) be the number of visible viewpoints right before \(q\) and at \(q\), respectively. If \(b^{\prime}<k\), we remove from \(\mathcal{P}^{\prime}\) the set of \(k-b^{\prime}\) closest viewpoints to \(q\) (obtained after sorting the viewpoints of \(\mathcal{P}^{\prime}\) according to their distance to \(q\)), we add these viewpoints to a set \(\mathcal{P}^{\prime}_{in}\), and we insert them in \(H\). After possibly performing this operation in \(\mathcal{P}^{\prime}\), we proceed as follows: We extract the closest viewpoint to \(q\) from \(\mathcal{P}^{\prime}\); if it is closer to \(q\) than \(p_{\max}\), we add this viewpoint to \(\mathcal{P}^{\prime}_{in}\), we insert it in \(H\), we add viewpoint \(p_{\max}\) to \(\mathcal{P}_{out}\), and we update \(p_{\max}\). Notice that \(p_{\max}\) can be updated by finding the predecessor in \(H\) of the "old" \(p_{\max}\), that is, in \(O(\log m)\) time. We repeat this process until \(\mathcal{P}^{\prime}\) is empty or the next element in \(\mathcal{P}^{\prime}\) is farther from \(q\) than \(p_{\max}\). Then we insert the remaining viewpoints of \(\mathcal{P}^{\prime}\) (if any) in \(H\). Finally, we report vertex \(q\) together with the set \(\mathcal{P}^{\prime}_{in}\) (containing the new viewpoints in the set of \(\ell\) closest visible viewpoints) and the set \(\mathcal{P}_{out}\) (containing the viewpoints that stop belonging to the set of \(\ell\) closest visible viewpoints).
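The bookkeeping above can be summarized, at the cost of the stated running time, by recomputing the \(\ell\) closest visible viewpoints from scratch at every event and reporting set differences; a small illustrative sketch (our names, an \(O(m\log m)\)-per-event stand-in for the tree-based updates described above):

```python
import math

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def k_closest_visible(visible, q, k):
    """The set of ell = min(k, |visible|) closest visible viewpoints at q."""
    return set(sorted(visible, key=lambda p: dist(p, q))[:k])

def report_changes(old_set, new_set):
    """Changes with respect to the component immediately to the left."""
    return new_set - old_set, old_set - new_set   # (P'_in, P_out)
```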
Clearly, every change in the visibility status of a viewpoint and every intersection of \(\mathcal{T}\) with the bisector of two visible viewpoints can be processed in \(O(\log m+\log n)\) time. Hence, we obtain:
**Theorem 3**.: _The kth-order Voronoi visibility map of a 1.5D terrain can be constructed in \(O(n+(m^{2}+k_{c})\log n)\) time and \(O(n+m^{2}+k_{c})\) space._
### Other distances
Given \(q,r\) on \(\mathcal{T}\) with \(x(q)<x(r)\), two other natural distances between \(q\) and \(r\) are the Euclidean length of the portion \(\mathcal{T}[q,r]\), which we will call _Euclidean distance along the terrain_, and the number of vertices in the portion \(\mathcal{T}(q,r)\), which we will call _link distance_.5 We may define the Voronoi visibility map of \(\mathcal{T}\) based on these distances.
Footnote 5: For the link distance, we take the open portion of the terrain \(\mathcal{T}(q,r)\) so that any two points on the same edge (including the endpoints) are at (link) distance zero.
The relevant difference with respect to the standard case is the shape of the bisectors between two viewpoints \(p_{i}\) and \(p_{j}\). In the case of the Euclidean distance along the terrain, there is exactly one point of \(\mathcal{T}\) that is equidistant to \(p_{i}\) and \(p_{j}\), and this point can be computed in \(O(\log n)\) time after preprocessing \(\mathcal{T}\) so that the Euclidean distance along the terrain between any pair of vertices of \(\mathcal{T}\) can be computed in \(O(1)\) time.6 Regarding the link distance, if there is an odd number of vertices between \(p_{i}\) and \(p_{j}\), there is exactly one vertex of \(\mathcal{T}\) that is equidistant to \(p_{i}\) and \(p_{j}\), and this vertex can be computed in \(O(1)\) time. However, if there is an even number of vertices between \(p_{i}\) and \(p_{j}\), there is an open edge of \(\mathcal{T}\) such that all of its points are at the same link distance from \(p_{i}\) and \(p_{j}\). In this case, we must either allow the border between two consecutive Voronoi regions to be 1-dimensional, or, if simplicity is more desirable, we might (artificially) select an interior point of this edge as the intersection point between \(\mathcal{T}\) and the bisector of \(p_{i}\) and \(p_{j}\).
Footnote 6: If we store, for every vertex \(q\) of \(\mathcal{T}\), the Euclidean distance along the terrain \(q_{d}\) between \(q\) and the left-most point of \(\mathcal{T}\), then the Euclidean distance along the terrain between vertices \(q\), \(r\) of \(\mathcal{T}\) such that \(x(q)<x(r)\) is \(r_{d}-q_{d}\).
After adding the corresponding candidates for events of type (iii) based on the explanations in the previous paragraph, the rest of the algorithm is equal to the one for the general case. The running time remains the same because, given a pair of points on \(\mathcal{T}\), in both cases the distance between them can be computed in \(O(1)\) time. Therefore, we conclude:
**Theorem 4**.: _The Voronoi visibility map of a 1.5D terrain with respect to the Euclidean distance along the terrain or to the link distance can be constructed in \(O(n+(m^{2}+k_{c})\log n)\) time and \(O(n+m^{2}+k_{c})\) space._
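The preprocessing described in footnote 6 is a standard prefix-sum computation; a brief illustrative sketch (our names; `pairwise` requires Python 3.10+):

```python
import math
from itertools import accumulate, pairwise

def arc_prefix(terrain):
    """Cumulative Euclidean length along the terrain up to each vertex,
    i.e. q_d in footnote 6, for vertices given in left-to-right order."""
    edges = (math.dist(u, v) for u, v in pairwise(terrain))
    return [0.0] + list(accumulate(edges))

def arc_dist(prefix, i, j):
    """Euclidean distance along the terrain between vertices i and j."""
    return abs(prefix[j] - prefix[i])
```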
## 5 Computation of \(r^{*}\)
We recall that \(r^{*}\) is the minimum value of \(r\) such that, if the viewpoints can only see objects that are within distance \(r\), the visibility map of \(\mathcal{T}\) does not change.
Let \(\mathcal{P}^{r}\) denote the set of viewpoints \(\mathcal{P}\) with the restriction that the visibility range of the viewpoints is \(r\). We then may define Vis(\(\mathcal{T},\mathcal{P}^{r}\)), VorVis(\(\mathcal{T},\mathcal{P}^{r}\)), etc., in the natural way. Notice that, for \(\mathcal{P}^{\infty}\), we obtain the same objects as in the standard case.
Let \(d(x,y)\) denote the Euclidean distance between two points \(x,y\in\mathbb{R}^{2}\).
**Lemma 3**.: \(r^{*}=\max\limits_{i=1,\ldots,m}\{\sup\limits_{x\in\mathcal{W}_{\mathcal{T}}(p_{i},\mathcal{P}^{\infty})}d(p_{i},x)\}\)_._
Proof.: Let \(p_{i}\) and \(x\) be a viewpoint and a point of \(\mathcal{T}\) achieving the maximum in the right hand expression. If \(r^{*}<d(p_{i},x)\), \(x\) would not be visible from \(p_{i}\) in Vis(\(\mathcal{T},\mathcal{P}^{r^{*}}\)). Since \(x\) belongs to the boundary of \(\mathcal{W}_{\mathcal{T}}(p_{i},\mathcal{P}^{\infty})\), all other viewpoints seeing \(x\) have a distance to \(x\) that is greater than or equal to \(d(p_{i},x)\); thus, \(x\) would also not be visible from any of them in Vis(\(\mathcal{T},\mathcal{P}^{r^{*}}\)). Since \(x\) is visible in Vis(\(\mathcal{T},\mathcal{P}^{\infty}\))7, we reach a contradiction. Therefore, \(r^{*}\geq d(p_{i},x)\).
Footnote 7: It follows from our definition of visibility that the maximal visible portions of \(\mathcal{T}\) are closed and, hence, the points on the boundary of the Voronoi viewsheds are visible.
On the other hand, to keep Vis(\(\mathcal{T},\mathcal{P}^{\infty}\)) unchanged, it is enough to maintain the closure of \(\mathcal{W}_{\mathcal{T}}(p_{i},\mathcal{P}^{\infty})\) visible for all \(i\), since Vis(\(\mathcal{T},\mathcal{P}^{\infty}\)) is equal to the union of the closures of the regions \(\mathcal{W}_{\mathcal{T}}(p_{i},\mathcal{P}^{\infty})\). If we set a visibility range of \(\sup\limits_{x\in\mathcal{W}_{\mathcal{T}}(p_{i},\mathcal{P}^{\infty})}d(p_{i},x)\), the closure of \(\mathcal{W}_{\mathcal{T}}(p_{i},\mathcal{P}^{\infty})\) indeed remains visible. Consequently, \(r^{*}\leq\max\limits_{i=1,\ldots,m}\{\sup\limits_{x\in\mathcal{W}_{\mathcal{T}}(p_{i},\mathcal{P}^{\infty})}d(p_{i},x)\}\).
Using this characterization of \(r^{*}\), we can prove the following:
**Theorem 5**.: _The problem of computing the minimum value \(r^{*}\) such that \(\mathrm{Vis}(\mathcal{T},\mathcal{P}^{r^{*}})=\mathrm{Vis}(\mathcal{T},\mathcal{ P}^{\infty})\) can be solved in \(O(n+(m^{2}+k_{c})\log n)\) time._
Proof.: By Lemma 3, it suffices to consider the distances between the vertices of \(\mathrm{VorVis}(\mathcal{T},\mathcal{P}^{\infty})\) (that is, the points on the boundary of the Voronoi viewsheds) and their associated viewpoints. Consequently, the problem can be trivially solved in linear time if \(\mathrm{VorVis}(\mathcal{T},\mathcal{P}^{\infty})\) is known.
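Following this proof, once \(\mathrm{VorVis}(\mathcal{T},\mathcal{P}^{\infty})\) is available, \(r^{*}\) can be read off from the boundary points of the Voronoi viewsheds; since the Euclidean distance to a fixed viewpoint is convex along each straight terrain edge, it suffices to check the region endpoints and the terrain vertices inside each region. A hedged Python sketch (our names and input format):

```python
import math
from bisect import bisect_left, bisect_right

def compute_r_star(vorvis, terrain):
    """r* per Lemma 3 / Theorem 5, linear in the size of the map.

    vorvis: list of ((q, r), owner) pairs as produced by the sweep, with
      owner None on invisible portions.
    terrain: list of (x, y) vertices sorted by x-coordinate.
    """
    xs = [x for x, _ in terrain]
    r_star = 0.0
    for (q, r), owner in vorvis:
        if owner is None:
            continue
        # Candidates: region endpoints plus terrain vertices strictly inside.
        inside = terrain[bisect_right(xs, q[0]):bisect_left(xs, r[0])]
        for c in [q, r] + inside:
            r_star = max(r_star, math.dist(owner, c))
    return r_star
```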
## 6 Final remark
As indicated in [12], in the running time of the algorithm to compute \(\mathrm{ColVis}(\mathcal{T},\mathcal{P})\), the term \(m^{2}\log n\) disappears if we assume that no two viewpoints change from invisible to visible at the same point of \(\mathcal{T}\). This can always be achieved by infinitesimally perturbing the terrain. However, such a perturbation does not make the same term disappear from the running time of the presented algorithm to compute \(\mathrm{VorVis}(\mathcal{T},\mathcal{P})\). Given that one of the bounds in Theorem 1 guarantees that \(k_{v}=O(k_{c}+m)\), it remains an open problem to design an algorithm for \(\mathrm{VorVis}(\mathcal{T},\mathcal{P})\) that is as fast as the one for \(\mathrm{ColVis}(\mathcal{T},\mathcal{P})\) on all possible instances.
|
2306.01288 | Theoretical light curve models of the symbiotic nova CN Cha -- Optical
flat peak for three years | CN Cha is a slow symbiotic nova characterized by a three-years-long optical
flat peak followed by a rapid decline. We present theoretical light curves for
CN Cha, based on hydrostatic approximation, and estimate the white dwarf (WD)
mass to be $\sim 0.6 ~M_\odot$ for a low metal abundance of Z = 0.004. This
kind of flat peak novae are border objects between classical novae having a
sharp optical peak and extremely slow novae, the evolutions of which are too
slow to be recognized as a nova outburst in human timescale. Theoretically,
there are two types of nova envelope solutions, static and optically-thick
wind, in low mass WDs ($\lesssim 0.7 ~M_\odot$). Such a nova outburst begins
first in a hydrostatic manner, and later it could change to an optically-thick
wind evolution due to perturbation by the companion star in the nova envelope.
Multiple peaks are a reflection of the relaxation process of transition. CN Cha
supports our explanation on the difference between long-lasted flat peak novae
like CN Cha and multiple peak novae like V723 Cas, because the companion star
is located far outside, and does not perturb, the nova envelope in CN Cha. | Mariko Kato, Izumi Hachisu | 2023-06-02T06:14:13Z | http://arxiv.org/abs/2306.01288v1 | # Theoretical light curve models of the symbiotic nova CN Cha -- Optical flat peak for three years
###### Abstract
CN Cha is a slow symbiotic nova characterized by a three-year-long optical flat peak followed by a rapid decline. We present theoretical light curves for CN Cha, based on the hydrostatic approximation, and estimate the white dwarf (WD) mass to be \(\sim 0.6~{}M_{\odot}\) for a low metal abundance of \(Z=0.004\). Flat-peak novae of this kind are border objects between classical novae, which have a sharp optical peak, and extremely slow novae, whose evolutions are too slow to be recognized as a nova outburst on a human timescale. Theoretically, there are two types of nova envelope solutions, static and optically-thick wind, in low-mass WDs (\(\lesssim 0.7~{}M_{\odot}\)). Such a nova outburst begins in a hydrostatic manner, and later it could change to an optically-thick wind evolution owing to perturbation of the nova envelope by the companion star. Multiple peaks are a reflection of the relaxation process of this transition. CN Cha supports our explanation of the difference between long-lasting flat-peak novae like CN Cha and multiple-peak novae like V723 Cas, because in CN Cha the companion star is located far outside the nova envelope and does not perturb it.
novae, cataclysmic variables -- stars: individual (CN Cha, PU Vul, V723 Cas) -- stars: winds

Mariko Kato, Izumi Hachisu
## 1 Introduction
CN Cha is a galactic symbiotic nova that erupted in late 2012 or early 2013. It was identified as a Mira variable (Hoffmeister, 1963) long before the outburst. The outburst shows a stable flat peak at \(m_{V}\sim 8\) mag that lasted three years, followed by a rapid decline. Lancaster et al. (2020) summarized the observational information in the literature and in databases and made a comprehensive \(V\) light curve (see their Figure 2). They also presented an optical/near-IR spectrum taken on UT 2019 March 12 that shows emission lines including P Cygni profiles. The light curve of CN Cha resembles the eight-year-long flat peak of the symbiotic nova PU Vul.
CN Cha is located one kpc below the galactic plane for the Gaia early Data Release 3 (Gaia eDR3) distance of \(d=3.05^{+0.19}_{-0.17}\) kpc (Bailer-Jones et al., 2021). Lancaster et al. (2020) presented a possible historical orbit in the Galaxy and concluded that this star is most likely a thick disk component star older than 8 Gyr. Note that wind mass loss is systematically weaker, and the evolution slower, in Population II novae (Kato et al., 2013) than in Population I novae (e.g., Della Valle, 2002, for a summary).
A long-lasting flat peak rarely appears among novae; most show a sharp optical maximum (e.g., Strope et al., 2010). A flat peak in novae is theoretically explained as follows:
A nova is a thermonuclear runaway event on a mass-accreting white dwarf (WD) (Nariai et al., 1980; Iben, 1982; Prialnik et al., 1986; Prialnik & Kovetz, 1995; Sion et al., 1979; Sparks et al., 1978; Kato et al., 2017). Once hydrogen shell burning begins, a hydrogen-rich envelope atop the WD expands to a giant size. Strong optically-thick winds are accelerated (Kato & Hachisu, 1994) that blow off a large part of the envelope. The nova evolves fast to reach its optical maximum and immediately enters the decay phase. This wind mass loss accelerates the nova evolution and, as a result, stronger wind mass-loss makes a sharper maximum in the optical light curve (e.g., Hachisu et al., 2020).
In general, the wind is stronger in more massive WDs and/or for larger heavy element enrichment (Hachisu & Kato, 2010; Kato et al., 2013). On the other hand, in less massive WDs and/or lower metallicity environments, the nova evolves more slowly because of the weaker mass loss. In an extreme case, no optically thick winds are accelerated and, then, the nova evolves very slowly (Kato & Hachisu, 2009). The envelope structure hardly changes because its mass is decreasing due only to hydrogen shell-burning, on a timescale about a thousand times longer than that due to strong winds. Therefore, the nova stays at an expanded stage for a long time. This makes the optical peak flat for a thousand times longer.
A good example of a long-lasting flat peak is the symbiotic nova PU Vul (see Cuneo et al., 2018, for a recent summary). This is a well-observed nova, but there is no indication of optically-thick wind mass loss (e.g., Yamashita et al., 1982; Kanamitsu, 1991). The flat peak lasts as long as eight years (see Figure 1). If such a flat peak lasted much longer, e.g., a century, the outbursting WD might not be recognized as a nova, but instead as a supergiant. The flat-peak nova PU Vul could be an object on the border at which we can still recognize a thermonuclear runaway event as a nova on a human timescale.
Theoretically, a hydrogen shell flash can occur on a WD with a mass as small as 0.4 \(M_{\odot}\) (Nariai et al., 1980; Yaron et al., 2005; Shen et al., 2009). On the other hand, the WD masses estimated in novae are larger (\(>0.55-0.6~{}M_{\odot}\); see, e.g., Horne et al., 1993; Smith et al., 1998; Thoroughgood et al., 2001; Kato & Hachisu, 2011; Hachisu & Kato, 2015; Selvelli & Gilmozzi, 2019). A possible explanation of this discrepancy is that we have missed such variables, which fade very slowly after a red-giant-like stage.
Kato & Hachisu (2009) studied the conditions for the occurrence of optically thick winds. The border WD mass for the occurrence of winds lies at \(M_{\rm WD,cr}\approx 0.6\) - 0.7 \(M_{\odot}\), depending on the metallicity. A shell flash on a less massive WD (\(M_{\rm WD}<M_{\rm WD,cr}\)) evolves too slowly to be recognized as a nova outburst on a human timescale. PU Vul is a rare object that was identified as a nova close to this border.
In this paper, we present light curve models for CN Cha and determine the WD mass. We also discuss whether or not CN Cha is the next close example of a border object. This paper is organized as follows. First, we compare the light curve of CN Cha with those of various types of novae in Section 2. Then, we present our model light curves to estimate the WD mass in Section 3. Discussion and conclusions follow in Sections 4 and 5, respectively.
Figure 2: **Top panel:** The theoretical \(V\) light curves of model A in Table 1. Observed data are taken from ASAS-SN (\(V\): red dots, \(g\): magenta dots), AAVSO (cyan blue dots) and Lancaster et al. (2020) (green squares). The dotted line parts represent the pre-maximum phase in which we determined \(T_{\rm ph}\) to be consistent with observational estimates (green squares). The optically thin mass loss is assumed when the photospheric temperature rises to \(\log T_{\rm ph}\) (K) \(>4.0\). The orange, black, and blue lines correspond to the optically thin wind mass-loss rates of \(\dot{M}_{\rm wind}=3\times 10^{-6}\), 5\(\times 10^{-7}\), and 1\(\times 10^{-7}\)\(M_{\odot}\) yr\({}^{-1}\), respectively. **Middle panel:** The photospheric temperature \(\log T_{\rm ph}\) for each model. The open circle indicates the epoch when optically thin mass loss begins. **Bottom panel:** The temporal change of UV flux (UVW2: 1120-2640 Å) for each \(\dot{M}_{\rm wind}\) model. The wind mass loss rate is attached beside the curve in units of \(10^{-7}\)\(M_{\odot}\) yr\({}^{-1}\).
## 2 Comparison of Optical Light Curves Among Different Speed Classes
Figure 1 shows light curves of well-observed novae of different speed classes1. We compare them with that of CN Cha and clarify the differences between them.
Footnote 1: The nova speed class is defined by \(t_{3}\) or \(t_{2}\) (days of 3 or 2 mag decay from optical maximum): very fast novae (\(t_{2}\leq 10\) day), fast novae (\(11\leq t_{2}\leq 25\) day), moderately fast novae (\(26\leq t_{2}\leq 80\) day), slow novae (\(81\leq t_{2}\leq 150\) day), and very slow novae (\(151\leq t_{2}\leq 250\) day), as defined by Payne-Gaposchkin (1957).
### Fast Novae
The bottom panel in Figure 1 shows, from left to right, the very fast novae V1500 Cyg 1975 and V838 Her 1991, fast nova V1668 Cyg 1978, and moderately fast nova PW Vul 1984#1. They show a sharp optical peak. The WD masses of these novae have been obtained by theoretically reproducing their light curves in the decay phase. The estimated WD mass is \(1.2~{}M_{\odot}~{}(X,Y,Z,Z_{\rm CO},Z_{\rm Ne})=(0.55,0.30,0.02,0.1,0.03)\) for V1500 Cyg (see Appendix of Hachisu & Kato, 2014), \(1.35~{}M_{\odot}~{}(0.55,0.33,0.02,0.03,0.07)\) for V838 Her (Kato et al., 2009b), \(0.98~{}M_{\odot}~{}(0.45,0.18,0.02,0.35,0.0)\) for V1668 Cyg (Hachisu & Kato, 2016a), \(0.83~{}M_{\odot}~{}(0.55,0.23,0.02,0.2,0.0)\) for PW Vul (Hachisu & Kato, 2015).
### PU Vul: a Slow Nova with a Flat Peak
The symbiotic nova PU Vul is a unique nova with a long-lasting optical flat peak (see Figure 1a). A very stable flat peak lasted as long as eight years. The early spectra mimicked those of an F supergiant, with no indication of strong mass ejection (Yamashita et al., 1982; Kanamitsu, 1991a). The optical spectrum was absorption-dominated until JD 2,446,000, but showed a distinct nebular feature on JD 2,447,000 (Iijima, 1989; Kanamitsu et al., 1991b). On JD 2,448,000, the optical and UV spectra showed rich emission lines, which are typical of the nebular phase (Vogel & Nussbaumer, 1992; Kanamitsu et al., 1991b; Tomov et al., 1991). P Cygni line profiles appeared in the decay phase, indicating optically thin mass ejection from the WD photosphere (Belyakina et al., 1989; Vogel & Nussbaumer, 1992; Sion et al., 1993; Nussbaumer & Vogel, 1996).
Based on these observational aspects, Kato et al. (2012) presented a light curve model for PU Vul under the assumption of a hydrostatic envelope, i.e., no optically thick winds. They calculated a theoretical light curve that fits the observed UV 1455 Å and optical \(V\) data. They used a narrow (20 Å wide) spectral band centered at 1455 Å, which is known to be emission-line free and representative of the continuum flux in classical novae (Cassatella et al., 2002).
Their model shown in Figure 1 is a \(0.6~{}M_{\odot}\) WD with an optically thin mass-loss rate of \(5\times 10^{-7}~{}M_{\odot}\) yr\({}^{-1}\) in the decay phase. The chemical composition of the envelope is assumed to be \(X=0.7\), \(Y=0.29\), and \(Z=0.01\) because the spectra show no indication of C, O, or Ne enrichment. The blue line shows the \(V\) magnitude of the photospheric emission. This model is obtained from the UV light curve fitting. The black line represents the total \(V\) flux, which is the sum of the photospheric emission and the emission from the optically thin plasma surrounding the WD. The origin of this plasma is the optically thin mass loss in the decay phase (see Kato et al., 2012, for more detail).
### Slow Novae with Multiple Peaks
The slow nova V723 Cas shows oscillatory behavior around \(M_{V}\sim-5\) in the early phase, which settles down to a smooth decline in the later phase (Figure 1b). The WD mass in V723 Cas was estimated from a model light curve analysis to be 0.5-0.55 \(M_{\odot}\) with a chemical composition of \((0.55,0.23,0.02,0.2,0.0)\) (Hachisu & Kato, 2015).
Kato & Hachisu (2011) presented a light curve model assuming that the outburst of V723 Cas began as a hydrostatic envelope without winds, like the PU Vul evolution, but somewhat later changed to a normal evolution with a smooth light curve decline, i.e., an optically thick wind phase. This model is indicated by the black line in Figure 1b.
| Model | \(M_{\rm WD}\) (\(M_{\odot}\)) | \(X\) | \(Y\) | \(Z\) | \(Z_{\rm CO}\) | \(\log R_{\rm WD}\) (\(R_{\odot}\)) | \(M_{\rm ig}\) (\(10^{-5}~{}M_{\odot}\)) | \((m-M)_{V}\) | \(\dot{M}_{\rm wind}\) (\(10^{-7}~{}M_{\odot}\) yr\({}^{-1}\)) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| A | 0.57 | 0.70 | 0.296 | 0.004 | 0.2 | \(-1.82\) | 4.4 | 13.2 | 1, 5, 30 |
| B | 0.55 | 0.70 | 0.296 | 0.004 | 0.2 | \(-1.80\) | 5.5 | 13.05 | 2, 5, 20 |
| C | 0.6 | 0.70 | 0.296 | 0.004 | 0.0 | \(-1.83\) | 14 | 13.05 | 5, 50, 80 |
| D | 0.6 | 0.70 | 0.29 | 0.01 | 0.0 | \(-1.90\) | 4.0 | 13.2 | 5, 30 |

Table 1: Model parameters
Such a transition from a hydrostatic envelope to winds does not normally occur because a red-giant-like hydrostatic envelope has a very different structure from that of a wind mass-loss envelope. Kato & Hachisu (2011) concluded that the transition could occur only when the extended nova envelope engulfs the companion, whose motion in the envelope triggers the transition. The large energy release owing to friction produces additional luminosity. We see the relaxation process as large spike-like oscillations of the light curve.
We do not expect such a transition in CN Cha or PU Vul because they are very wide binaries and their nova envelopes/photospheres do not engulf the companion.
### CN Cha
Figure 2 shows the optical data for CN Cha, \(V\) (red dots) and \(g\) (magenta dots) taken from ASAS-SN (Shappee et al., 2014; Jayasinghe et al., 2019), and \(V\) (cyan blue dots) from AAVSO. We see that the flat-phase variation in CN Cha is much smaller than that in V723 Cas or in other similar novae, like HR Del and V5558 Sgr, in which the optical/\(V\) magnitudes change by 2-3 mag (Figure 6 in Kato & Hachisu, 2011).
Lancaster et al.'s spectrum taken on JD 2458554.672 (downward arrow labeled "Sp" in Figure 1a) shows many narrow emission lines, including P Cygni profiles, suggesting optically thin wind mass loss. Although we do not find UV flux data in the decay phase of CN Cha, its resemblance to PU Vul suggests that the \(V\)-band flux is dominated by that of the optically thin plasma outside the WD photosphere. Our model in Figure 2 is calculated from the photospheric blackbody emission, which does not include the \(V\) flux from optically thin nebular emission. Thus, we may take the observed data as an upper limit for our model \(V\) light curve.
## 3 Model Light Curve Fittings
### Method
In slow novae, the envelope is almost in hydrostatic balance until it reaches optical maximum. During the flat optical peak and afterwards, the energy generation rate of nuclear burning is balanced by the energy loss from the photosphere, i.e., \(L_{\rm nuc}=L_{\rm ph}\). First, we integrated the hydrostatic equation together with the diffusion equation from the photosphere to the base of the envelope. We use the OPAL opacity (Iglesias & Rogers, 1993). The model parameters are the WD mass, the WD radius (radius at the base of the envelope), and the chemical composition of the envelope. Then, we make a sequence of envelope solutions in order of decreasing mass. This sequence is a good approximation of the nova evolution when no optically thick winds occur.
Figure 4: Distance modulus in the \(V\) band obtained from our theoretical light curve fittings to the flat-peak phase of CN Cha. Filled red dots: models A and B. Open black circles: models with no CO enrichment, including model C. Open orange circle: model D. The horizontal solid/dotted blue lines denote \((m-M)_{V}=12.9\pm 0.3\).
Figure 3: **Top panel:** Same as those in Figure 2(top), but for model B. **Middle panel:** Model C. **Bottom panel:** Model D.
The time interval \(\Delta t\) between two successive solutions is obtained from \(\Delta t=\Delta M_{\rm env}/(\dot{M}_{\rm nuc}+\dot{M}_{\rm wind})\), where \(\Delta M_{\rm env}\) is the difference between the envelope masses of the two successive solutions, and \(\dot{M}_{\rm wind}\) and \(\dot{M}_{\rm nuc}\) are the mass-loss rate of the optically thin winds and the hydrogen nuclear burning rate, respectively. Here, \(\dot{M}_{\rm nuc}\) is calculated from the envelope structure and the chemical composition of the envelope, but we assume the value of \(\dot{M}_{\rm wind}\) as mentioned below. This method has been used to follow the supersoft X-ray phase of novae after the optically thick wind stops (Kato & Hachisu, 1994; Sala & Hernanz, 2005) or to follow the PU Vul evolution, in which optically thick winds do not occur throughout the outburst (Kato et al., 2011, 2012). In the flat-peak phase, we have no wind mass loss, \(\dot{M}_{\rm wind}=0\). In the later decay phase of \(T_{\rm ph}>10,000\) K, we assume optically thin mass loss (\(\dot{M}_{\rm wind}\sim\) several \(\times 10^{-7}\)\(M_{\odot}\) yr\({}^{-1}\)), following Kato et al. (2011, 2012) for PU Vul.
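For concreteness, the quasi-static time stepping described above can be sketched in a few lines of code. This is a minimal illustration only, assuming a precomputed sequence of envelope solutions; the arrays below are hypothetical placeholders, not the actual model grid.

```python
import numpy as np

# Hypothetical sequence of envelope solutions, ordered by decreasing mass
# (in the actual calculation these come from integrating the hydrostatic
# and diffusion equations with OPAL opacities).
M_env = np.array([5.5e-5, 5.0e-5, 4.5e-5, 4.0e-5, 3.5e-5])     # M_sun
Mdot_nuc = np.array([1.7e-7, 1.7e-7, 1.6e-7, 1.6e-7, 1.5e-7])  # M_sun / yr
Mdot_wind = 5.0e-7  # assumed constant optically thin wind (0 in the flat peak)

# Time interval between successive solutions:
# dt = dM_env / (Mdot_nuc + Mdot_wind)
dM = -np.diff(M_env)                         # envelope mass lost per step
dt = dM / (Mdot_nuc[:-1] + Mdot_wind)        # years
t = np.concatenate(([0.0], np.cumsum(dt)))   # elapsed time at each solution
print(t)  # a larger Mdot_wind compresses the decay, as in Figs. 2 and 3
```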
In the rising phase, we assume that the envelope is in hydrostatic balance, but the energy generation rate is slightly larger, by a few to several percent, than required for thermal equilibrium, i.e., \(L_{\rm nuc}>L_{\rm ph}\). We obtained envelope solutions consistent with the observational data. This part is plotted with a dotted line in Figure 2 (top).
The ignition mass is approximately obtained as the envelope mass at \(t=0\), when the photospheric temperature decreases to \(\log T\) (K) \(=4.9\), as shown in the middle panel of Figure 2. The envelope mass decreases by only a few percent from \(t=0\) to \(t=600\) day.
We assume Population II composition for the accreted matter, i.e., \(Z=0.004\) with/without additional CO enhancement \(Z_{\rm CO}\). For comparison, we calculated model D (\(Z=0.01\)) as listed in Table 1.
The absolute \(V\) magnitudes, \(M_{V}\), for the standard Johnson \(V\) bandpass are calculated from the photospheric temperature, \(T_{\rm ph}\), and luminosity, \(L_{\rm ph}\). The UVW2-band flux is calculated from the blackbody emission between 1120 and 2640 Å.
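As an illustration of the band-flux step, the UVW2 flux of a blackbody photosphere can be obtained by integrating the Planck function over 1120-2640 Å. The sketch below is ours, not the authors' code, and the temperature and luminosity values are placeholders.

```python
import numpy as np

H, C, KB = 6.626e-27, 2.998e10, 1.381e-16  # cgs constants
SIGMA_SB = 5.670e-5

def planck_lambda(wav_cm, T):
    """Planck radiance B_lambda in erg s^-1 cm^-2 cm^-1 sr^-1."""
    return 2.0 * H * C**2 / wav_cm**5 / np.expm1(H * C / (wav_cm * KB * T))

def band_fraction(T, lam1=1120e-8, lam2=2640e-8):
    """Fraction of the blackbody luminosity emitted between lam1 and lam2 (cm)."""
    lam = np.linspace(lam1, lam2, 2000)
    return np.trapz(np.pi * planck_lambda(lam, T), lam) / (SIGMA_SB * T**4)

T_ph = 1.0e4             # K, placeholder photospheric temperature
L_ph = 1.0e4 * 3.828e33  # erg/s, placeholder (~10^4 L_sun)
print(L_ph * band_fraction(T_ph))  # UVW2-band luminosity, erg/s
```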
### Light Curve Fitting
Table 1 lists our models. It shows, from left to right, the WD mass \(M_{\rm WD}\), chemical composition of the envelope \(X\), \(Y\), \(Z\), \(Z_{\rm CO}\), WD radius, ignition mass \(M_{\rm ig}\), distance modulus in the \(V\) band \((m-M)_{V}\), and assumed optically-thin wind mass-loss rates \(\dot{M}_{\rm wind}\).
The top panel in Figure 2 shows the light curve fitting of model A to the observations. The model light curve shows a flat optical peak, which corresponds to the most expanded phase of the photosphere, during which the photospheric temperature is as low as \(T_{\rm ph}<10,000\) K, as shown in the middle panel. We fit our theoretical curve to the observations to obtain the distance modulus in the \(V\) band, i.e., \((m-M)_{V}=m_{V}({\rm obs})-M_{V}({\rm model})=13.2\).
The envelope mass gradually decreases because of hydrogen burning at a rate of \(\dot{M}_{\rm nuc}\sim 1.7\times 10^{-7}\)\(M_{\odot}\) yr\({}^{-1}\). In the decay phase of \(T_{\rm ph}>10,000\) K, we assume the optically-thin wind mass-loss as done in the PU Vul model by Kato et al. (2011) and Kato et al. (2012). For simplicity, we assume a constant mass loss rate (\(\dot{M}_{\rm wind}\)). The \(V\) magnitude decays as \(T_{\rm ph}\) increases with time, while the photospheric luminosity \(L_{\rm ph}\) is almost constant. If we assume a larger mass-loss rate, the \(V\) magnitude decays faster because the envelope mass decreases more quickly.
From the resemblance of the light curve to PU Vul and the presence of emission lines that indicate optically thin mass loss, we consider that the \(g\) light curve is strongly contaminated by nebular emission, as often observed in the nebular phase of novae (e.g., Kato et al. (2012) for PU Vul; Hachisu & Kato (2006) for V1668 Cyg). In other words, the photospheric (continuum) component would be much fainter than the \(g\) light curve.
We plot three light curves with different wind mass-loss rates in the top panel of Figure 2. The mass loss starts at the point denoted by the small open circle in the middle panel. The model with \(\dot{M}_{\rm wind}=1\times 10^{-7}\)\(M_{\odot}\) yr\({}^{-1}\) is too slow and is rejected.
Figure 3 shows the light curve fittings of model B, model C, and model D. We obtain the distance modulus in the \(V\) band from the flat-peak phase, and the lower limit of mass-loss rates from the decay phase.
It is difficult to further constrain the mass loss rate in CN Cha. The bottom panel in Figure 2 shows the UV light curves of the three different mass-loss models. If we have UV observations in the decay phase, we can determine the mass-loss rate by comparing them with the model light curves.
In PU Vul, the UV 1455 Å band light curve obtained with IUE traces the temperature evolution in the decay phase. Kato et al. (2011) and Kato et al. (2012) determined the optically thin wind mass-loss rate to be \(\dot{M}_{\rm wind}=(2-5)\times 10^{-7}\)\(M_{\odot}\) yr\({}^{-1}\) for a \(\sim 0.6\)\(M_{\odot}\) WD.
The mass-loss rate obtained in PU Vul may be representative of mass-loss rates in extended nova envelopes for the particular case of no optically thick winds. We plot the model of \(\dot{M}_{\rm wind}=5\times 10^{-7}\)\(M_{\odot}\) yr\({}^{-1}\) with the thick solid black line in Figures 2 and 3. We thus exclude model C.
For a less massive WD (\(M_{\rm WD}\lesssim 0.55\)\(M_{\odot}\)), the envelope mass is larger while the nuclear burning rate \(\dot{M}_{\rm nuc}\) is smaller. Thus, the evolution timescale is much longer than those of models B and C. To reproduce a reasonable light curve for CN Cha, we would need to adopt a much larger \(\dot{M}_{\rm wind}\). Thus, WDs less massive than those in models B and C are unlikely.
### Distance Modulus
Figure 4 shows the distance modulus in the \(V\) band, \((m-M)_{V}\), calculated from the light curve fitting. The photospheric luminosity at the flat peak is brighter for more massive WDs, so \((m-M)_{V}\) becomes larger with increasing \(M_{\rm WD}\). The horizontal solid/dotted blue lines indicate \((m-M)_{V}=12.9\pm 0.3\), calculated from the Gaia eDR3 distance, \(d=3.05^{+0.19}_{-0.17}\) kpc (Bailer-Jones et al., 2021), and the scatter in the ASAS-SN \(V\) data. From this plot, we may exclude \(M_{\rm WD}\lesssim 0.5~{}M_{\odot}\) and \(M_{\rm WD}>0.6~{}M_{\odot}\). Thus, models A and B are reasonable for CN Cha.
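As a consistency check, the quoted distance modulus can be decomposed into a geometric term and an extinction term. The paper does not quote \(A_{V}\) explicitly; the value below is our inference, for illustration only:

\[(m-M)_{V}=5\log_{10}\!\left(\frac{d}{10\,{\rm pc}}\right)+A_{V}=5\log_{10}(305)+A_{V}\simeq 12.4+A_{V},\]

so \((m-M)_{V}=12.9\pm 0.3\) at \(d=3.05\) kpc corresponds to a modest extinction of \(A_{V}\approx 0.5\) mag toward CN Cha.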
For models A, B, and C, we adopt a WD (radius) in thermal balance with a relatively large mass-accretion rate of several \(\times 10^{-8}~{}M_{\odot}\) yr\({}^{-1}\) (Kato et al., 2020). For model D, we take a smaller WD radius, corresponding to a colder WD, for comparison, and increase the heavy element enrichment to \(Z=0.01\). The resultant distance modulus \((m-M)_{V}\) increases by 0.15 mag because model D is brighter than model C, mainly owing to its smaller WD radius. Note that model D corresponds to a Population I star and is disfavored as a model of CN Cha.
## 4 Discussion
We could not accurately obtain or constrain the mass accretion rate because our static-sequence approach cannot be applied to the early rising phase of a shell flash. However, the mass accretion rate corresponding to our ignition mass can be found in the literature.
Chen et al. (2019) presented hydrogen shell flash calculations and obtained an ignition mass of \(M_{\rm ig}=3.3\times 10^{-5}~{}M_{\odot}\) (recurrence time \(P_{\rm rec}=2,500\) yr) for a 0.6 \(M_{\odot}\) WD with \(Z=10^{-4}\) and CO-rich accretion at \(\dot{M}_{\rm acc}=1\times 10^{-8}~{}M_{\odot}\) yr\({}^{-1}\). This ignition mass is close to that of model A, which suggests that the mass accretion rate of CN Cha is around \(10^{-8}~{}M_{\odot}\) yr\({}^{-1}\).
Chen et al. (2019) also obtained the ignition mass of \(M_{\rm ig}=2.2\times 10^{-4}~{}M_{\odot}\) (\(P_{\rm rec}=22,000\) yr) for a 0.6 \(M_{\odot}\) WD with no CO enrichment, \(Z=10^{-4}\), and \(\dot{M}_{\rm acc}=1\times 10^{-8}~{}M_{\odot}\) yr\({}^{-1}\). Kato et al. (2020) obtained the ignition mass to be \(M_{\rm ig}=2.0\times 10^{-4}~{}M_{\odot}\) (\(P_{\rm rec}=10,000\) yr) for a 0.6 \(M_{\odot}\) WD with \(Z=0.001\), no CO enhancement, and \(\dot{M}_{\rm acc}=2\times 10^{-8}~{}M_{\odot}\) yr\({}^{-1}\). These values are roughly consistent with model C.
The above values indicate that the mass accretion rate of CN Cha is \(\dot{M}_{\rm acc}\sim 1\times 10^{-8}~{}M_{\odot}\) yr\({}^{-1}\).
## 5 Concluding Remarks
CN Cha is a rare nova with a long-lasting flat peak. Our 0.55-0.57 \(M_{\odot}\) WD models provide reasonable fits to the observed \(V\) light curve. This is the second well-observed flat-peak nova after PU Vul, which also hosts a low-mass WD of \(M_{\rm WD}\sim 0.6~{}M_{\odot}\).
On the other hand, some slow novae (V723 Cas, HR Del, and V5558 Sgr) have similar WD masses of \(M_{\rm WD}\sim 0.55\)-0.6 \(M_{\odot}\) (e.g., Hachisu and Kato, 2015), but they show violent multiple spike-like peaks instead of a flat peak.
We point out that this difference arises from the nature of the binary: wide or close. In a close binary like V723 Cas, the nova envelope at/near optical maximum extends beyond the binary orbit. In other words, the companion main-sequence star moves inside the nova envelope.
This idea is supported by the following theoretical implications: (1) In low-mass WDs, there are two possible nova evolutions, represented by an optically thick wind mass-loss envelope and a static envelope with no winds (Kato and Hachisu, 2009). The evolutions of these two envelope solutions do not cross each other because their envelope structures are very different. (2) A transition from the static to the wind mass-loss solution could occur if the companion star moves in the envelope, because the static envelope structure becomes close to that of the wind solution when both the centrifugal force and the companion's gravity are considered (Kato and Hachisu, 2011). The observed multiple spike-like peaks are interpreted as a relaxation process of the transition to the wind solution.
The main theoretical points of the present work are summarized as follows:
1. Most nova outbursts are accompanied by strong optically thick wind mass loss. The nova evolution is then fast, and the optical light curve shows a sharp peak for a white dwarf of mass \(M_{\rm WD}\gtrsim 0.55~{}M_{\odot}\) (e.g., Hachisu and Kato, 2015).
2. When the acceleration is too weak to drive optically thick winds (Kato and Hachisu, 2009), the nova envelope evolves very slowly and shows a flat optical peak, like PU Vul and CN Cha. This slow, static evolution occurs on a low-mass white dwarf of \(M_{\rm WD}\lesssim 0.7~{}M_{\odot}\) (Kato and Hachisu, 2011).
3. In the region between the above two cases (\(0.5~{}M_{\odot}\lesssim M_{\rm WD}\lesssim 0.7~{}M_{\odot}\)), a nova outburst begins first in a hydrostatic manner, and later it could change to an evolution with optically-thick wind mass-loss due to perturbation by the companion star in the nova envelope (Kato and Hachisu, 2011).
4. This hypothesis predicts that such a transition occurs only in close binaries like V723 Cas and
does not occur in wide binaries like PU Vul. CN Cha is the second example of a flat-peak nova in a wide binary and strengthens the idea of Kato & Hachisu (2011).

Thus, we encourage observational searches for slow novae. We suggest that long-lasting flat-peak novae appear only in long-period binaries, not in close binaries. Photometric and spectroscopic observations are highly valuable.
We are grateful to the anonymous referee for useful comments, which improved the manuscript. We also thank L. Lancaster for discussion on ASAS-SN data and T. Jayasinghe for updating ASAS-SN sky patrol data of CN Cha. We also thank the American Association of Variable Star Observers (AAVSO) for the archival data of CN Cha.
|
2305.01900 | Demonstrating repetitive non-destructive readout (RNDR) with SiSeRO
devices | We demonstrate so-called repetitive non-destructive readout (RNDR) for the
first time on a Single electron Sensitive Readout (SiSeRO) device. SiSeRO is a
novel on-chip charge detector output stage for charge-coupled device (CCD)
image sensors, developed at MIT Lincoln Laboratory. This technology uses a
p-MOSFET transistor with a depleted internal gate beneath the transistor
channel. The transistor source-drain current is modulated by the transfer of
charge into the internal gate. RNDR was realized by transferring the signal
charge non-destructively between the internal gate and the summing well (SW),
which is the last serial register. The advantage of the non-destructive charge
transfer is that the signal charge for each pixel can be measured at the end of
each transfer cycle and by averaging for a large number of measurements
($\mathrm{N_{cycle}}$), the total noise can be reduced by a factor of
1/$\mathrm{\sqrt{N_{cycle}}}$. In our experiments with a prototype SiSeRO
device, we implemented nine ($\mathrm{N_{cycle}}$ = 9) RNDR cycles, achieving
around 2 electron readout noise (equivalent noise charge or ENC) with spectral
resolution close to the Fano limit for silicon at 5.9 keV. These first results
are extremely encouraging, demonstrating successful implementation of the RNDR
technique in SiSeROs. They also lay the foundation for future experiments with more
optimized test stands (better temperature control, larger number of RNDR
cycles, RNDR-optimized SiSeRO devices) which should be capable of achieving
sub-electron noise sensitivities. This new device class presents an exciting
technology for next generation astronomical X-ray telescopes requiring very
low-noise spectroscopic imagers. The sub-electron sensitivity also adds the
capability to conduct in-situ absolute calibration, enabling unprecedented
characterization of the low energy instrument response. | Tanmoy Chattopadhyay, Sven Herrmann, Peter Orel, Kevan Donlon, Gregory Prigozhin, R. Glenn Morris, Michael Cooper, Beverly LaMarr, Andrew Malonis, Steven W. Allen, Marshall W. Bautz, Chris Leitz | 2023-05-03T05:23:26Z | http://arxiv.org/abs/2305.01900v2 | # Demonstrating repetitive non-destructive readout (RNDR) with SiSeRO devices
###### Abstract
We demonstrate so-called repetitive non-destructive readout (RNDR) for the first time on a Single electron Sensitive Readout (SiSeRO) device. SiSeRO is a novel on-chip charge detector output stage for charge-coupled device (CCD) image sensors, developed at MIT Lincoln Laboratory. This technology uses a p-MOSFET transistor with a depleted internal gate beneath the transistor channel. The transistor source-drain current is modulated by the transfer of charge into the internal gate. RNDR was realized by transferring the signal charge non-destructively between the internal gate and the summing well (SW), which is the last serial register. The advantage of the non-destructive charge transfer is that the signal charge for each pixel can be measured at the end of each transfer cycle and, by averaging over a large number of measurements (N\({}_{\rm cycle}\)), the total noise can be reduced by a factor of 1/\(\sqrt{\rm N_{cycle}}\). In our experiments with a prototype SiSeRO device, we implemented nine (N\({}_{\rm cycle}\) = 9) RNDR cycles, achieving around 2 electron readout noise (equivalent noise charge or ENC) with spectral resolution close to the Fano limit for silicon at 5.9 keV. These first results are extremely encouraging, demonstrating successful implementation of the RNDR technique in SiSeROs. They also lay the foundation for future experiments with more optimized test stands (better temperature control, larger number of RNDR cycles, RNDR-optimized SiSeRO devices), which should be capable of achieving sub-electron noise sensitivities. This new device class presents an exciting technology for next-generation astronomical X-ray telescopes requiring very low-noise spectroscopic imagers. The sub-electron sensitivity also adds the capability to conduct in-situ absolute calibration, enabling unprecedented characterization of the low-energy instrument response.
Single electron Sensitive Read Out (SiSeRO), X-ray detector, X-ray charge-coupled devices, repetitive non-destructive readout (RNDR), readout electronics, instrumentation.
*Tanmoy Chattopadhyay, [email protected]
## 1 Introduction
Repetitive non-destructive readout (RNDR) was first experimentally demonstrated by Kraft et al. 1995,[1] who achieved a readout noise better than the noise of a single amplifying stage by moving the same charge multiple times between two readout nodes (floating gate amplifiers). In recent times, the RNDR technique has seen applications in various silicon sensor technologies, making them sensitive to single electrons, e.g., skipper CCDs[2, 3] and Depleted P-channel Field-Effect Transistors (DEPFETs[4, 5]). The final noise in RNDR measurements is given by \(\sigma_{\rm N}=\sigma/\sqrt{\rm N_{cycle}}\), where \(\sigma\) is the noise for a single read and \(\rm N_{cycle}\) is the number of repetitive cycles. The final noise therefore depends solely on the number of RNDR cycles, the single-read noise, and the leakage current (noise). The Single electron Sensitive Read Out (SiSeRO) readout stage developed by MIT Lincoln Laboratory (MIT-LL) is a novel charge detector for X-ray CCDs, with a working principle similar in some respects to DEPFET sensors[6, 7] and extremely high responsivity floating-gate amplifiers.[8] By transferring the signal charge from and to the buried gate, SiSeRO devices can
operate in RNDR mode. The advantage of this specific technology is that it can be used not only to augment CCDs but also to build active pixel sensor (APS) arrays in which each pixel includes a SiSeRO, thereby enabling extremely low noise performance at high readout speeds, without the large distance charge transfer needed for CCDs.
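The 1/\(\sqrt{\rm N_{cycle}}\) scaling introduced above can be illustrated with a toy Monte Carlo. This is a sketch only, with numbers chosen to echo the measurements reported in Sec. 2.3 rather than taken from the actual analysis.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma_single = 6.2   # e- RMS, assumed single-read noise (placeholder)
n_cycle, n_pixels = 9, 100_000

# Each pixel is read n_cycle times; the estimator is the mean of the reads.
reads = rng.normal(0.0, sigma_single, size=(n_pixels, n_cycle))
averaged = reads.mean(axis=1)

print(averaged.std())                    # empirical noise after averaging
print(sigma_single / np.sqrt(n_cycle))   # expected sigma / sqrt(N) ~ 2.07
```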
The devices studied here consist of a CCD pixel array with a SiSeRO amplifier straddling the n-channel of the CCD's output register (see the simplified 3D schematic in Fig. 1a). When a charge packet is present in the CCD channel beneath the SiSeRO p-MOSFET, it modulates the transistor drain current, which is then read out by the readout electronics [9, 10].
For a single polysilicon MOSFET in SiSeROs, the internal gate minimizes parasitic capacitance on the sense node, resulting
in a high conversion gain and minimized noise. Recently, for a prototype CCD device (Fig. 1b) with a buried-channel SiSeRO at the output stage, a read noise of 6 \(\mathrm{e}_{\mathrm{RMS}}^{-}\) was achieved. This was further improved to 4.5 \(\mathrm{e}_{\mathrm{RMS}}^{-}\) by applying digital filtering techniques to minimize the 1/f noise.[11] The FWHM of the 5.9 keV Mn K\({}_{\alpha}\) line was measured to be \(\sim\) 132 eV. The test device, from the CCID-93 family developed by MIT-LL, has an active area of \(\sim\)4 mm \(\times\) 4 mm with a 512 \(\times\) 512 array of 8 \(\mu\)m pixels. Figure 1c shows a typical SiSeRO digital video waveform at \(\sim\)625 kpixel/s readout rate (each pixel is \(\sim\)1600 ns long) sampled every 10 ns. The change from the baseline (the first shaded region from the left) to the signal level (the second shaded region) after the charge packet is transferred is proportional to the signal strength. The signal amplitude is extracted by taking the difference between these two levels (correlated double sampling or CDS) for each pixel, which is then used to generate the 2D images.
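The correlated double sampling step can be sketched as follows; the window positions are illustrative, while in practice they are tuned to the waveform of Fig. 1c.

```python
import numpy as np

def cds_amplitude(waveform, baseline_sl, signal_sl):
    """CDS estimate: mean(signal window) - mean(baseline window).

    waveform    : 1D array of ADC samples for one pixel (~160 samples at 10 ns)
    baseline_sl : slice selecting the pre-transfer baseline region
    signal_sl   : slice selecting the post-transfer signal region
    """
    return waveform[signal_sl].mean() - waveform[baseline_sl].mean()

# Illustrative pixel: the charge transfer near mid-pixel shifts the
# drain-current level by `amp` ADU on top of Gaussian sampling noise.
amp = 50.0
pix = np.concatenate([np.zeros(80), np.full(80, amp)])
pix += np.random.normal(0.0, 3.0, pix.size)
print(cds_amplitude(pix, slice(10, 70), slice(90, 150)))  # ~ amp
```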
Since the charge packet in the internal gate is unaffected by the readout process, SiSeROs offer the potential for repetitive non-destructive readout (RNDR). The charge can be moved around like any charge packet in a CCD. This can, in principle, reduce the noise below the 1/f barrier and deep into the sub-electron regime. As a proof of concept with our prototype SiSeRO device (Fig. 1b), we employed clocking to the output gate and summing well to pull the charge back from the buried gate under the SiSeRO, carrying out nine repetitive transfers of the charge signal. The results are highly encouraging and suggest the potential for significant improvements in performance with future RNDR optimized SiSeRO devices.
X-ray Charge Coupled Devices (CCDs)[12, 13, 14] have been the primary detector technology for soft X-ray instrumentation for more than three decades of X-ray astronomy. The latest generation of X-ray CCDs with advanced readout electronics exhibits impressive readout speeds and noise performance.[15, 16, 17, 18, 19, 20, 21] However, CCDs still lack very high readout speed, region-of-interest (ROI) readout, and radiation hardness. Other competing technologies, for example Hybrid CMOS detectors (HCDs[22, 23, 24]) and Monolithic CMOS (MCMOS[25]) detectors, can provide very high readout speeds. These detector technologies, however, either lack low-noise performance, in the case of HCDs, or sufficient detection efficiency beyond 5 keV, in the case of MCMOS. SiSeRO with RNDR can provide very low-noise readout (at sub-electron levels). In the future, RNDR-optimized SiSeRO devices with a SiSeRO for every pixel might remedy all of the aforementioned weaknesses of existing X-ray detection technologies, providing the capability to combine full-frame, very low-noise readout with high-speed ROI readout in the same observation, while minimizing the sensitivity to radiation-induced displacement damage. The RNDR technique also opens opportunities for unprecedentedly precise gain calibration,[3] where the instrument gain can be corrected in situ by identifying individual electron bumps in the energy spectrum. This will be especially useful for characterizing the low-energy response of instruments (\(<1\) keV), where much astrophysical discovery space resides (see footnotes i and ii).
Footnote i: [https://axis.astro.umd.edu/](https://axis.astro.umd.edu/)
Footnote ii: [https://www.lynxobservatory.com/](https://www.lynxobservatory.com/)
In the next sections, we give a brief description of our experimental setup, the technical details of the RNDR technique, and our experimental results. A summary and discussion of future plans can be found in Sec. 3.
## 2 Repetitive non-destructive readout (RNDR) in SiSeROs
### Experimental setup
The experimental setup (also known as the 'Tiny Box'; see Chattopadhyay et al. 2020[26] for details) is shown in Fig. 2. A compact (13 cm \(\times\) 15 cm \(\times\) 6.5 cm) aluminum vacuum chamber houses the X-ray detector. The detector is mounted on an aluminum block, which is epoxied onto a thermoelectric cooler (TEC) used to cool the detector. A liquid cooling plate on the back of the bottom flange removes the heat deposited by the TEC. A Proportional-Integral-Derivative (PID) algorithm controls the detector temperature with better than 0.1\({}^{\circ}\)C accuracy. A beryllium X-ray entrance window on the top flange allows X-ray photons to illuminate the detector.
As mentioned earlier, for this experiment we used an MIT-LL CCID-93 prototype X-ray CCD with a buried-channel SiSeRO at the output stage. The readout electronics[9] consist of a custom preamplifier board (the circuit board in Fig. 2) and a commercial Archon controller (the black box in the figure). The preamplifier uses drain current readout, where an I2V amplifier first converts and amplifies the SiSeRO output, and a differential driver (at the second stage) converts the output to a fully differential signal. The Archon,[27] procured from Semiconductor Technology Associates, Inc. (STA; see footnote iii), provides the required bias and clock signals to run the detector and digitizes the output signal.
Footnote iii: [http://www.sta-inc.net/archon/](http://www.sta-inc.net/archon/)
### Repetitive non-destructive readout (RNDR) technique
Since in a SiSeRO the charge packet remains unaffected by the readout process (it is not transferred to a doped node as in conventional CCDs), we explored the possibility of utilizing RNDR by moving the charge packet multiple times between the internal channel and the adjacent output gate. This technique has been demonstrated for DEPFET devices, yielding sub-electron read noise,[4, 5] and related efforts have been demonstrated on skipper CCDs.[2] An advantage of repetitive readout of the same charge signal is that the read noise can be reduced significantly, resulting in extremely low noise.
Figure 2: The experimental setup used for RNDR experiment. A CCID-93 buried channel SiSeRO device is mounted inside the chamber. A beryllium window mounted on the top flange (not shown here) is used for X-ray entrance.
It should also be noted that this technique keeps the full signal range intact, an important aspect for X-ray detectors, where the signal range can span two orders of magnitude (0.1-10 keV).
In Fig. 3, we show a simplified schematic of how RNDR is implemented in our setup, along with the waveform of one pixel in RNDR readout. Here we implemented nine repetitive readout cycles, which increases the overall readout time nine-fold (readout rate \(\sim\)63 kpixel/s). In the current setup, the detector temperature is limited to -23\({}^{\circ}\)C; the increase in readout time therefore results in a larger leakage contribution (which limits the repetitive readout to nine cycles in this experiment). For each repetitive cycle, we first move the charge from the internal channel back to the output gate (OG; see Fig. 3a) by applying a positive clock potential (greater than the channel potential, which is around 2 V) to the OG. Note that in normal (non-RNDR) applications, OG is connected to a positive DC bias (0.5 V). In RNDR, however, to move the charge back and forth, we clock OG between 0.5 V (OGLow) and 4 V (OGHigh). The last serial clock (Summing Well or SW in CCID-93) is then made high while OG is simultaneously made low, moving the charge to SW (Fig. 3a). As OG settles down to OGLow, this introduces the individual baselines (B\({}_{2}\) to B\({}_{9}\) in Fig. 3b) seen in the waveform. The charge packet is then moved back to the internal channel by changing the SW potential to SWLow (\(<\)OGLow), which introduces the individual signal regions (S\({}_{2}\) to S\({}_{9}\) in Fig. 3b). After the final readout, the reset transistor (see Fig. 1a) is opened and a positive potential on the reset drain removes the charge from the internal channel, making the detector ready for the charge transfer from the next pixel.
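Schematically, the per-cycle amplitudes and their running average can be computed as below; the window placement is illustrative, mirroring the B\({}_{k}\) and S\({}_{k}\) regions of Fig. 3b.

```python
import numpy as np

def rndr_estimates(waveform, baseline_slices, signal_slices):
    """Per-cycle CDS amplitudes and their running average over RNDR cycles."""
    per_cycle = np.array([waveform[s].mean() - waveform[b].mean()
                          for b, s in zip(baseline_slices, signal_slices)])
    running = np.cumsum(per_cycle) / np.arange(1, per_cycle.size + 1)
    return per_cycle, running  # running[-1] has noise ~ sigma / sqrt(N_cycle)

# Example with nine synthetic cycles of 100 samples (50 baseline + 50 signal):
amp, n = 30.0, 9
wav = np.concatenate([np.r_[np.zeros(50), np.full(50, amp)] for _ in range(n)])
wav += np.random.normal(0.0, 5.0, wav.size)
b_sl = [slice(i * 100 + 5, i * 100 + 45) for i in range(n)]
s_sl = [slice(i * 100 + 55, i * 100 + 95) for i in range(n)]
print(rndr_estimates(wav, b_sl, s_sl)[1])  # converges toward `amp`
```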
### Repetitive non-destructive readout (RNDR) results
We first discuss the read noise measurements obtained using the RNDR technique. The read noise is calculated by measuring the width of the signal distribution in the overclocked pixels, which are an array of over-scanned pixels at the end of each pixel row. The amount of signal charge in these pixels is expected to be negligibly small, so the measured noise should be entirely due to the readout circuitry.
Figure 3: (a) Schematic of the RNDR technique. The p-MOSFET SiSeRO transistor sits next to the output gate (OG) and the last serial register gate (SW, or summing well), allowing the signal charge to be moved between the buried back gate and SW for repeated measurement. (b) The waveform of one pixel over nine repetitive readout cycles, showing the baseline (B) and signal (S) measurements for each of the nine cycles.
In Fig. 4a, we show the charge distributions obtained after averaging over consecutive repetitive cycles. For example, the distribution shown in black (first from the left) is obtained at the end of the first cycle, while the distribution in red (rightmost) is obtained at the end of the ninth cycle, after averaging over all nine measurements. The distributions all peak at zero ADU; however, for display purposes we add a constant offset of 25 ADU to the consecutive distributions. The distributions are then fitted with a Gaussian model to quantify their widths. In Fig. 4b, we show the consecutively averaged read noise measurements (red circles) along with the nine individual results (blue squares). The individual cycles yield similar noise measurements (shown by the blue dashed line fitted to the data), suggesting that the transfer of the signal charge into and out of the back gate of the transistor is sufficiently efficient. Note that there is a slight difference between the first and the second measurement. This stems from unequal baseline sampling for the first cycle: the first baseline follows the reset of the internal channel, whereas for the remaining cycles the baselines follow the OG clock pulse. The widths of the reset and OG pulses differ somewhat, resulting in unequal widths of the baseline regions. The red dashed line shows a fit of a power-law model, \(N_{\rm RNDR}^{\alpha}\), to the averaged measurements, which yields a best-fit value of \(\alpha=-0.499\). We achieved a read noise of 2.06 \(\rm e_{RMS}^{-}\) at the end of the ninth RNDR cycle.
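The noise estimation described above (a Gaussian fit to the overclocked-pixel histogram, followed by a power-law fit versus the number of averaged cycles) can be sketched as follows; this is our illustration, not the pipeline used for the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, a, mu, sigma):
    return a * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

def read_noise(overclocks, nbins=100):
    """Fit a Gaussian to the overclocked-pixel histogram; |sigma| is the
    read noise in ADU (convert with the gain to e- RMS)."""
    samples = np.asarray(overclocks, dtype=float)
    hist, edges = np.histogram(samples, bins=nbins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    popt, _ = curve_fit(gaussian, centers, hist,
                        p0=[hist.max(), samples.mean(), samples.std()])
    return abs(popt[2])

def fit_alpha(n_cycles, sigmas):
    """Slope of log(sigma) vs log(N): expect alpha ~ -0.5 for ideal RNDR."""
    alpha, _ = np.polyfit(np.log(n_cycles), np.log(sigmas), 1)
    return alpha
```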
The resulting spectra of the Mn K\({}_{\alpha}\) (5.9 keV) and K\({}_{\beta}\) (6.4 keV) lines generated from the single-pixel events are shown in Fig. 5 for the 1\({}^{\rm st}\) and 9\({}^{\rm th}\) RNDR cycles. To generate these spectra, the X-ray images were corrected for bias, by subtracting the overclocked region for each frame, and then for dark current, by subtracting the dark current obtained after averaging over dark frames.
Figure 4: Read noise from the RNDR analysis. (a) Distribution of charge (in digital units, ADU) obtained from the overclocked pixels. The distributions are plotted in increasing order of the RNDR cycles from left to right. A constant offset of 25 ADU is added to the distributions for display purposes. With each RNDR cycle, the distribution becomes narrower. (b) The blue points show the read noise from the nine individual measurements, while the red points are obtained by averaging over consecutive measurements. The red data points follow the expected 1/\(\sqrt{N}\) trend. The read noise for the first cycle is measured to be 6.17 \(\rm e_{RMS}^{-}\), while at the end of the ninth cycle the read noise has improved to 2.06 \(\rm e_{RMS}^{-}\). The error bars are the 1\(\sigma\) uncertainties on the noise measurements.
An event list was generated from the corrected image frames, with each event consisting of the signal amplitudes of the 9 pixels around every local maximum in the spatial distribution (presumably due to an X-ray interaction with silicon). The selection of such events is based on a primary threshold (7 times the read noise). We apply a secondary threshold (2.6 times the read noise) to determine whether an event contains signal charge in the pixels surrounding the center, and we grade the events depending on the number of pixels containing such extra charge. Spectra for each event grade are generated by adding the charge in the adjacent pixels exceeding the secondary threshold. The FWHM at 5.9 keV at the end of the 9\({}^{\rm th}\) RNDR cycle is measured to be around 124 eV, whereas the single-read measurement is around 143 eV. Note that the Fano limit for silicon at 5.9 keV is \(\sim\)119 eV.
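A simplified version of this event extraction and grading logic is sketched below; the thresholds follow the text, while the grading scheme here is a generic stand-in for the actual grade definitions.

```python
import numpy as np

def extract_events(frame, read_noise, k_primary=7.0, k_secondary=2.6):
    """Simplified X-ray event finder on a bias/dark-corrected frame (2D array).

    Local maxima above the primary threshold define events; charge in the
    surrounding 3x3 pixels above the secondary threshold is summed, and a
    grade-like index counts how many neighbors carry such extra charge."""
    t1, t2 = k_primary * read_noise, k_secondary * read_noise
    events = []
    for y in range(1, frame.shape[0] - 1):
        for x in range(1, frame.shape[1] - 1):
            patch = frame[y - 1:y + 2, x - 1:x + 2]
            if frame[y, x] < t1 or frame[y, x] < patch.max():
                continue  # not a local maximum above the primary threshold
            above = patch[patch >= t2]
            energy = above.sum()            # summed charge of the event
            n_split = above.size - 1        # neighbors with extra charge
            events.append((x, y, energy, n_split))
    return events
```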
In order to examine the charge transfer efficiency with each transfer cycle, we compare the measured gain in digital units (ADU) at 5.9 keV as a function of RNDR cycles (see Fig. 6a). The blue points show the gains for the nine individual measurements, while the red points are obtained when we average over consecutive measurements. The gains remain fairly constant, implying no charge loss during the repetitive transfer of charge. As mentioned earlier, the baseline region of the first transfer cycle differs slightly from the rest, giving rise to a slight change in the gain (and therefore the read noise) for the first measurement. The measured FWHMs at 5.9 keV as a function of RNDR cycles are shown in Fig. 6b. The individual measurements (blue data points) all lie within the \(\sim\)1\(\sigma\) error bars of each other. The measurements obtained from the RNDR cycles (red) show a clear improvement in the spectral resolution with this method, approaching the Fano limit (119 eV) for these energies in silicon. With the leakage current dominating the total noise after the fourth RNDR cycle, we do not see any further improvement in the FWHM. We note that the read noise is estimated from the overclocked pixel array, which has a negligible leakage current contribution, so the leakage current does not affect the read noise measurements. In future work, with an advanced setup providing better control of the detector temperature, the leakage current should be negligibly low, even while keeping the number of RNDR cycles high.
Figure 5: Spectrum obtained at the beginning (1\({}^{\rm st}\) RNDR cycle) (a) and at the end of the 9\({}^{\rm th}\) RNDR cycle (b), showing the Mn K\({}_{\alpha}\) (5.9 keV) and K\({}_{\beta}\) (6.4 keV) lines from a \({}^{55}\)Fe radioactive source for single-pixel (grade 0) events. The measured FWHM at 5.9 keV improves from 143 eV to 124 eV. The Fano limit for silicon at 5.9 keV is 119 eV. The corresponding read noise values for these two measurements are 6.17 and 2.06 \(\rm e^{-}_{RMS}\), respectively.
## 3 Summary and future plans
The SiSeRO amplifier, developed by MIT Lincoln Laboratory, is a novel technology for the output stage of X-ray CCDs. The charge packet remains unaffected in the readout process, which offers the possibility to transfer the charge non-destructively between the output stage amplifier and the transfer gates, known as RNDR. At the end of each cycle, the same charge packet can be measured and, over many such measurements, the final average noise can be reduced by a factor of \(1/\sqrt{\mathrm{N_{cycle}}}\).
In this work, we implemented the RNDR technique with nine repetitive cycles on a prototype SiSeRO device, obtaining 2.06 \(\mathrm{e^{-}_{RMS}}\) read noise at the end of the ninth cycle. This represents a factor of 3 (the square root of 9) improvement over the single-read noise of around 6 \(\mathrm{e^{-}_{RMS}}\). The individual cycles are read out at 625 kHz over the nine cycles, for an effective readout speed of \(\sim\)63 kHz. Due to the limitations of our current setup, we limited the repetitive readout to nine cycles in order to keep the accumulated leakage current low.
Significant improvement in noise performance, potentially to sub-electron levels, can in principle be achieved by increasing the number of repetitive cycles. In future work with RNDR-optimized SiSeRO devices, we plan to achieve these goals with the following upgrades in our setup, readout electronics and detector output stage:
* A new X-ray test setup, currently under development, will support operation at temperatures down to -100\({}^{\circ}\)C. This should reduce the leakage current noise and allow for a larger number of RNDR cycles.
Figure 6: (a) Measured gain in digital units (ADU) at 5.9 keV as a function of RNDR cycles. The blue points show the gains for the nine individual measurements, while the red points are obtained after averaging over consecutive measurements. The gains remain constant, implying no charge loss during the repetitive transfer of charge. (b) Measured FWHM at 5.9 keV as a function of RNDR cycles. The individual measurements (blue data points) lie within the \(\sim\)1\(\sigma\) error bars of each other. The measurements obtained from the RNDR cycles (red) show a clear improvement in spectral resolution, with the results approaching the Fano limit (119 eV) for these energies in silicon. With the leakage current dominating the total noise (leakage noise \(\gg\) read noise), the FWHM measurements do not improve after the fourth cycle.
* We have developed an ASIC-based system at Stanford to read out these devices (Herrmann et al. 2020[19] and Orel et al. 2022[21]). The ASIC is expected to provide higher readout speed and lower noise compared to the existing discrete electronics. With these changes, it should be possible to achieve noise as low as 1 \(\mathrm{e}_{\mathrm{RMS}}^{-}\) or better, even with first-generation SiSeRO devices.
* For our proof-of-principle measurements, we employed a simple clocking of the output gate and summing well to pull the charge back from the buried gate under the SiSeRO. A potential improvement is to utilize two SiSeRO p-MOSFETs next to each other, allowing the charge packet to be transferred between the two. Such a design would allow fast transfer of the signal charge between the transistors and higher operational RNDR switching frequencies. This concept is comparable to that of RNDR-optimized DEPFET detectors.[4] We will conduct device simulations with the MIT Lincoln Laboratory process development kit to investigate various arrangements and layouts. Our SiSeRO building block can be the basis for other RNDR devices: for example, a cluster of multiple SiSeRO stages at the output should allow us to perform additional measurements of the same signal charge, resulting in a reduction of noise by an additional 1/\(\sqrt{\mathrm{N}}\) while maintaining a high frame rate. One of our long-term goals is to build SiSeRO active pixel sensor (APS) arrays in which each pixel includes the basic SiSeRO building block. This will allow extremely low noise performance at high readout speeds. In our preliminary layout design, we could accommodate two-SiSeRO structures in a 24 \(\mu\)m pixel. In the first devices, we plan to include 16 \(\times\) 16 pixels, where the readout can be done using our existing 8-channel ASIC wire-bonded to one side of the matrix, while matrix control can be performed with a separate ASIC on the other side.
SiSeROs with RNDR offer an exciting solution for future low-noise spectroscopic imagers for next-generation astronomical X-ray telescopes. In addition, RNDR-optimized SiSeROs provide opportunities for the precise gain calibration of these detectors,[3] allowing in-situ absolute calibration at low X-ray energies (\(<\)1 keV). In combination, these characteristics should help open up a new discovery space for X-ray astronomy.
### Acknowledgments
This work has been supported by NASA grants APRA 80NSSC19K0499 "Development of Integrated Readout Electronics for Next Generation X-ray CCDs" and SAT 80NSSC20K0401 "Toward Fast, Low-Noise, Radiation-Tolerant X-ray Imaging Arrays for Lynx: Raising Technology Readiness Further."
|
2302.03168 | Multiplicity of northern bright O-type stars with optical long baseline
interferometry | The study of the multiplicity of massive stars gives hints on their formation
processes and their evolutionary paths, which are still not fully understood.
Large separation binaries (>50 milliseconds of arc, mas) can be probed by
adaptive-optics-assisted direct imaging and sparse aperture masking, while
close binaries can be resolved by photometry and spectroscopy. However, optical
long baseline interferometry is mandatory to establish the multiplicity of
Galactic massive stars at the separation gap between 1 and 50 mas. In this
paper, we aim to demonstrate the capability of the new interferometric
instrument MIRC-X, located at the CHARA Array, to study the multiplicity of
O-type stars and therefore probe the full range of separation for more than 120
massive stars (H<7.5 mag). We initiated a pilot survey of bright O-type stars
(H<6.5 mag) observable with MIRC-X. We observed 29 O-type stars, including two
systems in average atmospheric conditions around a magnitude of H=7.5 mag. We
systematically reduced the obtained data with the public reduction pipeline of
the instrument. We analyzed the reduced data using the dedicated python
software CANDID to detect companions. Out of these 29 systems, we resolved 19
companions in 17 different systems with angular separations between ~0.5 and 50
mas. This results in a multiplicity fraction fm=17/29=0.59+/-0.09, and an
average number of companions fc=19/29=0.66+/-0.13. Those results are in
agreement with the results of the SMASH+ survey in the Southern Hemisphere.
Thirteen of these companions have been resolved for the first time, including
the companion responsible for the nonthermal emission in Cyg OB2-5 A and the
confirmation of the candidate companion of HD 47129 suggested by SMASH+. A
large survey on more than 120 northern O-type stars (H<7.5) is possible with
MIRC-X and will be fruitful. | Cyprien Lanthermann, Jean-Baptiste Le Bouquin, Hugues Sana, Antoine Mérand, John D. Monnier, Karine Perraut, Abigail J. Frost, Laurent Mahy, Eric Gosset, Michael De Becker, Stefan Kraus, Narsireddy Anugu, Claire L. Davies, Jacob Ennis, Tyler Gardner, Aaron Labdon, Benjamin Setterholm, Theo ten Brummelaar, Gail H. Schaefer | 2023-02-06T23:53:45Z | http://arxiv.org/abs/2302.03168v2 | # Multiplicity of northern bright O-type stars with optical long baseline interferometry
###### Abstract
Context:The study of the multiplicity of massive stars gives hints on their formation processes and their evolutionary paths, which are still not fully understood. Large separation binaries (\(>\)50 milliseconds of arc, mas) can be probed by adaptive-optics-assisted direct imaging and sparse aperture masking, while close binaries can be resolved by photometry and spectroscopy. However, optical long baseline interferometry is mandatory to establish the multiplicity of Galactic massive stars at the separation gap between 1 and 50 mas.
Aims:In this paper, we aim to demonstrate the capability of the new interferometric instrument MIRC-X, located at the CHARA Array, to study the multiplicity of O-type stars and therefore probe the full range of separation for more than 120 massive stars (\(H<7.5\) mag).
Methods:We initiated a pilot survey of bright O-type stars (\(H<6.5\) mag) observable with MIRC-X. We observed 29 O-type stars, including two systems in average atmospheric conditions around a magnitude of \(H=7.5\) mag. We systematically reduced the obtained data with the public reduction pipeline of the instrument. We analyzed the reduced data using the dedicated python software CANDID to detect companions.
Results:Out of these 29 systems, we resolved 19 companions in 17 different systems with angular separations between \(\sim 0.5\) and 50 mas. This results in a multiplicity fraction \(f_{\rm m}=17/29=0.59\pm 0.09\), and an average number of companions \(f_{\rm c}=19/29=0.66\pm 0.13\). Those results are in agreement with the results of the SMASH+ survey in the Southern Hemisphere. Thirteen of these companions have been resolved for the first time, including the companion responsible for the nonthermal emission in Cyg OB2-5 A and the confirmation of the candidate companion of HD 47129 suggested by SMASH+.
Conclusions:A large survey on more than 120 northern O-type stars (\(H<7.5\)) is possible with MIRC-X and will be fruitful.
## 1 Introduction
Massive stars are key components of the evolution of their host galaxy. They are the main producers of heavy elements, and the momentum and kinetic energy involved in their death have an influence on a large part of their galaxy (Zinnecker & Yorke 2007). They are also the progenitors of the compact objects that, when they merge, produce gravitational wave bursts that we can currently detect (Abbott et al. 2016).
However, their short lifetime (a few million years) and their rapid formation process (\(10^{5}\) years) make the observation of their early ages difficult (Tan et al. 2014). Indeed, their lifetime makes them rare, and so to observe a large number of massive stars (\(>100\)), one needs to look for them at significant distances, typically 1 to 3 kpc. In addition, the majority of young massive stars are still embedded in a cloud of gas and dust when they finish their formation process (Zinnecker 2006), making the observation of this formation step even harder. In consequence, the formation process of massive stars is still actively discussed.
Historically, the standard models of star formation could not explain the formation of stars with masses significantly higher than about 10 M\({}_{\odot}\). The main difficulty is overcoming the radiation pressure barrier produced by the protostar as soon as it starts burning nuclear fuel. Specific formation models are therefore needed to explain how massive stars can form. There are currently three main scenarios: 1) core accretion (Terquem 2001; Yorke 2002), which invokes a massive accretion disk to accrete more matter; 2) competitive accretion (Larson 1978; Zinnecker 1982; Bonnell & Bate 2002; Bonnell et al. 2003), in which nearby protostellar cores use their combined gravitational potential to attract matter from farther away than each individual gravitational potential would allow; and 3)
collision (Binney et al., 1988; Bonnell & Bate, 2005; Dale & Davies, 2006), in which intermediate-mass stars collide and merge to form a more massive star. These three formation models predict different multiplicity properties. Hence, the study of the multiplicity of massive stars after their formation should provide relevant constraints on these formation models.
Another motivation for the investigation of the multiplicity of massive stars, especially in the range of periods considered in this study, is the investigation of physical processes driven by the colliding winds. In particular, such systems are well suited for particle acceleration, hence the class of particle-accelerating colliding-wind binaries (PACWBs, De Becker & Raucq (2013); De Becker et al. (2017)). Such systems, mainly revealed by synchrotron radio emission, are likely contributors to the population of lower energy Galactic cosmic rays. Appropriate knowledge of their orbit is required to interpret their behavior and model their shock physics.
To probe the full range of orbital separations of systems situated at a typical distance of 2 kpc, one needs to use different observational techniques. The close companions (up to 0.5 milliseconds of arc, mas, Mahy et al., 2020) can be probed by photometry (eclipsing binaries) or spectroscopy (radial velocity, Sana et al., 2012, 2013; Barba et al., 2017), while wide companions (separation \(>\) 50 mas) can be probed with techniques such as adaptive-optics-assisted direct imaging, aperture masking, speckle imaging, and coronography (Turner et al., 2008; Mason et al., 2009; Reggiani et al., 2022). For separations between 1 and 50 mas, the only technique we can use is optical long baseline interferometry (OLBI) (see Fig. 1 in Sana (2017)). But until recently, this technique was limited by its sensitivity and could only be applied to a modest sample of massive stars.
The advent of the PIONIER (Precision Integrated-Optics Near-infrared Imaging ExpeRiment) instrument at the Very Large Telescope Interferometer (VLTI) (Le Bouquin et al., 2011) enabled the Southern MAssive Stars at High angular resolution survey (SMASH+), the first large systematic interferometric survey of massive stars in the Southern Hemisphere (Sana et al., 2014), probing the missing range of separations needed for a complete statistical study of the multiplicity of massive stars. With this survey, Sana et al. (2014) observed 96 southern O-type star systems, nearly reaching the 100 targets required to obtain a statistical error \(<\)5% over the entire range of the multiplicity fraction. The observational constraints brought by the SMASH+ survey, especially the abundance of companions with separations smaller than 100 AU, are qualitatively in overall agreement with the core accretion model leading to disk fragmentation. However, the statistics on subgroups of stars, such as mass or evolutionary stage, are too low to obtain robust conclusions. Therefore, we aim to perform a similarly large survey (120 objects) in the Northern Hemisphere to double the total sample.
To observe more than 100 O-type stars in the Northern Hemisphere, one needs to reach a limiting magnitude in the J-band (the spectral band available in the GOSC catalog, Maiz Apellaniz et al., 2013) of \(J=7.5\), as shown in Fig. 1. In this figure, the limit of declination \(>-20^{\circ}\) corresponds to the limit of observability of the CHARA (Center for High Angular Resolution Astronomy) Array. Thanks to the recent implementation of the MIRC-X (Michigan InfraRed Combiner-eXeter) instrument (Kraus et al., 2018; Anugu et al., 2020; Lanthermann et al., 2019) at the CHARA Array (ten Brummelaar et al., 2016), located at Mount Wilson Observatory, USA, this magnitude is now reachable in the H-band with the OLBI technique in the Northern Hemisphere. As O-type stars are hot stars, under typical reddening conditions their magnitudes remain essentially the same in all infrared spectral bands (Martins & Plez, 2006), meaning that the magnitude limit of 7.5 in the J-band required to observe \(>\)100 O-type objects remains valid in the H-band.
In this paper, we present the results of the pilot survey performed on 29 systems, with the goal of demonstrating the feasibility of a large survey of more than 100 O-type stars with the CHARA/MIRC-X instrument. We present the observations in Section 2, with the definition of the sample, the description of the observation campaign, and the data reduction process. We then describe the data analysis that we performed in Section 3. Section 4 presents our results. We perform a statistical analysis of the results in Section 5. We finally discuss the results in Section 6 and conclude in Section 7.
## 2 Observations
### Observation sample
We built our sample using the Galactic O-star catalog (GOSC, Maiz Apellaniz et al., 2013). We selected every O-star with a declination DEC \(>-20^{\circ}\) and with a magnitude in the J-band registered in GOSC of \(J<7.0\) mag. We chose the J-band criterion for several reasons. The first one is that the magnitude criteria available in GOSC are either in the B-band or the J-band. As we used the MIRC-X instrument, working in the H-band, the J-band is the closer of the two. We adopted the threshold value of 7.0 mag or brighter because, during this pilot study, the official limiting magnitude offered by MIRC-X was \(H=6.5\) mag. As only the Rayleigh-Jeans tail of O-type stars is observed in the J- and H-bands, their magnitudes are comparable in both bands. We took an extra 0.5 magnitude as a margin to make sure that our sample contains all targets observable with MIRC-X. We then looked for the H-band magnitude of the selected systems in the 2MASS All-Sky Catalog of Point Sources (Cutri et al., 2003) and performed the last selection on stars with a magnitude \(H<6.5\) to comply with the MIRC-X limiting magnitude. Our input sample is therefore magnitude limited. We note that some of the targets in our sample overlap with targets already observed by SMASH+, which can be used to validate our results.
Figure 1: Cumulative histogram of the number of O-type stars as a function of their magnitude in the J-band. In blue, for the whole sky. In orange, for a declination greater than \(-20^{\circ}\) to be reachable with the CHARA Array. Data from the Galactic O-star catalog (GOSC) (Maiz Apellániz et al., 2013).
### Observation campaign
The MIRC-X beam combiner operates in the J- and H-bands. The observations presented in this paper were performed only in the H-band (1.65 \(\mu\)m) because the J-band mode was still experimental during the pilot survey and only combines four telescopes. The six telescopes provide sufficient coverage of the uv-plane to constrain the multiplicity of a star in a single snapshot. This can also be done for data that combine only five telescopes when the conditions (weather, technical, operational) would not permit a six-telescope observation. The MIRC-X combiner allows different spectral resolutions. Our data were taken with the PRISM-50 configuration, providing a spectral resolving power of \(R\sim 50\). This configuration was chosen to optimize the sensitivity of the beam combiner and because it brings the Outer Working Angle (OWA) of MIRC-X to:
\[OWA=R\frac{\lambda}{B}=2.7\times 10^{-7}\ \mathrm{rad}\simeq 55\ \mathrm{mas} \tag{1}\]
where \(\lambda\) is the central wavelength, equal to \(1.6\times 10^{-6}\) m in the H-band, and \(B\) is the baseline length, equal to 330 m for the longest baseline at CHARA. This OWA allows us to fill the gap in angular separation that other techniques cannot reach. We note that the detection of companions with a separation larger than the OWA is still possible, but the flux ratio will be biased by bandwidth smearing, hampering an accurate measurement of the contrast (Hummel et al. 2016).
We note that the Inner Working Angle (IWA), the angular resolution of the instrument below which a binary cannot be resolved, is defined by:
\[IWA=\frac{\lambda}{2B}\simeq 0.55\ \mathrm{mas}. \tag{2}\]
This angular resolution is about a factor of two to three smaller than that of the SMASH+ survey, owing to the longer baselines of CHARA compared to the VLTI.
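As a quick numerical check of Eqs. (1) and (2), the minimal Python sketch below (our illustration, not part of the original analysis) evaluates the OWA and IWA for the PRISM-50 setup; the small offsets from the quoted \(\sim\)55 mas and \(\sim\)0.55 mas come from the rounding of \(\lambda\) and \(B\).

```python
import math

MAS_PER_RAD = 180.0 / math.pi * 3600.0 * 1000.0  # milliarcseconds per radian

R = 50        # spectral resolving power (PRISM-50)
lam = 1.6e-6  # central H-band wavelength [m]
B = 330.0     # longest CHARA baseline [m]

owa = R * lam / B * MAS_PER_RAD      # Eq. (1)
iwa = lam / (2.0 * B) * MAS_PER_RAD  # Eq. (2)

print(f"OWA = {owa:.0f} mas, IWA = {iwa:.2f} mas")  # ~50 mas and ~0.50 mas
```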
For the calibration strategy, we alternated a calibrator with a science target. The calibrators were chosen with the SearchCal tool (Chelli et al. 2016a), developed by the JMMC1. The selection criteria were that a calibrator needed to be at most 1.5 magnitudes brighter and at most 0.5 magnitudes fainter than the science target it would calibrate, and situated at a maximum angular distance of 3 degrees on the sky. Most of our calibration stars are of spectral type K III, for which a sufficiently accurate diameter (a few percent) can be estimated from the apparent photometry (Chelli et al. 2016b).
Footnote 1: [https://www.jmmc.fr/english/tools/proposal-preparation/search-cal/](https://www.jmmc.fr/english/tools/proposal-preparation/search-cal/)
The observations with MIRC-X were carried out during five runs spread over three observation semesters from 2018 to 2019. The observation time was obtained through NOAO2 (now called NOIRLab) community access time (program IDs: 2018A-M12/NOAO2, 2018B-M17/NOAO4, 2019A-M16/NOAO4; PI: C. Lanthermann). We also used one night in December 2017 during which O-type stars were observed as backup targets because the original program could not be executed (PI: S. Kraus).
Footnote 2: [https://noirlab.edu](https://noirlab.edu)
During this campaign, we obtained good-quality data on 29 O-type star systems, listed in Table 1 with information on the spectral type, distance, and the number of detected companions in various separation ranges, from the literature as well as those detected by this study. Figure 2 shows the histogram of the O-type star systems for which we obtained good-quality data as a function of their magnitude in the H-band. As the official magnitude limit in typical conditions is \(H=6.5\) mag, the bulk of observed objects lies around this limit. As the campaign progressed, however, the improved knowledge of the MIRC-X instrument acquired during commissioning allowed us to push for fainter targets, beyond the initial magnitude limit of the instrument. Ultimately, we could observe an O-type star with a magnitude up to \(H=8.1\) mag in excellent seeing conditions, as well as a couple of systems around \(H=7.5\) mag in normal seeing conditions. These latter observations are important to demonstrate that one can observe a large number of O-type stars with MIRC-X. Indeed, as shown in Fig. 1, we can observe up to 120 systems if a magnitude limit of \(H=7.5\) mag can be reached.
Usually, to calibrate a science target with the OLBI technique, we use "CAL1-SCI-CAL2" sequences, meaning that we observe a first calibrator (CAL1), then the science target (SCI), and finally a second calibrator (CAL2). CAL1 and CAL2 are unresolved targets within a reasonable distance of, and of similar apparent brightness to, the science target, as explained above. In this program, two consecutive science targets are often close enough in the sky and in magnitude that CAL2 of one science target can be used as CAL1 of the next one. We therefore chose to follow an observing sequence such as "CAL1-SCI1-CAL2-SCI2...," using the best-suited calibrator for each science target. This allows us to reduce the error due to calibration. When the same calibrator was best suited for two science targets observed one after the other (e.g., SCI1 and SCI2), we preferred to use another calibrator for the second science target instead of using the same calibrator for both (CAL1-SCI1-CAL1-SCI2). This reduces the risk that the results might be affected by a bad calibrator. A list of the calibrators observed but discarded because they are suspected to be multiple systems can be found in Appendix A.
### Data reduction
We used the MIRC-X data reduction pipeline (Anugu et al. 2020). We set the maximum integration time (max-integration-time keyword in the pipeline) for one calibrated file to 220 seconds. The max-integration-time sets the maximum time span one reduced file covers, with each file giving one measurement of the
Figure 2: Histogram of the H-band magnitude of our observed sample in green, the original sample planned for this pilot survey in orange, and the planned sample for the large survey in blue.
calibrated parameters, averaged over the given maximum integration time. This setting allows us to bin our data recording sequences of about 10 minutes into three files with a similar signal-to-noise ratio (S/N). This is needed because the change of the uv-coordinates due to the rotation of the Earth affects the observed squared visibility (V2). Indeed, V2 can change significantly on a timescale of 300 seconds for binaries at the limit of the OWA (see Fig. 3 for an example).
To remove outlier points caused by low-quality data, we applied a threshold on the S/N of 4 instead of the default value of 3. This limit allows us to select only data of sufficiently high quality, while preserving as much data as possible. To increase the quality of the reduced data, one would like to coherently sum the data over as long a time as possible. However, the atmospheric conditions limit the time over which one can coherently integrate, as the phase induced by the atmosphere blurs the fringes, thereby reducing the quality of the data. The optimum time interval over which we can sum data can, however, be different for V2 and the closure phase (CP). To find this optimum value, we reduced the data for different values of the coherent integration time (coherent keyword in the reduction pipeline), which is the number of frames added together to increase the photon S/N, a frame being one recorded image of the detector. The pipeline produces a measurement of the calibrated parameters for each of these coherently added frames, and it is these coherently added frames that are incoherently averaged in the final calibrated file. Then, for each baseline and each target, we plotted the mean value over the files of the V2 S/N and of the CP error as a function of the coherent integration time. We finally merged the V2 of the files with the higher S/N on V2 and the CP of the files with the lower error on CP, for most of the baselines or triplets. For V2 we chose the same coherent integration time throughout the night. For the CP we took the best coherent integration time for each target. The number of coherently added frames for every target is specified in Table 1, each frame integrating the flux for about 3 ms.
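The choice of the optimum coherent integration time can be illustrated as follows; every number in this sketch is a hypothetical placeholder, not a value from our reduction, and the real selection is made on the MIRC-X pipeline outputs.

```python
import numpy as np

# Hypothetical (placeholder) outcome of re-reducing the same night with
# different numbers of coherently added frames:
# frames -> (mean V2 S/N over files, mean CP error [deg] over files)
trials = {5: (12.3, 2.1), 10: (15.8, 1.6), 20: (17.2, 1.9), 40: (14.1, 2.8)}

frames = np.array(sorted(trials))
v2_snr = np.array([trials[n][0] for n in frames])
cp_err = np.array([trials[n][1] for n in frames])

best_v2 = frames[np.argmax(v2_snr)]  # one value kept for the whole night (V2)
best_cp = frames[np.argmin(cp_err)]  # chosen per target for the closure phases

print(f"coherent frames: {best_v2} for V2, {best_cp} for CP")
```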
The data reduction pipeline provides two options to unbias the CP. We chose the option that computes an estimation of the bias, as it does not require tuning any extra parameters (in contrast to the second method). This unbiasing method is explained in Appendix B of Anugu et al. (2020).
\begin{table}
\begin{tabular}{l l c c c c c c c} \hline \hline Name & Spectral type & \(H\) & \(d\) & SC & SIC & IC & WC & Total \\ & & (mag) & (kpc) & (\(<\)0.5 mas) & (0.5 - 50 mas) & (0.05\({}^{\prime\prime}\)-8\({}^{\prime\prime}\)) & \\ \hline Cyg OB2-5 A (BD+40 4220A) & O7lafer & 4,745 & 1.748 & 1 & 0 & 2 & 2 & 5 \\ Cyg OB2-9 (HIP 101419) & O4.51fr & 5.897 & 1.788 & 0 & 1 & 0 & 1 & 2 \\ Cyg OB2-10 (BD+41 3804) & 09.71ab\({}^{\dagger}\) & 5.839 & 1.376 & 0 & 0 & 1 & 3 & 4 \\ HD 17505 A & O6.51IIn(f) & 6.177 & 2.258 & 1 & 0 & 1 & 1 & 3 \\ HD 19820 (CC Cas) & O8.51III(n)((f)) & 6.003 & 1.135 & 1 & 0 & 2 & 0 & 3 \\ HD 24431 & O9III & 5.845 & 0.922 & 0 & 0 & 1 & 1 & 2 \\ HD 28446 A (1 Cam A) & O9.7IIn & 5.459 & 0.760 & 1 & 0 & 1 & 1 & 3 \\ HD 30614 (\(\alpha\) Cam) & O9Ia & 4.242 & 1.717 & 1 & 0 & 0 & 0 & 1 \\ HD 34078 (AE Aur) & O9.5V & 5.355 & 0.382 & 0 & 0 & 1 & 0 & 1 \\ HD 36861 (\(\lambda\) Ori A) & O8III((f)) & 3.769 & 0.438 & 0 & 0 & 1 & 1 & 2 \\ HD 45314 & O9:npe & 5.761 & 0.864 & 0 & 0 & 1 & 0 & 1 \\ HD 47129 & O8I+O7.5III\({}^{\ddagger}\) & 5.806 & 1.283 & 1 & 0 & 1 & 2 & 4 \\ HD 47432 (V689 Mon) & O9.7Ib & 5.949 & 1.596 & 0 & 0 & 0 & 1 & 1 \\ HD 47839 (15 Mon AaAb) & O7V+B1.5/2V\({}^{\star}\) & 5.322 & 3.974 & 0 & 0 & 0 & 1 & 1 \\ HD 167971 (MY Ser AaAb) & O8Iaf(n)+O4/5 & 5.315 & 1.339 & 1 & 0 & 1 & 0 & 2 \\ HD 188001 (9 Sge) & O7.51abf & 6.166 & 1.833 & 0 & 0 & 0 & 0 & 0 \\ HD 193322 AaAb & O9IV(n) & 5.688 & 1.001 & 1 & 0 & 1 & 3 & 5 \\ HD 195592 & O9.7Ia & 4.911 & 1.729 & 1 & 0 & 0 & 0 & 1 \\ HD 201345 & O89.2IV & 8.171 & 1.828 & 0 & 0 & 0 & 1 & 1 \\ HD 202214 AaAb & O9.5IV & 5.505 & 1.032 & 1 & 0 & 1 & 1 & 3 \\ HD 206183 & O9.5IV-V & 7.193 & 0.901 & 0 & 0 & 0 & 2 & 2 \\ HD 206267 AaAb & O6.5V((f))+O9/B0V & 5.254 & 0.789 & 1 & 0 & 0 & 4 & 5 \\ HD 207198 & O8.5II & 5.318 & 0.978 & 0 & 0 & 1 & 1 & 2 \\ HD 209975 (19 Cep) & O9Ib & 4.935 & 0.959 & 0 & 0 & 0 & 4 & 4 \\ HD 210809 & O9Iab & 7.401 & 3.661 & 0 & 0 & 0 & 0 & 0 \\ HD 210839 (\(\lambda\) Cep) & O6.5II(nfp) & 4.618 & 0.832 & 0 & 0 & 0 & 0 & 0 \\ HD 217086 & O7Vnn(f)z & 6.100 & 0.830 & 0 & 0 & 0 & 2 & 2 \\ HD 228779 & O9Iab\({}^{\dagger}\) & 5.834 & 1.653 & 0 & 0 & 0 & 0 & 0 \\ HD 229196 & O6II(f) & 6.079 & 1.720 & 0 & 0 & 1 & 0 & 1 \\ \hline \end{tabular} 1
[FOOTNOTE:1]Footnote 1: The first column provides the identifier of the star. The second column contains the spectral type, with the ones marked with a \(\dagger\) coming from Maiz Apellaniz et al. (2016), the one marked with a \(\ddagger\) coming from Mahy et al. (2011), and the one marked with a \(\star\) coming from Skiff (2013), the others coming from Sota et al. (2011). The third column displays the magnitude in the H-band as found in the 2MASS All-Sky Catalog of Point Sources (Cutri et al., 2003). The fourth column gives the distance that separates us from the system according to Bailer-Jones et al. (2021), except for HD 202214 for which the distance comes from Megier et al. (2009). The fifth to eighth columns give the number of already known spectroscopic companions (SC), spectroscopic companions resolved by interferometry (SIC), interferometric companions (IC), and wide companions (WC), respectively. The references for already known companions can be found in the star-by-star description in Sect. 4. The ninth column gives the total number of known companions after this study.
\end{table}
Table 1: List of O-type stars observed with good quality data during the pilot survey.
## 3 Data analysis
To analyze the reduced data and look for companions in the interferometric signal, we used the CANDID4 software (Gallenne et al., 2015). CANDID is a tool developed to look for binarity signals in interferometric data as well as to determine the position and flux ratio of the companion(s). For nondetections, CANDID also provides upper limits on the contrast of potential companions. More information about the algorithms and methods can be found in Gallenne et al. (2015). To streamline the analysis of the data given the relatively large size of our sample, we established an automated procedure to analyze all the data consistently and uniformly.
Footnote 4: [https://github.com/amerand/CANDID](https://github.com/amerand/CANDID)
This procedure is summarized here. First, for each observation of a star, we input all the reduced data to CANDID. We then perform a first search for a companion, fixing the maximum separation to 50 mas, which is approximately our OWA (see Eq. 1). We fixed the step of the search grid to 1.0 mas. This step size is fine enough to find the global minimum in the range of separations we are probing, while being large enough to preserve a reasonable computation time (approximately an hour per observation). If a companion is detected (n\(\sigma\) \(>\) 5, with n\(\sigma\) being the significance of the binary model compared to a uniform disk model fitting the data), we use the bootstrap function of CANDID around the position of the found companion. This gives us errors on the position and flux ratio of the companion that are more realistic than those computed by the initial grid search method. Then, if a first companion is detected, we analytically remove its signal from the interferometric data and perform a search for a second companion, using the same parameters as for the first search. We note that the detection of a second companion relies on an indirect method assuming the signal of the first companion is perfectly analytically removed, with no residual. Therefore, these detections should be taken with caution and need further observations to be confirmed. Finally, we determine the upper limiting magnitude of a companion detectable in the data. If no companion is detected, we compute this limit on the reduced data directly. It gives us the limit in \(\Delta\)magnitude down to which we would have detected a companion. If a first companion is detected, we perform this computation on the data with the signal of the first companion analytically removed, which then gives the \(\Delta\)magnitude limit for the detection of a second companion. We note that we can only remove the signal of one companion, so the resulting detection limit might be biased if the signal of a second companion is present in the data.
Given the distance of these systems, even optical interferometry with 330 m baselines does not resolve the diameters of their stars. Hence, we consider them as point sources in CANDID, reducing the number of free parameters in the search for companions.
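To illustrate the principle behind this grid search, the self-contained toy sketch below fits a point-source binary model to simulated V2 data on a 1 mas grid. It is a drastic simplification of CANDID, which also fits the flux ratio at each grid node, uses the closure phases, and calibrates the n\(\sigma\) statistic properly; every simulated value in it is an arbitrary placeholder.

```python
import numpy as np

MAS = np.pi / 180.0 / 3600.0 / 1000.0  # radians per milliarcsecond

def binary_v2(u, v, dra_mas, ddec_mas, f):
    """V2 of a point-source binary with secondary/primary flux ratio f."""
    phase = 2.0 * np.pi * (u * dra_mas + v * ddec_mas) * MAS
    vis = (1.0 + f * np.exp(-1j * phase)) / (1.0 + f)
    return np.abs(vis) ** 2

def grid_search(u, v, v2_obs, v2_err, rmax=50.0, step=1.0, f=0.1):
    """Chi2 map over companion positions; returns the best-fit node and the
    chi2 of a single unresolved star (V2 = 1) for comparison."""
    offsets = np.arange(-rmax, rmax + step, step)
    chi2_single = float(np.sum(((v2_obs - 1.0) / v2_err) ** 2))
    best = (np.inf, 0.0, 0.0)
    for dra in offsets:
        for ddec in offsets:
            chi2 = np.sum(((v2_obs - binary_v2(u, v, dra, ddec, f)) / v2_err) ** 2)
            if chi2 < best[0]:
                best = (chi2, dra, ddec)
    return best, chi2_single

# Simulated snapshot: CHARA-like spatial frequencies and an injected
# companion at (+10, -5) mas with a 10% flux ratio.
rng = np.random.default_rng(0)
u = rng.uniform(-2e8, 2e8, 45)  # B/lambda ~ 330 m / 1.6e-6 m ~ 2e8 rad^-1
v = rng.uniform(-2e8, 2e8, 45)
v2_err = 0.02 * np.ones(45)
v2_obs = binary_v2(u, v, 10.0, -5.0, 0.1) + rng.normal(0.0, 0.02, 45)

(chi2_bin, dra, ddec), chi2_single = grid_search(u, v, v2_obs, v2_err)
# V2-only data cannot distinguish (dra, ddec) from (-dra, -ddec); the closure
# phases used by CANDID lift this 180-degree ambiguity.
print(f"best node: ({dra:+.0f}, {ddec:+.0f}) mas; "
      f"chi2 binary = {chi2_bin:.1f} vs single star = {chi2_single:.1f}")
```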
## 4 Results
A summary of the results is shown in Table 2 for the first companion search and in Table 3 for the search for a second companion, together with the parameters that characterize the detections.
### New detections
Here we summarize the newly detected companions.
**Cyg OB2-5 A / BD+40 4220A / Schulte 5:** This system includes a short 6.6 d period binary, along with other components on wider orbits (Rauw et al., 1999, 2019). Two distant companions were already known at separations of 0.93\({}^{\prime\prime}\) and 5.55\({}^{\prime\prime}\) (Maiz Apellaniz, 2010; Caballero-Nieves et al., 2014, 2020), largely outside the OWA of our observations. Cyg OB2-5 A is a known PACWB, with the nonthermal radio emission mainly associated with a wide orbit with a period of about 6.7 years (Kennedy et al., 2010). We detect a companion in both observations of June 2018 and June 2019, at a mean separation of 12.18\(\pm\)0.35 mas and a mean \(\Delta H=1.63\pm 0.3\) mag, and a second companion in the observation of June 2019 at a separation of 6.51\(\pm\)0.26 mas and \(\Delta H=4.13\pm 0.01\) mag. The computed detection limit for the search for a second companion in the June 2018 observation is 4 magnitudes fainter than the primary component, while the second companion detected in June 2019 is 4.13 magnitudes fainter. This could explain why we did not detect it in the June 2018 data, but further observations are necessary to confirm this second companion. This is the first direct detection for both companions. The wind-wind collision between the combined wind of the inner system and the wind of the newly detected companions is likely the cause of the
Figure 3: Example of change in the uv-plane modeled with the ASPRO2 software. Left: global view of the uv-plane. Different colors are for different pairs of telescopes (baselines). Right: zoom on the change in the uv-plane for the fastest moving baseline. Each straight black line indicates the uv-coordinate for this baseline for an instant snapshot, and each snapshot is separated by 5 minutes. The regularly spaced, orange lines display the V2 values modeled for a binary with a separation of 50 mas.
synchrotron radio emission. Long-term monitoring should allow us to determine which companion is associated with the \(\sim\)7-year period.
**Cyg OB2-10 / 2MASS J20334610+4133010 / Schulte 10:** Three companions were previously observed at separations of 0.21\({}^{\prime\prime}\), 0.74\({}^{\prime\prime}\), and 4.16\({}^{\prime\prime}\), with \(\Delta K=2.80\pm 0.78\), \(5.24\pm 0.05\), and \(6.03\pm 0.07\) mag (Caballero-Nieves et al. 2020). In the observation of June 2018, we detected a companion at a separation of 7.35\(\pm\)0.20 mas and \(\Delta H=2.45\pm 0.03\) mag, but we do not detect it again in the observation of June 2019. More observations would be useful to confirm this new companion detection.
**HD 17505:** This object is known as a hierarchical triple system (Hillwig et al. 2006; Sota et al. 2014). The system includes a close binary with an orbital period of 8.571 days, separated from the primary component by 2.161\({}^{\prime\prime}\) (Maiz Apellaniz et al. 2019). We detect an additional companion at a separation of 15.43\(\pm\)0.02 mas from the primary component and \(\Delta H=0.35\pm 0.04\) mag in two different observations separated by 1 day. This companion is detected for the first time, making this a quadruple system.
**HD 19820:** This system is a known spectroscopic binary (SB) with a period of 3.36 days (Hill et al. 1994). We detect a companion at a mean separation of 13.87\(\pm\)0.03 mas and \(\Delta H=2.57\pm 0.01\) mag in two different observations separated by 1 day, and another companion at a separation of 6.96\(\pm\)0.11 mas and \(\Delta H=4.16\pm 0.01\) mag in only one of these observations. The detection of this second companion relies on an indirect method assuming the signal of the first companion is perfectly analytically removed, with no residual. Therefore, this detection should be taken with caution and will require further observations for confirmation.
**HD 24431:** This system has a companion situated at 0.72\({}^{\prime\prime}\) (Mason et al. 1998; Turner et al. 2008). We detect a new companion with a separation of 9.74\(\pm\)0.02 mas and \(\Delta H=1.37\pm 0.01\) mag.
**HD 28446:** This system was proposed to be an SB2 (double-lined spectroscopic binary) with a period of 1.31 days by Mayer et al. (1994), but this has not been confirmed since. A third component is known at a separation of 10\({}^{\prime\prime}\) (Eggleton & Tokovinin 2008). We detect a fourth companion in both our observations of February 2018 and June 2018, with a mean separation of 26.16\(\pm\)0.13 mas and a mean \(\Delta H=1.35\pm 0.08\) mag.
**HD 34078:** This star is a known runaway (Hoogerwerf et al. 2001). Candidate companions have been detected by direct imaging at separations of 8.4\({}^{\prime\prime}\) (Mason et al. 1998) and 0.35\({}^{\prime\prime}\) (Turner et al. 2008), but both of those detections are suspected to be field stars observed along the line of sight of this star. We detected a new companion in two different observations, at a separation of 6.85\(\pm\)0.07 mas and \(\Delta H=2.76\pm 0.02\) mag in December 2017, and at a separation of 1.74\(\pm\)0.20 mas and \(\Delta H=3.29\pm 0.03\) mag in September 2018.
**HD 36861:** This star (\(\lambda\) Ori A) is known to be variable (Fullerton et al. 1996) and is part of a wide binary (HD 36861J), although the components A and B of the system \(\lambda\) Ori may not be physically bound (Lindroos 1985; Mason et al. 1998). \(\lambda\) Ori A had no detected close companion so far (Mason et al. 2009), but we detect a companion in our observation of February 2018, with a separation of 10.13\(\pm\)0.05 mas and \(\Delta H=3.30\pm 0.02\) mag. This companion was not detected again in our observation of June 2018, despite the computed limiting magnitude of detection being \(\Delta H=5.03\) mag. The reason for this nondetection in the second observation is unknown. A new observation of this system could confirm this new companion.
**HD 47129:** This is Plaskett's star. It is a known SB2 with a period of 14.4 days (Linder et al. 2008). There are also two known visual companions at 0.78\({}^{\prime\prime}\) and 1.12\({}^{\prime\prime}\) (Turner et al. 2008). The SMASH+ survey resolved a faint companion at 36 mas with \(\Delta H\sim 4.0\) mag with the NACO/SAM instrument, but it was too faint to be confirmed by PIONIER, and the uncertainties on the separation found by NACO/SAM were large. We detect a companion at a separation of 32.29\(\pm\)0.06 mas and \(\Delta H=4.6\pm 0.01\) mag. This detection confirms the NACO/SAM candidate, providing compelling evidence for the existence of this companion. Interestingly, this long-period companion is consistent with the likely membership of this system in the PACWB category, based on radio results published by Kurapati et al. (2017).
**HD 207198:** This system has a known companion at a separation of 17.64\({}^{\prime\prime}\) (Mason et al. 2004). We detect for this system a companion at a separation of 41.07\(\pm\)0.04 mas and \(\Delta H=4.68\pm 0.01\) mag. This companion is detected for the first time. We note that this detection is close to the OWA.
**HD 229196:** We detect for the first time in this system a companion, situated at a separation of 5.88\(\pm\)0.02 mas and a mean \(\Delta H=2.80\pm 0.04\) mag in two different observations separated by 1 day.
### Already detected
Here we summarize the redetection of companions that were already known from other techniques or previous optical interferometric observations.
**Cyg OB2-9 / HIP 101419:** This PACWB is a known, very eccentric SB2 (Blomme et al. 2013; Maiz Apellaniz et al. 2019; Caballero-Nieves et al. 2020), with a period estimated at 2.4 years and an eccentricity of 0.713 (Naze et al. 2010, 2012), and another companion at a separation of 21\({}^{\prime\prime}\) (Maiz Apellaniz 2010). We detect a companion at a mean separation of 0.77\(\pm\)0.01 mas and a mean \(\Delta H=0.42\pm 0.04\) mag in two observations separated by two days. With the orbital parameters in Blomme et al. (2013), we derive a minimum projected separation of \(\sim 2.04\) AU for the known SB2 companion. From the distance of the system and the angular separation at which we detect our companion, we compute a projected separation of \(\sim 1.38\) AU. Taking the uncertainties into account, our detection probably corresponds to the SB2 component observed close to its periastron. Further observations could confirm this.
**HD 47839 / 15 Mon:** This system has a known companion at \(\simeq 0.1^{\prime\prime}\), with a magnitude difference of 1.6 in the visible (Hutter et al. 2021), and a third component at a wider (3\({}^{\prime\prime}\)) separation (Mason et al. 1998; Sana et al. 2014). We detect a companion at a separation of at least 49.19\(\pm\)0.32 mas and \(\Delta H=1.81\pm 0.01\) mag. This detection is at the limit of the OWA, which means that the detected companion probably lies further out, as discussed in Sect. 6.1. This companion is probably the known one at around 100 mas, as the magnitude differences are compatible and a separation this close to the OWA cannot rule out the known companion.
**HD 167971:** This system is a known hierarchical triple system (Leitherer et al. 1987; De Becker et al. 2012; Le Bouquin et al. 2017; Sanchez-Bermudez et al. 2019), which also turns out to be the brightest synchrotron-emitting O-type PACWB in the catalog (De Becker & Raucq 2013). The central binary has an orbital period of 3.3 days, and the third component orbits the inner binary on a timescale of 21.4 years. We detect a companion at 19.89\(\pm\)0.01 mas and \(\Delta H=0.61\pm 0.01\) mag. The separation of the detected companion is compatible with that of the outer
component of the system measured in De Becker et al. (2012) and Le Bouquin et al. (2017).
**HD 193322:** This is a complex multiple system. The A component consists of a single star Aa orbiting a 312-day binary Ab in 35 years (ten Brummelaar et al. 2011). Three other components are also known, with a separation of 2.6\({}^{\prime\prime}\) for the closest (Turner et al. 2008). We detect a companion at a separation of at least 47.33 mas and a maximum \(\Delta H=0.06^{+0.06}_{-0.05}\) mag in our three observations of this system in June 2018, September 2018, and June 2019. These detections are at the limit of the OWA, which means that the companion may lie somewhat further out, as discussed in Sect. 6.1. The detected companion most likely corresponds to the already known pair Aa-Ab, its separation being compatible with the one reported in ten Brummelaar et al. (2011).
**HD 202214:** One close companion is already known from spectroscopy, with an orbital period of 81.30 days (Mante 2002). The system also has two wider companions, one at a separation of 0.071\({}^{\prime\prime}\) (Mante 2002) with \(\Delta V=0.6\) mag (Mason et al. 2009), and one at 1.0\({}^{\prime\prime}\) with \(\Delta V=0.3\) mag (Mason et al. 2009). We detect a companion at a separation of 47.27\(\pm\)0.05 mas and \(\Delta H=0.03^{+0.03}_{-0.04}\) mag. This detection is at the limit of the OWA, which means that the detected companion probably lies further out, as discussed in Sect. 6.1. We therefore cannot rule out that the detected companion is the known one at 71 mas. The difference between our magnitude difference and the one from Mason et al. (2009) could then be explained by the separation potentially being larger than our OWA, which biases our detection, since our observing setup is not optimized for companions outside of the OWA; the derived contrast might therefore be systematically biased.
**HD 206267:** This is a high-order multiple system (Maiz Apellaniz et al. 2019; Maiz Apellaniz & Barba 2020). The central component (AaAb) is composed of an SB2 (Aa) with a period of 3.71 days (Raucq et al. 2018) and another companion (Ab) separated from Aa by 0.1\({}^{\prime\prime}\), with a magnitude difference of 1.63\(\pm\)0.3 at a wavelength of \(\lambda=0.91\)\(\mu\)m (SDSS z filter5) (Maiz Apellaniz et al. 2020). A third component (B) is situated at a separation of 1.7\({}^{\prime\prime}\) with a magnitude difference of 5.72\(\pm\)0.13 at \(\lambda=0.91\)\(\mu\)m. Two other companions (C and D) are situated within 25\({}^{\prime\prime}\). We detect a companion at a separation of at least 49.52\(\pm\)0.22 mas with a difference of 1.64\(\pm\)0.02 mag in the H-band. This detection is at the limit of the OWA, which means that the detected companion probably lies further out, as discussed in Sect. 6.1. We therefore cannot rule out that this detection is the known companion at 0.1\({}^{\prime\prime}\); the magnitude differences are compatible with each other.
Footnote 5: [https://skyserver.sdss.org/dr1/en/proj/advanced/color/sdssfilters.asp](https://skyserver.sdss.org/dr1/en/proj/advanced/color/sdssfilters.asp)
### No detection
We do not detect interferometric companions around HD 30614 (SB1 system with a period of 3.68 days; Zeinalov & Musaev 1986), HD 47432 (one known companion at 0.78\({}^{\prime\prime}\); Turner et al. 2008), HD 188001 (single runaway star; Trigueros Paez et al. 2021), HD 195592 (spectroscopic binary with a period of a few days; De Becker et al. 2010), HD 201345 (one known wide companion at 7.38\({}^{\prime\prime}\); Turner et al. 2008), HD 206183 (two known wide companions at separations of 6.6\({}^{\prime\prime}\) and 11.6\({}^{\prime\prime}\); Mason et al. 1998), HD 209975 (four known wide companions at 3.79\({}^{\prime\prime}\), 4.14\({}^{\prime\prime}\), 19.8\({}^{\prime\prime}\), and 60.4\({}^{\prime\prime}\); Mason et al. 1998; Turner et al. 2008), HD 217086 (two known companions at 2.8\({}^{\prime\prime}\) and 3.1\({}^{\prime\prime}\); Mason et al. 1998; Turner et al. 2008), and HD 228779.
### Candidates
Some of our observations resulted in candidates with only a marginally significant detection criterion (\(3<n\sigma<5\)). Those systems are HD 45314 (an Oe star; Rauw et al. 2015), HD 210809, and HD 210839. More observations will be needed to validate or reject the presence of a companion. The positions of those candidates can be found in Table 2.
## 5 Statistical analysis
Among the 29 systems observed with good-quality data, we confirm the detection of 19 companions for 17 multiple systems (see Sections 4.1 and 4.2). Out of these 19 companions, 13 are detected for the first time. This gives us a multiplicity fraction \(f_{\rm m}=17/29=0.59\pm 0.09\), and a companion fraction of \(f_{\rm c}=19/29=0.66\pm 0.13\) in the range of separations to which we are sensitive. The uncertainty for the multiplicity fraction is obtained with a binomial uncertainty and the companion fraction uncertainty is obtained with a Poisson uncertainty.
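These values follow from the standard estimators sketched below; note that the exact Poisson convention adopted for the companion fraction uncertainty may differ slightly from the simple \(\sqrt{N}\) form used here.

```python
import math

n_sys, n_mult, n_comp = 29, 17, 19

f_m = n_mult / n_sys                            # multiplicity fraction
f_m_err = math.sqrt(f_m * (1.0 - f_m) / n_sys)  # binomial uncertainty
f_c = n_comp / n_sys                            # companion fraction
f_c_err = math.sqrt(n_comp) / n_sys             # Poisson uncertainty

print(f"f_m = {f_m:.2f} +/- {f_m_err:.2f}")  # 0.59 +/- 0.09
print(f"f_c = {f_c:.2f} +/- {f_c_err:.2f}")  # 0.66 +/- 0.15 with this convention
```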
We note that 38% of our sample actually corresponds to multiple systems with at least three components, and that in 28% of the sample the interferometric companion constitutes the detection of the outer orbit in a hierarchical triple system. This proportion of hierarchical triple systems should allow us to study the Kozai-Lidov effect (Naoz 2016) by characterizing their orbits.
### Comparison with SMASH+
The multiplicity fraction in this study is marginally higher than the one of SMASH+ (0.41 \(\pm\) 0.05). This difference could be explained by the fact that our preliminary magnitude limit (\(H<6.5\) mag) was brighter than the one of SMASH+ (\(H<7.5\) mag), which biases the observed sample toward a larger fraction of multiple systems.
By taking into account all known companions, summarized in Table 1, the total multiplicity fraction (\(0.86\pm 0.07\)) and the total companion fraction (\(2.10\pm 0.27\)) are consistent with those of SMASH+ (\(0.91\pm 0.03\) and \(2.1\pm 0.2\), respectively). We also find that in our sample \(38\pm 9\)% of the systems contain spectroscopic binaries, which is marginally lower than in the SMASH+ sample (\(49\pm 5\)%).
### Estimated mass ratio distribution
We define the mass ratio \(q\) as:
\[q=M_{\rm comp}/M_{\rm primary} \tag{3}\]
where \(M_{\rm comp}\) is the mass of the detected companion and \(M_{\rm primary}\) is the mass of the primary component of the system. This mass ratio can be estimated using the flux ratio between the two components. In this study, we use the same relation as in Le Bouquin et al. (2017), derived from Martins et al. (2005), which gives a good approximation of the mass ratio for main-sequence stars:
\[q=(f_{H})^{0.7} \tag{4}\]
where \(f_{H}\) is the flux ratio in the H-band, given by the CANDID analysis. We note that if the central component is an unresolved
binary, this method uses the combined flux of both components, so the estimated mass ratio will be biased.
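As a worked example of Eqs. (3) and (4), the snippet below converts the \(\Delta H\) measured for HD 24431 (Table 2) into a flux ratio and an estimated mass ratio.

```python
delta_H = 1.37                  # Delta H [mag] measured for HD 24431 (Table 2)
f_H = 10.0 ** (-0.4 * delta_H)  # magnitude difference -> flux ratio
q = f_H ** 0.7                  # Eq. (4)

print(f"f_H = {f_H:.3f}, q = {q:.2f}")  # f_H = 0.283, q = 0.41
```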
Figure 4 displays the distribution of the estimated mass ratios of the detected companions. The distribution seems to be bimodal, with a lack of companions between \(q=0.4\) and 0.6, and favors lower mass ratios. The apparent bimodality could simply result from the small-number statistics, with only 17 companions. However, we ran a Kuiper test to compare the estimated mass ratio distribution with a uniform distribution. The result of this test is a value of D = 0.41, and a probability of obtaining this value of D from a uniform distribution
\begin{table}
\begin{tabular}{l c c c c c c c c c} \hline \hline Target’s name & DATE-OBS & n\(\sigma\) & sep & P.A. & emax & emin & P.A. emax & \(\Delta H\) & det. lim. \\ & MJD & & [mas] & [deg] & [mas] & [mas] & [deg] & [mag] & \(\Delta\)mag(H) \\ \hline Cyg OB2-5 A & 58279.353 & 6.46 & 13.35 & 97.45 & 0.35 & 0.11 & \(-3.03\) & \(1.91_{-0.00}^{+0.02}\) & – \\... & 58657.396 & 8.03 & 11.00 & 107.12 & 0.00 & 0.00 & 26.56 & \(1.38_{-0.00}^{+0.02}\) & – \\ Cyg OB2-9 & 58386.154 & 8.03 & 0.76 & 148.90 & 0.01 & 0.00 & \(-13.26\) & \(0.38_{-0.02}^{+0.02}\) & – \\... & 58388.197 & 8.03 & 0.78 & 152.99 & 0.01 & 0.01 & 81.50 & \(0.45_{-0.01}^{+0.01}\) & – \\ Cyg OB2-10 & 58281.491 & 5.75 & 7.35 & \(-21.47\) & 0.20 & 0.03 & 38.96 & \(2.45_{-0.03}^{+0.03}\) & – \\... & 58657.446 & 2.96 & – & – & – & – & – & – & 3.24 \\ HD 17505 & 58386.393 & 8.03 & 15.43 & \(-141.24\) & 0.02 & 0.01 & 40.24 & \(0.35_{-0.03}^{+0.04}\) & – \\... & 58387.358 & 8.03 & 15.43 & \(-141.22\) & 0.01 & 0.00 & 52.60 & \(0.34_{-0.03}^{+0.02}\) & – \\ HD 19820 & 58386.331 & 8.03 & 13.89 & 82.35 & 0.03 & 0.01 & 1.55 & \(2.57_{-0.01}^{+0.01}\) & – \\... & 58387.424 & 8.03 & 13.85 & 82.56 & 0.02 & 0.02 & 36.70 & \(2.56_{-0.01}^{+0.01}\) & – \\ HD 24431 & 58385.388 & 49.96 & 9.74 & 146.19 & 0.02 & 0.01 & \(-82.80\) & \(1.37_{-0.00}^{+0.01}\) & – \\ HD 28446 & 58157.339 & 47.82 & 25.06 & \(-45.64\) & 0.13 & 0.02 & \(-86.04\) & \(1.42_{-0.01}^{+0.02}\) & – \\... & 58385.363 & 49.96 & 27.25 & \(-56.15\) & 0.01 & 0.01 & \(-52.32\) & \(1.28_{-0.01}^{+0.01}\) & – \\ HD 30614 & 58386.439 & 1.34 & – & – & – & – & – & – & 4.32 \\ HD 34078 & 58100.407 & 5.62 & 6.85 & 172.83 & 0.07 & 0.06 & 77.06 & \(2.76_{-0.02}^{+0.02}\) & – \\... & 58384.400 & 5.29 & 1.74 & 171.17 & 0.20 & 0.04 & \(-61.50\) & \(3.29_{-0.03}^{+0.03}\) & – \\ HD 36861 & 58156.175 & 6.51 & 10.13 & \(-11.06\) & 0.05 & 0.04 & 59.92 & \(3.3_{-0.02}^{+0.02}\) & – \\... & 58387.479 & 0.96 & – & – & – & – & – & 5.03 \\ HD 45314 & 58386.497 & 3.25 & 27.80 & 167.42 & 0.09 & 0.04 & \(-21.81\) & \(4.24_{-0.01}^{+0.00}\) & – \\ HD 47129 & 58385.529 & 5.01 & 32.39 & 37.50 & 0.06 & 0.03 & \(-73.35\) & \(4.6_{-0.01}^{+0.01}\) & – \\ HD 47432 & 58387.523 & 0.66 & – & – & – & – & – & 4.91 \\... & 58388.512 & 0.00 & – & – & – & – & – & 4.99 \\ HD 47839 & 58386.535 & 17.76 & 49.19\({}^{*}\) & \(-72.92\) & 0.32 & 0.17 & 86.62 & \(1.81_{-0.01}^{+0.01}\) & – \\ HD 167971 & 58658.289 & 49.96 & 19.89 & \(-98.40\) & 0.01 & 0.00 & \(-20.18\) & \(0.61_{-0.01}^{+0.01}\) & – \\ HD 188001 & 58387.135 & 2.23 & – & – & – & – & – & 4.25 \\ HD 193322 & 58281.321 & 49.96 & 47.33\({}^{*}\) & 15.78 & 0.24 & 0.13 & 10.57 & \(0.06_{-0.05}^{+0.06}\) & – \\... & 58388.146 & 49.96 & 48.70\({}^{*}\) & 148.78 & 0.27 & 0.08 & \(-17.21\) & \(0.01_{-0.04}^{+0.1}\) & – \\... 
& 58658.367 & 49.96 & 49.18\({}^{*}\) & 148.12 & 0.11 & 0.05 & \(-25.95\) & \(-0.01_{-0.00}^{+0.10}\) & – \\ HD 195592 & 58280.337 & 1.89 & – & – & – & – & – & 3.63 \\ HD 201345 & 58661.420 & 2.82 & – & – & – & – & – & 4.38 \\ HD 202214 & 58658.412 & 49.96 & 47.27\({}^{*}\) & \(-129.68\) & 0.05 & 0.04 & 71.07 & \(0.03_{-0.04}^{+0.03}\) & – \\ HD 206183 & 58661.470 & 0.67 & – & – & – & – & – & 5.83 \\ HD 206267 & 58660.442 & 8.03 & 49.52\({}^{*}\) & \(-145.71\) & 0.22 & 0.06 & 0.38 & \(1.64_{-0.01}^{+0.02}\) & – \\ HD 207198 & 58658.449 & 7.10 & 41.07\({}^{*}\) & \(-32.93\) & 0.04 & 0.02 & \(-34.19\) & \(4.68_{-0.01}^{+0.00}\) & – \\ HD 209975 & 58278.401 & 0.76 & – & – & – & – & – & 5.16 \\ HD 210809 & 58660.490 & 3.14 & 4.45 & 70.72 & 0.09 & 0.04 & \(-64.87\) & \(4.02_{-0.00}^{+0.01}\) & – \\ HD 210839 & 58657.486 & 4.93 & 6.72 & 79.36 & 0.08 & 0.02 & \(-60.61\) & \(5.92_{-0.00}^{+0.00}\) & – \\ HD 217
of 2.7%. From this low probability, we can conclude that the actual mass ratio distribution is most probably not uniform.
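Such a test can be run, for instance, with astropy, as in the sketch below; the function name and signature follow the astropy.stats documentation and should be checked against the installed version, and the listed \(q\) values are illustrative, not our exact sample.

```python
import numpy as np
from astropy.stats import kuiper

# Illustrative sample of 17 estimated mass ratios (not our exact values),
# bimodal with a gap between q = 0.4 and 0.6 as in Fig. 4.
q_values = np.array([0.15, 0.20, 0.25, 0.30, 0.30, 0.35, 0.35, 0.40,
                     0.65, 0.70, 0.75, 0.80, 0.85, 0.90, 0.95, 0.97, 1.00])

# The default cdf is the uniform distribution on [0, 1].
D, fpp = kuiper(q_values)
print(f"D = {D:.2f}, probability under the uniform hypothesis = {fpp:.1%}")
```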
The preference for low mass ratios goes against the observational bias of a magnitude-limited survey. This bias should favor the inclusion of binaries with bright companions, that is, with a flux ratio, hence a mass ratio, close to 1. This tendency therefore seems to reflect the intrinsic mass ratio distribution of O-type stars. The results of the survey of massive stars in the Orion region (GRAVITY collaboration et al. 2018) show a similar trend, with a mass ratio distribution following a power law \(\propto q^{\alpha}\) with \(\alpha=1.7\).
### Projected separation distribution
In the absence of any estimate of the inclination of the orbits, the absolute physical separations cannot be determined. Instead, we determined the projected separations of the detected companions, using the distances published by Bailer-Jones et al. (2021). Figure 5 shows the distribution of the separations of the companions detected in this study. The distribution seems to favor the middle and upper part of the probed separation range, from 10 to 100 AU. We note that the companions detected close to the OWA may actually be located further out, meaning that the real distribution could have a tail at larger separations. This figure only shows the companions that were (re)detected in this study.
Figure 4: Histogram of detected companions as a function of the estimated mass ratio \(q\), when taking into account only the first detected companions (blue), and when adding the second companions detected (orange).
\begin{table}
\begin{tabular}{l c c c c c c c c c} \hline \hline Target’s name & DATE-OBS & n\(\sigma\) & sep & P.A. & emax & emin & P.A. emax & \(\Delta\)H & det. lim. \\ & MJD & [mas] & [deg] & [mas] & [mas] & [deg] & [mag] & \(\Delta\)mag(H) \\ \hline Cyg OB2-5 A & 58279.353 & 0.74 & – & – & – & – & – & – & 4.06 \\... & 58657.396 & 8.03 & 6.51 & \(-\)85.06 & 0.26 & 0.04 & 85.05 & 4.15\({}^{+0.01}_{-0.01}\) & – \\ Cyg OB2-9 & 58386.154 & 0.76 & – & – & – & – & – & – & 4.98 \\... & 58388.197 & 0.67 & – & – & – & – & – & – & 5.88 \\ Cyg OB2-10 & 58281.491 & 1.62 & – & – & – & – & – & – & 3.51 \\ HD 17505 & 58386.393 & 3.72 & – & – & – & – & – & – & 4.19 \\... & 58387.358 & 0.7 & – & – & – & – & – & – & 1.56 \\ HD 19820 & 58386.331 & 8.03 & 6.96 & 91.45 & 0.11 & 0.05 & 89.91 & 4.16\({}^{+0.01}_{-0.01}\) & – \\... & 58387.424 & 0.99 & – & – & – & – & – & – & 4.29 \\ HD 24431 & 58385.388 & 1.83 & – & – & – & – & – & – & 4.77 \\ HD 28446 & 58157.339 & 0.81 & – & – & – & – & – & – & 3.28 \\... & 58385.363 & 1.81 & – & – & – & – & – & – & 4.72 \\ HD 34078 & 58100.407 & 4.74 & – & – & – & – & – & – & 3.33 \\... & 58384.4 & 2.13 & – & – & – & – & – & – & 4.31 \\ HD 36861 & 58156.175 & 1.18 & – & – & – & – & – & – & 4.83 \\ HD 45314 & 58386.497 & 2.0 & – & – & – & – & – & – & 4.74 \\ HD 47129 & 58385.529 & 0.78 & – & – & – & – & – & – & 5.28 \\ HD 47839 & 58386.535 & 6.66 & – & – & – & – & – & – & 3.34 \\ HD 167971 & 58658.289 & 1.17 & – & – & – & – & – & – & 3.78 \\ HD 193322 & 58281.321 & 4.52 & – & – & – & – & – & – & 3.03 \\... & 58388.146 & 2.33 & – & – & – & – & – & – & 3.16 \\... & 58658.367 & 6.72 & – & – & – & – & – & 3.07 \\ HD 202214 & 58658.412 & 15.7 & – & – & – & – & – & 2.86 \\ HD 206267 & 58660.442 & 4.62 & – & – & – & – & – & 3.08 \\ HD 207198 & 58658.449 & 0.82 & – & – & – & – & – & 5.78 \\ HD 210809 & 58660.49 & 1.03 & – & – & – & – & – & 5.05 \\ HD 210839 & 58657.486 & 2.98 & – & – & – & – & – & 5.51 \\ HD 229196 & 58385.195 & 3.07 & – & – & – & – & – & 4.38 \\... & 58388.244 & 1.24 & – & – & – & – & – & 4.42 \\ \hline \end{tabular} 1
\end{table}
Table 3: Same as Table 2 but for the second detection.
### Estimated mass ratio as a function of the projected separation
Figure 6 shows the estimated mass ratio \(q\) as a function of the projected separation of the companions detected in this study. Error bars on separation ending with an arrow pointing right mark the companions detected close to the OWA, for which the plotted separation is only a lower limit. The black points are for the companions detected with other techniques for which the mass ratio and physical separation are known or can be determined from the literature.
We notice the larger number of companions with an estimated mass ratio \(q<0.6\), as described in Sect. 5.2. However, we cannot discern any correlation in this plot between the estimated mass ratio \(q\) and the physical separation for the companions detected in this study, as the companions seem relatively homogeneously distributed in the figure.
Furthermore, when taking into consideration the companions previously detected by other techniques, a correlation seems to appear, with \(q\) being anticorrelated with the separation. This anticorrelation would be in favor of the competitive accretion formation process (GRAVITY collaboration et al. 2018). One should note, however, that this result could be due to the different detection biases of the various techniques. A further study taking these biases into account is necessary to reach a stronger conclusion.
We caution that the plot is affected by some factors that deserve a few comments. First of all, a larger statistical sample is certainly needed to establish whether or not there is a correlation between these two parameters for massive star systems. For subsequent developments, we stress that we should ideally base our discussion on absolute physical separations, and not on projected ones, which requires knowledge of the inclination. This can be obtained through a suitable interferometric follow-up to derive the relative astrometric orbits of the systems. Finally, the best test for a potential (lack of) correlation with the mass ratio should rely on an estimate of the semi-major axis, and not on the measured separation at one specific epoch. The available measurements have been obtained at arbitrary orbital phases, and in the case of a significantly eccentric orbit, the measured separation is not necessarily a good proxy for the semi-major axis.
## 6 Discussion
### Outer working angle
As we limited our search for companions to the OWA, the results for the companions found close to this limit (separation > 40 mas: HD 47839, HD 193322, HD 202214, HD 207198, and HD 206267) might correspond to a local minimum of \(\chi^{2}\), while the global minimum may lie outside our search range. The detection of those companions remains valid, as the interferometric signal still favors the presence of a companion over a uniform disk, but the derived separation should be considered a lower limit. Further observations with a wider OWA could establish the true separation of these companions, but this requires a higher spectral resolution, which is possible with MIRC-X at the cost of sensitivity and can therefore only be achieved for the brightest systems. Alternative methods such as sparse aperture masking may thus be preferred.
### Candidates for orbital parameter measurements
While the statistical analysis will require the large survey data, the currently detected companions already give us good candidates for follow-up orbital parameter measurements. To be a good candidate for interferometric follow-up, we adopt here a maximum orbital period of 10 years, which is a reasonable timescale for follow-up observations.
To obtain an estimation of the orbital periods of the detected companions, we use Kepler's third law:
\[P^{2}=\frac{a^{3}}{M_{1}+M_{2}} \tag{5}\]
where \(P\) is the period in years, \(a\) is the semi-major axis of the orbital ellipse in AU, and \(M_{1}\) and \(M_{2}\) are the masses of the two orbiting objects, in solar masses.
For our estimation of \(P\), we approximate the semi-major axis \(a\) with the projected separation that we computed earlier. This
Figure 5: Histogram of detected companions as a function of the projected separation in Astronomical Unit (AU). The distribution in blue takes into account only the first companions. The distribution in orange considers all companions.
Figure 6: Estimated mass ratio as a function of the projected separation in AU of the detected companions. We note that some error bars are hidden by the size of the markers. Error bars on separation ending with an arrow pointing right are to distinguish the companions detected close to the OWA. The black points are for known companions detected with other observational techniques.
will lead to an overestimate of \(a\), and hence of \(P\), for eccentric systems, as companions spend a larger fraction of the orbit at separations \(>a\) (Kepler's second law). For the masses of the two objects, we use the \(m_{\rm evol}\) values in Table 4 of Weidner & Vink (2010), which gives theoretical masses of O-type stars as a function of their spectral type. We used the spectral types of our objects listed in Table 1 to estimate the mass of the central object, and the estimated mass ratio \(q\) to derive the mass of the detected companion.
With the flux ratio, separation, and estimated mass ratio, we can also estimate the shift of the photo-center of our binary systems. This estimated shift identifies candidate systems for which the orbital parameters could be determined with Gaia, as discussed in Le Bouquin et al. (2017). From Eq. (5) in that paper, we can determine that:
\[\mu=\frac{a(q-f)}{(f+1)(q+1)} \tag{6}\]
For our estimation of \(\mu\), we used the separation of our detected companions as an estimation of \(a\), the estimated mass ratio \(q\), and the detected flux ratio \(f\).
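The sketch below evaluates Eqs. (5) and (6) for an illustrative system; the mass, mass ratio, separation, and distance are placeholders, not values from Table 1.

```python
import math

# Illustrative inputs (placeholders, not values from Table 1):
M1 = 25.0     # primary mass [M_sun], e.g., from Table 4 of Weidner & Vink (2010)
q = 0.4       # estimated mass ratio from Eq. (4)
a_au = 20.0   # semi-major axis approximated by the projected separation [AU]
a_mas = 20.0  # the same separation on the sky [mas] for a system at 1 kpc

P = math.sqrt(a_au ** 3 / (M1 * (1.0 + q)))     # Eq. (5), period in years

f = q ** (1.0 / 0.7)                            # flux ratio implied by Eq. (4)
mu = a_mas * (q - f) / ((f + 1.0) * (q + 1.0))  # Eq. (6), photo-center shift [mas]

print(f"P ~ {P:.1f} yr, mu ~ {mu:.2f} mas")     # ~15.1 yr, ~1.46 mas
```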
Figure 7 displays the estimated photo-center shift \(\mu\) in mas as a function of the estimated orbital period in years. The limit on \(\mu\) detectable by Gaia is set to 0.1 mas, because Gaia needs a shift of at least three times its single-transit accuracy of 0.034 mas on bright targets (\(G<12\), see Sect. 4 of Perryman et al. 2014), which is the case for all our targets. The conservative limit in period for which Gaia can determine the orbital parameters is set to 6 years: up to this limit, Gaia will be able to determine orbital parameters from a detectable astrometric shift of the photo-center in almost any case. This limit, however, was derived for the nominal mission duration of 5 years. Gaia has now observed for 7.8 years, and the anticipated mission lifetime is at least 10 years. We therefore place the expected limit in Fig. 7 at 10 years, which also corresponds to the period we set for the interferometric follow-up.
First, we see that eight companions have an estimated orbital period of less than 10 years, which makes them suitable for direct interferometric follow-up. Then, we see that Gaia should be able to obtain the orbital parameters of seven of our systems, which have an estimated orbital period of approximately 6 years or less and an estimated photo-center shift of more than 0.1 mas. For periods larger than 10 years, the orbital parameters will not be well constrained by Gaia alone, but combined with the interferometric data it should be possible to constrain them well. Furthermore, the orbital parameters from Gaia are interesting because, in combination with the information from interferometry, one could measure the individual masses of each component of a multiple system. Currently, interferometry needs to be combined with spectroscopy to measure individual masses (e.g., Le Bouquin et al. 2017; Mahy et al. 2018; Fabry et al. 2021; Sanchez-Bermudez et al. 2022). The technique with Gaia would not need spectroscopy: Gaia provides the distance, which, combined with the angular size of the orbit from interferometry, yields the total mass of the system; combining the size of the relative orbit with the size of the photo-center orbit provided by Gaia yields the mass ratio of the components. Combining the total mass and the mass ratio then provides the individual masses of these components. Gaia would also probe a different range of separations than spectroscopy, making the two techniques complementary. In addition, one would be able to compare the results of the two methods and remove the potential biases that each of them could present, as the range of masses estimated by spectroscopy alone shows discrepancies with the masses measured by combining spectroscopy and interferometry (Le Bouquin et al. 2017).
## 7 Conclusion
From the results of this pilot study, we conclude that a large survey of the multiplicity of northern O-type stars can be performed with the MIRC-X instrument at the CHARA Array. Indeed, we demonstrated that we can constrain the multiplicity of O-star systems with an H-band magnitude of around 7.5 mag in good atmospheric conditions. This magnitude limit gives access to more than 120 northern O-type stars. From the experience gained with this pilot survey, we can observe six to eight science targets per night in normal conditions. Taking into account an average loss of 25% of the time due to bad weather or technical issues, we estimate that the large program will require approximately 25 nights to be completed.
This study also detected 19 companions in 17 different systems, including 13 companions detected for the first time, notably the companion likely responsible for the nonthermal emission in Cyg OB2-5 A, and confirmed the candidate companion of HD 47129 previously suggested by SMASH+. The preliminary statistical study gives us a multiplicity fraction \(f_{\rm m}=17/29=0.59\pm 0.09\) and a companion fraction \(f_{\rm c}=19/29=0.66\pm 0.13\). Those results are consistent with the results of SMASH+, the southern large survey already performed.
We also demonstrated that a number of the detected systems are suitable for follow-up studies of their orbital parameters, either by interferometry (eight systems) and/or with Gaia (seven systems). The results obtained in this study are promising in terms of the scientific return of a more ambitious project focusing on a large sample and involving repeated observations spread over several years.
###### Acknowledgements.
We would like to thank Carine Babusiaux, from IPAG, for her help in determining the capability of Gaia to help constrain orbital parameters, and Laetitia Rodet for her explanation of the relevance of the study of the Kozai-Lidov cycle. We also thank the referee for their constructive comments that resulted in a better paper. L.M. and E.G. acknowledge the European Space Agency (ESA) and the Belgian Federal Science Policy Office
Figure 7: Estimated photo-center shift \(\mu\) in mas as a function of the estimated orbital period in years. The red lines show the limits of Gaia's capability to measure the orbital parameters of a system, with the hatched area not accessible to Gaia.
(BELSPO) for their support in the framework of the PRODEX programme. This work is based upon observations obtained with the Georgia State University Center for High Angular Resolution Astronomy Array at Mount Wilson Observatory. The CHARA Array is supported by the National Science Foundation under Grants No. AST-1636624 and AST-2034336. Institutional support has been provided by the GSU College of Arts and Sciences and the GSU Office of the Vice President for Research and Economic Development. Time at the CHARA Array was granted through the NOIRLab community access program (NOIRLab 2018A-0158, 2018B-0123, 2019A-008b; PI: C. Lanthermann). S.K., N.A., and C.D.L. acknowledge support from an ERC Starting Grant (Grant Agreement No. 639889), an ERC Consolidator Grant (Grant Agreement ID 101003096), and an STFC Consolidated Grant (ST/V000721/1). A.L. received funding from an STFC studentship (S00008203). J.D.M. acknowledges funding for the development of MIRC-X (NASA-A&A NNK1604203G, NSF-AST 1909165) and MYSTIC (NSF-ATI 1506540, NSF-AST 1901965). This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (Grant agreement No. 772225: MULTIPLES). This research has made use of the Jean-Marie Mariotti Center Aspro service1. This work was supported by the Programme National de Physique Stellaire (PNPS) of CNRS/INSU, co-funded by CEA and CNES. This work has been partially supported by the LabEx FOCUS ANR-11-LABX-0013. This research has made use of the SIMBAD database, operated at CDS, Strasbourg, France.
Footnote 1: Available at [http://www.jmmc.fr/aspro](http://www.jmmc.fr/aspro)
| 2308.16258 | Robust Principles: Architectural Design Principles for Adversarially Robust CNNs | Our research aims to unify existing works' diverging opinions on how architectural components affect the adversarial robustness of CNNs. To accomplish our goal, we synthesize a suite of three generalizable robust architectural design principles: (a) optimal range for depth and width configurations, (b) preferring convolutional over patchify stem stage, and (c) robust residual block design through adopting squeeze and excitation blocks and non-parametric smooth activation functions. Through extensive experiments across a wide spectrum of dataset scales, adversarial training methods, model parameters, and network design spaces, our principles consistently and markedly improve AutoAttack accuracy: 1-3 percentage points (pp) on CIFAR-10 and CIFAR-100, and 4-9 pp on ImageNet. The code is publicly available at https://github.com/poloclub/robust-principles. | ShengYun Peng, Weilin Xu, Cory Cornelius, Matthew Hull, Kevin Li, Rahul Duggal, Mansi Phute, Jason Martin, Duen Horng Chau | 2023-08-30T18:31:51Z | http://arxiv.org/abs/2308.16258v2 |

# Robust Principles: Architectural Design Principles for Adversarially Robust CNNs
###### Abstract
We aim to unify existing works' diverging opinions on how architectural components affect the adversarial robustness of CNNs. To achieve our goal, we synthesize a suite of generalizable robust architectural design principles: (a) optimal range for _depth_ and _width_ configurations, (b) preferring _convolutional_ over _patchify_ stem stage, and (c) robust residual block design by adopting squeeze and excitation blocks, and non-parametric smooth activation functions. Through extensive experiments across a wide spectrum of _dataset scales_, _adversarial training methods_, _model parameters_, and _network design spaces_, our principles consistently and markedly improve AutoAttack accuracy: 1-3 percentage points (pp) on CIFAR-10 and CIFAR-100, and 4-9 pp on ImageNet. The code is publicly available at [https://github.com/poloclub/robust-principles](https://github.com/poloclub/robust-principles).
## 1 Introduction
Convolutional neural networks (CNNs) and Transformers are staples in computer vision research [1, 2], but they are vulnerable to adversarial attacks [1, 2, 3], and adversarial training (AT) remains the most effective defense. Existing works offer diverging opinions on how architectural components affect the adversarial robustness of CNNs, and most conclusions are drawn on small-scale datasets within a single design space. To unify these views, we make the following contributions:
1. **A suite of generalizable robust architectural design principles for CNNs** (Fig. 1):
**(a) Optimal Range for Depth and Width Configurations.** Despite the popularity of 4-stage residual networks on ImageNet, existing exploration has been constrained to 3-stage designs [12, 13, 14]. We discover a flexible depth and width scaling rule that does not place a restriction on the total number of stages, and we verify its generalizability and optimality through extensive experiments. (Sec. 4.1)
**(b) _Convolutional_ over _Patchify_ Stem Stage.** _Convolutional_ and _patchify_ stems are commonly used in CNNs and Transformers to downsample input images, exploiting the redundancy inherent in natural images. A _convolutional_ stem slides overlapping convolution kernels over the input image, while a _patchify_ stem splits the input image into \(p\times p\) non-overlapping patches. We discover that the convolutional stem, especially with a postponed downsampling design, outperforms the patchify stem due to its less aggressive stride-two downsampling and overlapped convolution kernels. (Sec. 4.2)
**(c) Robust Residual Block Design.** Our investigations of how the SE block and activations affect robustness present new findings on ImageNet that differ from previous research on CIFAR. Through a hyperparameter sweep, we find the reduction ratio \(r\) in the SE block is negatively correlated with robustness, a new discovery not previously reported, as prior work was based on a fixed \(r\)[14]. We also confirm that non-parametric smooth activations consistently improve robustness on CIFAR-10, CIFAR-100, and ImageNet. (Sec. 4.3)
2. **Consistent and marked improvements on adversarial robustness.** We verify the generalization of the three design principles across a wide spectrum of _dataset scales_ (CIFAR-10, CIFAR-100 [14], ImageNet [14]), _AT methods_ (standard adversarial training (SAT) [14], TRADES [14], Fast-AT [14], MART [14], and diffusion-augmented AT [14]), _model parameters_ (from 26M to 267M), and _network design spaces_ (variants of WRN [14] and ResNet [14]). Our experiments demonstrate that all design principles consistently and markedly improve AutoAttack (AA) accuracy by 1-3 percentage points (pp) on CIFAR-10 and CIFAR-100 -- boosting even the SOTA diffusion-augmented AT [14] by such amounts -- and 4-9 pp on ImageNet. In particular, our robustified WRN-70-16 boosts the AA accuracy by 1.31 pp (65.02% \(\to\) 66.33%) on CIFAR-10, and 0.96 pp (37.77% \(\to\) 38.73%) on CIFAR-100. On ImageNet, the AA accuracy is boosted by 6.48 pp (39.78% \(\to\) 46.26%) through robustified ResNet-101, and 6.94 pp (42.00% \(\to\) 48.94%) through robustified WRN-101-2. Our findings unify prior works' diverging opinions on how architectural components affect robustness and highlight the benefits of exploring intrinsically robust architectural components.
## 2 Related Work
**Adversarial training (AT).** AT is an effective approach to defending against adversarial attacks [14]. Madry _et al_. [14] formulated SAT as a min-max optimization framework. Given dataset samples \((x_{i},y_{i})\), network \(f_{\theta}\) and loss function \(\mathcal{L}\), the optimization is formulated as:
\[\operatorname*{argmin}_{\theta}\mathbb{E}_{(x_{i},y_{i})\sim\mathbb{D}}\left[\max_{\|x^{\prime}-x_{i}\|_{\infty}\leq\epsilon}\mathcal{L}\left(f_{\theta},x^{\prime},y_{i}\right)\right], \tag{1}\]
The inner adversarial example \(x^{\prime}\) aims to find the perturbation of a given data point \(x\) that achieves a high loss and is generated on the fly during the training process. Since then,
multiple variants of SAT have been proposed [1, 2, 1, 2, 3]. Our architectural research is complementary to these works on AT methods. We train with diverse AT methods, _e.g._, SAT [2], Fast-AT [1, 2], TRADES [2, 3], MART [3], and diffusion-augmented AT [3], to verify that our design principles unanimously improve robustness agnostic to the training recipe.
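To make the inner maximization in Eq. (1) concrete, the following is a minimal PyTorch sketch (ours, not the authors' released code) of an \(\ell_{\infty}\) PGD attack loop; `model` is any classifier and images are assumed to lie in \([0,1]\):

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=4/255, alpha=1/255, steps=10):
    """l_inf PGD: approximate argmax_{||x'-x||_inf <= eps} L(f(x'), y)."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # ascend the loss, then project back into the eps-ball around x
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()
```

During SAT, the outer minimization then updates \(\theta\) on the returned adversarial examples \(x^{\prime}\) rather than on the clean inputs.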
**Robust Architectures.** Here we provide a brief overview of related research on robust architectures; the extended version is in Sec. A of the supplementary materials. Our key advancement over existing works is a suite of three robust architectural design principles verified on diverse dataset scales, AT methods, model parameters, and design spaces. A few studies have examined the impact of architectural designs on adversarial robustness [1, 2, 3, 4, 5, 6, 7]. For macro network design, Huang and Huang led the exploration of the correlations between robustness improvement and scaling depth and width, but Mok [3] suggested such a relationship was unclear. For micro block design, smooth [1, 2] and parameterized [2] activation functions can largely improve robustness on CIFAR, but Huang found the performance was dependent on AT settings. There has been no clear consensus on how architectural components affect adversarial robustness. More importantly, most conclusions are drawn on CIFAR with WRN's basic block design, and it was unclear whether such existing robust architectures generalize to large-scale datasets or other network design spaces. Our work provides conclusive evidence that unifies prior works' diverging opinions on how architectural components affect robustness.
## 3 Preliminaries
This section describes the setups for comparing CNNs and Transformers in terms of architectural design space, training techniques, and adversarial attacks.
**Architectural Design Skeleton.** Fig. 1 provides the CNN skeleton that supports modifications of different architectural components in our study. Specifically, the skeleton consists of a stem stage, \(n\) body stages, and a classification head. A typical body stage has multiple residual blocks, and the block type is either basic (two \(3\times 3\) convolutions) or bottleneck (\(1\times 1\), \(3\times 3\), and \(1\times 1\) convolutions). For stage \(i\), denote the _depth_ \(D_{i}\) as the number of blocks, and the _width_ \(W_{i}\) as the number of channels in the \(3\times 3\) convolution. The downsampling factor is \(1\) in the first block of stage \(1\), and \(2\) in the first block of stages \(2\) to \(n\). Unless otherwise specified, the default operations in each block are rectified linear unit (ReLU) and batch normalization (BN).
**Training Techniques.** We use five AT recipes: Fast-AT [1, 2], SAT [2], TRADES [2, 3], MART [1, 3], and diffusion-augmented AT [1, 3]. The architectural exploration is conducted on ImageNet [1, 2], and we apply the same training method so that the performance difference between models can only be attributed to the difference in architectures. Due to the size of ImageNet and the slow speed of AT, we use Fast-AT [1, 2] with a cyclic learning rate [1, 3] and mixed-precision arithmetic [2]. After finalizing the architectural design principles, we train all models with diverse AT methods on CIFAR-10, CIFAR-100 [2] and ImageNet [1] to verify that our designs can consistently improve robustness regardless of the training recipe.
**Adversarial Attacks.** Projected gradient descent (PGD) [2] and AA [1] are used to evaluate adversarial robustness. PGD is a white-box attack with complete access to network architectures and parameters. For PGD, we provide a comprehensive investigation on the full ImageNet validation set with attack budgets \(\epsilon\in\{2,4,8\}/255\) and max steps \(i=\{10,50,100\}\), denoted as PGD\({}^{i}\)-\(\epsilon\). AA is an ensemble of one black-box and three white-box attacks. The attack budget is \(\epsilon=4/255\) on ImageNet [1], and \(\epsilon=8/255\) on CIFAR-10 and CIFAR-100 [1, 2] for AA. All attacks are \(\ell_{\infty}\) bounded. When exploring individual architectural
components, we use 10-step PGD (PGD\({}^{10}\)-\(\epsilon\)) for fast evaluations.
## 4 Robust Architectural Design Principles
Our strategy to explore the four robust architectural components is through comparing SOTA CNNs and Transformers. These components span a network's macro and micro designs: depth and width (Sec. 4.1), stem stage (Sec. 4.2), squeeze and excitation block (Sec. 4.3.1), and activation (Sec. 4.3.2), as shown in Fig. 1. To exclude the effect of model complexity and provide a fair comparison, we focus our investigation on the regime of ResNet-50 with \(\sim\)26M parameters in this section. In the next section (Sec. 5), we verify the generalization of these principles through extensive experiments over the wide spectrum of dataset scales, AT methods, parameter budgets, and network design spaces.
### Optimal Range for Depth and Width Configurations
Macro network design involves the distribution of depth and width in each stage. Shifted windows Transformer (Swin Transformer) regulates the stage compute ratio as \(1:1:3:1\) or \(1:1:9:1\). On the CNN side, ConvNeXt reshuffles depths in all stages according to Swin Transformer and finds this design improves clean accuracy. RegNet introduces a linear parameterization to assign network depth and width. As network depth and width are competing for resources when the parameter budget is fixed, it is important to study the impact of depth and width on adversarial robustness. We draw inspiration from prior robustness research to develop a more generalized scaling rule. Huang _et al_. found reducing depth or width at the last stage of WRN reduces the Lipschitz constant, thus improving robustness. Huang _et al_. proposed a fixed scaling ratio for WRN on CIFAR-10.
Existing explorations are constrained to 3-stage networks, whereas 4-stage residual networks are more commonly used on ImageNet. Thus, instead of studying a fixed depth and width configuration in each stage, we aim to provide a flexible compound scaling rule of the relationship between robustness and total depths and widths. Define the width-depth (WD) ratio of a \(n\)-stage network as the average of comparing \(W_{i}\) to \(D_{i}\) in stage \(i\):
\[\text{WD ratio}=\frac{1}{n-1}\sum_{i=1}^{n-1}\frac{W_{i}}{D_{i}} \tag{2}\]
The last stage is excluded since reducing its capacity improves robustness. The WD ratio has three parameters: total stages \(n\), depth \(D_{i}\), and width \(W_{i}\). To obtain valid models, we randomly sample from \(n\in\{3,4,5,6\}\), \(D_{i}\leq 60\), and \(W_{i}\leq 1000\). Fig. 2(a)-left shows the AT results of all samples. All clean and PGD accuracies show negative correlations with the WD ratio. Note that when the WD ratio is close to 0, the AT is unstable and also leads to inferior robustness. We intersect the WD ratios of the top 10% of networks from each attack budget and find the optimal range of the WD ratio is \([7.5,13.5]\). We use the error distribution function (EDF) (Fig. 2(a)-right) to provide the characteristics of models from within and outside of the optimal range. A marked robustness gain is achieved by simply re-distributing depths and widths to satisfy the optimal range. Compared to ResNet-50's WD ratio of 32, a robust network has a lower WD ratio, which echoes previous research that deep and narrow networks are better than shallow and wide networks. Furthermore, our WD ratio is not limited to 3-stage networks, as ResNet-50 shown here already has 4 stages. Sec. 5 shows generalization to other networks.
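For reference, a short sketch (ours) of Eq. (2); the ResNet-50 configuration below uses the \(3\times 3\)-convolution widths of its bottleneck blocks:

```python
def wd_ratio(widths, depths):
    """Eq. (2): average W_i / D_i over all stages except the last."""
    pairs = list(zip(widths, depths))[:-1]  # the last stage is excluded
    return sum(w / d for w, d in pairs) / len(pairs)

# ResNet-50: 4 stages, depths (3, 4, 6, 3), 3x3-conv widths (64, 128, 256, 512)
print(wd_ratio((64, 128, 256, 512), (3, 4, 6, 3)))  # 32.0, outside [7.5, 13.5]
```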
### Convolutional over _Patchify_ stem stage
As a preprocessor, a common stem stage "aggressively" downsamples input images due to the redundancy inherent in natural images. The stem stage in ResNet-50 [] consists of a stride-two \(7\times 7\) convolution and a stride-two max-pooling. Built on ResNet-50, RegNet [] replaces the max-pooling with a stride-two residual shortcut in the first block of the first body stage, dubbed _postponed downsampling_. On the Transformer side, the stem stage patchifies the image into \(p\times p\) non-overlapping patches. It is implemented by a stride-\(p\)\(p\times p\) convolution, with \(p=14/16\) in vision Transformer (ViT) [], and \(p=4\) in Swin Transformer []. Both _patchify stem_ and ResNet-style _convolutional stem_ are applicable to CNNs and Transformers. As a pure CNN, ConvNeXt [] borrowed the patchify stem from Transformer, and Xiao _et al_. [] successfully employed the convolutional stem in ViT.
Inspired by these findings, we compare how patchify and convolutional stems affect adversarial robustness. We set \(p=4\) following Swin Transformer and ConvNeXt for the patchify stem (Patch 4, Stride 4) and use postponed downsampling for the convolutional stem. From the results in Fig. 2(b), we observe both designs show better robustness than the baseline ResNet-50, but postponed downsampling is significantly better than the patchify stem. There are two differences between these two designs: a smaller stride and the overlapping region between two convolution kernels in the convolutional stem. We verify whether these two distinctions are beneficial to robustness. First, we reduce the patch size from \(4\times 4\) to \(2\times 2\), and add the stride-two residual shortcut as in postponed downsampling to maintain the total downsampling ratio. This modification (Patch 2, Stride 2) improves PGD\({}^{10}\)-4 and clean accuracies of the \(4\times 4\) patch by 0.23 and 0.98 pp. Next, we gradually increase the overlapping area between the \(4\times 4\) patches by decreasing the stride from 3 to 1. We observe a consistent increment while decreasing the stride, and the \(4\times 4\) patch with the largest overlapping between neighboring patches (Patch 4, Stride 1) performs almost on par with the postponed downsampling. Finally, we additionally experiment on different output channel widths since it barely changes
Figure 2: **(a) Clean and PGD accuracies are negatively correlated with the WD ratio. Each dot is a configuration of #stage, depth, and width. Intersecting each attack budget’s top 10% most accurate configurations, we find the optimal range of the WD ratio is \([7.5,13.5]\) (with color backgrounds) and verify the significance by the EDFs of models within range (steeper dark lines) and outside of range (gentle light lines). (b) Performance of different configurations in stem stage and residual blocks. Differences are compared with baseline ResNet-50. All models trained with Fast-AT and evaluated on full ImageNet validation set. Different PGD attack budgets show a similar accuracy trend, and the full results are shown in supplementary material Sec. B.1.
total parameters. Decreasing the width from 64 (ResNet-50) to 32 lowers the accuracy due to fewer model parameters. However, increasing the width from 64 to 96 boosts PGD\({}^{10}\)-4 and clean accuracies by 1.47 and 1.24 pp with a negligible 0.01M increase in total parameters. In summary, the convolutional stem with the postponed downsampling design outperforms the patchify stem due to its less aggressive stride-two downsampling and overlapped convolution kernels. Besides, widening the output channels significantly improves robustness almost at no cost.
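To summarize the two designs in code, a minimal PyTorch sketch (ours; the width of 96 follows the text, while the normalization and activation choices are illustrative):

```python
import torch.nn as nn

# Patchify stem (Swin/ConvNeXt style): non-overlapping p x p patches, p = 4
patchify_stem = nn.Conv2d(3, 96, kernel_size=4, stride=4)

# Convolutional stem with postponed downsampling (RegNet style): a single
# stride-two 7x7 overlapped convolution; the second stride-two downsampling
# is deferred to the residual shortcut in the first block of the first stage.
conv_stem = nn.Sequential(
    nn.Conv2d(3, 96, kernel_size=7, stride=2, padding=3, bias=False),
    nn.BatchNorm2d(96),
    nn.ReLU(inplace=True),
)
```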
### Robust Residual Block Design
#### 4.3.1 Squeeze & Excitation
Squeeze and excitation (SE) block is a simple but effective add-on for the original residual block. It was proposed to adaptively recalibrate channel-wise feature responses by modeling interdependencies between channels. RegNet also verifies the effectiveness of SE on improving clean accuracy. However, Huang found that a straightforward application of SE hurts robustness on CIFAR-10, and added an extra skip connection around the SE. Driven by the discrepancy in clean and adversarial accuracies, we examine the SE block with ResNet-50. Note that the SE block will introduce additional parameters. Thus, we follow RegNet in setting \(r=4\) and appending the SE block directly after the \(3\times 3\) convolutional layer, since it possesses fewer channels. Our experiments on ImageNet show a different picture, where the original SE block markedly increases all PGD and clean accuracies compared to ResNet-50 (Fig. 2b). We hypothesize that the different effects on CIFAR and ImageNet are caused by the reduction ratio \(r\), since Huang only tested \(r=16\). To provide a fair comparison, we train WRN with a sweep of the hyperparameter \(r=\{2,4,8,16,32,64\}\). Our experiments show that adversarial robustness is negatively correlated with \(r\), and when \(r\geq 32\), the accuracy is inferior to the baseline WRN. More details are in supplementary material Sec. B.2. Therefore, the SE block is a unified architectural component that improves both clean and adversarial accuracies, and we set \(r=4\) in all designs that include the SE block.
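For reference, a minimal PyTorch sketch (ours) of the SE block as used here, appended after the \(3\times 3\) convolution with reduction ratio \(r=4\):

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-excitation with reduction ratio r (r = 4 in our designs)."""
    def __init__(self, channels: int, r: int = 4):
        super().__init__()
        self.fc1 = nn.Linear(channels, channels // r)
        self.fc2 = nn.Linear(channels // r, channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        s = x.mean(dim=(2, 3))                        # squeeze: global average pool
        s = torch.sigmoid(self.fc2(torch.relu(self.fc1(s))))
        return x * s[:, :, None, None]                # excite: per-channel rescale
```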
#### 4.3.2 Non-parametric Smooth Activation Functions
Both the _number_ and the _function_ of activation layers are different between natural language processing (NLP) and vision architectures. In terms of the number, Transformer only has one activation in the multilayer perceptron (MLP) block. In comparison, the activation is normally appended to all convolutions in a CNN block. ConvNeXt first observed this phenomenon and reduced the total activations from three to one in all blocks, which leads to higher clean accuracy. Inspired by ConvNeXt, we combinatorially analyzed all possible locations of the activation in a single block. The average PGD\({}^{10}\)-4 accuracies are 30.43%, 29.00%, and 24.84% when the total number of activations is set to 3, 2, and 1, respectively, which are all inferior to ResNet-50. Thus, reducing activation layers is not beneficial to adversarial robustness, and we preserve the activation along with all convolutions.
In terms of the activation function, it is common practice to use ReLU in CNNs due to its efficiency and simplicity. However, advanced Transformers, _e.g._, BERT, GPT-2, and hierarchical Transformers, adopted smoother variants of ReLU, _e.g._, the Gaussian error linear unit (GELU) and the sigmoid linear unit (SiLU). Recent research shows replacing ReLU with its smooth approximations facilitates better gradient updates and leads to higher robustness. For example, solely replacing all ReLUs with GELUs significantly improves the PGD accuracy of a standard ResNet-50. Dai proposed parametric
activation functions by adding learnable parameters to non-parametric versions, which showed mixed performance on CIFAR-10 for different design spaces. Compared to SiLU, parametric SiLU (PSiLU) and parametric shifted SiLU (PSSiLU) show higher robustness when applied to WRN-28-10, but show degraded performance when applied to ResNet-18. Therefore, we examine and compare the robustness gain brought by GELU, SiLU, parametric ReLU (PReLU), PSiLU, and PSSiLU. We find that: (1) non-parametric smooth activations, _e.g_., SiLU and GELU, yield significantly higher robustness than ReLU (ResNet-50); and (2) for parametric activations, PReLU performs on par with ReLU, and both PSiLU and PSSiLU are inferior to SiLU. Synthesizing all the above findings, we recommend non-parametric smooth activations due to consistency and simplicity, thus replacing all ReLUs with SiLUs.
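A minimal utility sketch (ours) for applying this recommendation to an existing PyTorch model:

```python
import torch.nn as nn

def relu_to_silu(module: nn.Module) -> nn.Module:
    """Recursively replace every ReLU with the smooth, non-parametric SiLU."""
    for name, child in module.named_children():
        if isinstance(child, nn.ReLU):
            setattr(module, name, nn.SiLU(inplace=True))
        else:
            relu_to_silu(child)
    return module

# e.g., model = relu_to_silu(torchvision.models.resnet50())
```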
## 5 Adversarial Robustness Evaluation
We have completed the exploration of how individual architectural components affect adversarial robustness. It is encouraging to uncover how these components affect robustness, but it is not completely convincing unless the following two questions are addressed: 1) _Can these components consistently improve robustness when grouped together?_ 2) _Are these design principles generalizable?_ Sec. 5.1 provides a roadmap that robustifies a CNN with all three design principles, dubbed Robust architecture (Ra). For a model robustified with our design principles, we tag it with the prefix "Ra" (_e.g._, Ra ResNet-50). Secs. 5.2 and 5.3 then verify the generalization of the principles on CIFAR-10, CIFAR-100, and ImageNet.

### Roadmap to Robustify a CNN

Starting from ResNet-50, we apply the three principles in turn. Following principle (a), we re-distribute the stage depths and widths to
\(D_{1,2,3,4}=5,8,13,1\) and \(W_{1,2,3,4}=36,72,140,270\) to control the WD ratio within the optimal range. The convolutional stem stage incorporates the postponed downsampling operation and widens the output channels to 96. Finally, we append the SE block (\(r=4\)) to the \(3\times 3\) convolution layer and replace ReLU with SiLU. The clean and PGD accuracies of Ra ResNet-50 consistently improve for each modification, which verifies that our design principles work well both individually and collectively.
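As a quick arithmetic check (ours), the configuration above gives, via Eq. (2) with the last stage excluded,

\[\text{WD ratio}=\frac{1}{3}\left(\frac{36}{5}+\frac{72}{8}+\frac{140}{13}\right)\approx 8.99\in[7.5,13.5].\]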
### Evaluations on CIFAR-10 & CIFAR-100
We apply all three principles to models within the design spaces of ResNet and WRN. Detailed specifications of each robustified architecture are in Sec. E of the supplementary materials. Table 2 presents the comprehensive evaluation results on CIFAR-10 and CIFAR-100 against AA and 20-step PGD (PGD\({}^{20}\)) attacks with the same maximum perturbation \(\ell_{\infty},\epsilon=8/255\). The training methods are SAT, TRADES, MART, and diffusion-augmented AT. The diffusion-augmented AT is the SOTA training recipe proposed by Wang _et al._, which augments the original CIFAR data only with images generated by an elucidating diffusion model (EDM), so that no external datasets are needed. Since this training recipe incurs extreme computational costs, we train on the 1M generated dataset using batch size 512 for 400 epochs, abbreviated as "Diff. 1M" in Table 2. In general, the gains are consistent across AT methods, parameter budgets, and design spaces on CIFAR-10 and CIFAR-100. Importantly, our design principles augment the improvements in network robustness achieved through better training schemes, boosting even the robustness of the
\begin{table}
\begin{tabular}{l l l|l l l|l l l} \hline \multirow{2}{*}{\#Param. Method} & \multirow{2}{*}{Model} & \multicolumn{3}{c}{CIFAR-10} & \multicolumn{3}{c}{CIFAR-100} \\ & & & Clean (\%) & AA (\%) & PGD\({}^{20}\) (\%) & Clean (\%) & AA (\%) & PGD\({}^{20}\) (\%) \\ \hline \multirow{4}{*}{26M} & \multirow{2}{*}{SAT} & ResNet-50 & 84.05 & 49.97 & 54.37 & 55.86 & 23.78 & 27.48 \\ & & ResNet-50 & **84.91** & 0.86 & **50.94** & 0.97 & **55.19** & 0.82 & **56.38** & 0.52 & **24.99** & +1.21 & **28.84** & +1.36 \\ \cline{2-10} & \multirow{2}{*}{TRADES} & ResNet-50 & 82.26 & 49.91 & 54.50 & 56.00 & 25.05 & 29.91 & & \\ & & ResNet-50 & **82.80** & 0.54 & **51.23** & **55.44** & 0.94 & **56.29** & 0.29 & **25.83** & +0.78 & **31.87** & +1.96 \\ \cline{2-10} & \multirow{2}{*}{MART} & ResNet-50 & 77.98 & 47.17 & 52.70 & 51.18 & 25.35 & 30.79 & & \\ & & ResNet-50 & **79.60** & +1.62 & **49.19** & +2.02 & **56.47** & +3.77 & **53.68** & +0.50 & **26.97** & +1.62 & **32.81** & +2.02 \\ \hline \multirow{4}{*}{37M} & \multirow{2}{*}{SAT} & WRN-28-10 & 85.44 & 48.45 & 53.13 & **60.49** & 23.64 & 27.47 & & & & \\ & & WRN-28-10 & **85.52** & +0.08 & **51.96** & +3.51 & **56.22** & +3.09 & 59.09 & +1.40 & **25.14** & +1.50 & **29.27** & +1.80 \\ \cline{2-10} & \multirow{2}{*}{TRADES} & WRN-28-10 & **83.86** & 51.79 & 55.69 & 55.51 & 25.47 & 25.47 & & & & & \\ & & WRN-28-10 & 83.29 & -0.57 & **52.10** & +0.31 & **56.31** & +0.62 & **53.88** & +0.71 & **25.68** & +0.21 & **29.41** & +0.07 \\ \cline{2-10} & \multirow{2}{*}{MART} & WRN-28-10 & 82.83 & 50.30 & 57.00 & 51.31 & 25.78 & 30.06 & & & & & \\ & & WRN-28-10 & **82.85** & +0.02 & **59.81** & +0.51 & **57.38** & +0.35 & **51.61** & +0.30 & **26.11** & +0.33 & **30.82** & +0.76 \\ \cline{2-10} & \multirow{2}{*}{Diff. 1M} & WRN-28-10 & 90.61 & 61.66 & 66.63 & 67.26 & 34.26 & 39.29 & & & & & \\ & & WRN-28-10 & **91.32** & +0.71 & **64.11** & +3.45 & **68.93** & +2.50 & **69.03** & +1.77 & **32.4** & +2.98 & **41.59** & +2.30 \\ \hline \multirow{4}{*}{67M} & \multirow{2}{*}{SAT} & WRN-34-12 & 85.92 & 49.35 & 53.05 & 59.08 & 23.69 & 27.05 & & & & & \\ & & WRN-34-12 & **86.50** & +0.58 & **51.78** & +2.43 & **56.04** & +2.99 & **59.46** & +0.38 & **25.18** & +1.49 & **29.49** & +2.44 \\ \cline{2-10} & \multirow{2}{*}{Diff. 1M} & WRN-34-12 & 91.11 & 62.83 & 67.53 & 68.40 & 35.67 & 40.33 & & & & & \\ & & WRN-34-12 & **91.75** & +0.64 & **65.71** & +2.88 & **69.67** & +2.14 & **69.75** & +1.35 & **37.73** & +2.06 & **42.16** & +1.83 \\ \hline \multirow{4}{*}{267M} & \multirow{2}{*}{SAT} & WRN-70-16 & 86.26 & 50.19 & 53.74 & 60.26 & 23.99 & & & & & \\ & & WRN-70-16 & **86.72** & +0.46 & **52.13** & +1.94 & **56.49** & +2.75 & & & & & \\ \cline{2-10} & \multirow{2}{*}{Diff. 1M} & WRN-70-16 & 91.82 & 65.02 & 69.10 & 70.10 & 37.77 & 41.95 & & & & & \\ \cline{2-10} & & WRN-70-16 & **92.16** & +0.34 & **66.33** & +1.31 & **70.37** & +1.27 & **70.25** & +0.15 & **38.73** & +0.96 & **42.61** & +0.66 \\ \cline{2-10} & \multirow{2}{*}{Diff. 50M} & WRN-70-16 & 93.25 & 70.69 & 73.89 & & & & & & & \\ \cline{2-10} & & WRN-70-16 & **93.27** & +0.02 & **71.07** & +0.38 & **75.28** & +1.39 & & & & & \\ \hline \end{tabular}
\end{table}
Table 2: Adversarial robustness on CIFAR-10 and CIFAR-100 against AA and 20-step PGD (PGD\({}^{20}\)) with the same maximum perturbation \(\ell_{\infty},\epsilon=8/255\). Applying our principles leads to a consistent 1-3 pp robustness gain across AT methods, parameter budgets, and design spaces, boosting even the SOTA “Diff. 1M” and “Diff. 50M” AT methods proposed by Wang _et al._[]. Sec. C in supplementary materials provides a systematic comparison with Transformers and neural architecture search (NAS)-based architectures.
SOTA Diff. 1M method by 1-3 pp. Our best model, Ra WRN-70-16, achieves 66.33% and 38.73% AA accuracy on CIFAR-10 and CIFAR-100, respectively. Due to limited computational resources, we only train "Diff. 50M" on CIFAR-10. Our Ra WRN-70-16 consistently outperforms WRN-70-16 and achieves the best performance on RobustBench [11].
### Evaluations on ImageNet
Similarly, we apply all three principles to models within the design spaces of ResNet and WRN. The training methods are Fast-AT [10] and SAT [12]. Table 3 presents the comprehensive SAT evaluation results on ImageNet against AA and 100-step PGD (PGD\({}^{100}\)). The maximum perturbation for AA is \(\ell_{\infty},\epsilon=4/255\), and for PGD \(\epsilon\in\{2,4,8\}/255\). We observe a consistent 4-9 pp robustness gain across different model parameters and design spaces. Specifically, our Ra WRN-101-2, even with fewer parameters than WRN-101-2, improves the AA accuracy by 6.94 pp when trained from scratch with SAT.
## 6 Conclusion
We synthesize a suite of three generalizable robust architectural design principles: (a) optimal range for _depth_ and _width_ configurations, (b) preferring _convolutional_ over _patchify_ stem stage, and (c) robust residual block design through adopting squeeze and excitation blocks and non-parametric smooth activation functions. Through extensive experiments across a wide spectrum of _dataset scales_, _adversarial training methods_, _model parameters_, and _network design spaces_, our principles consistently and markedly improve adversarial robustness on CIFAR-10, CIFAR-100, and ImageNet.
## 7 Acknowledgement
This work was supported in part by the Defense Advanced Research Projects Agency (DARPA). Use, duplication, or disclosure is subject to the restrictions as stated in Agreement number HR00112030001 between the Government and the Performer. This work was also supported in part by gifts from Avast, Fiddler Labs, Bosch, Facebook, Intel, NVIDIA, Google, Symantec, and Amazon.
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline Model & \#Param. & Clean (\%) & AA (\%) & PGD\({}^{100}\)-2 (\%) & PGD\({}^{100}\)-4 (\%) & PGD\({}^{100}\)-8 (\%) \\ \hline ResNet-50 & 26M & 63.87 & 34.96 & 52.15 & 38.96 & 15.83 \\ Ra ResNet-50 & 26M & **70.17**\(+\)6.30 & **44.14**\(+\)9.18 & **60.06**\(+\)7.91 & **47.77**\(+\)8.81 & **21.77**\(+\)5.94 \\ \hline ResNet-101 & 45M & 67.06 & 39.78 & 56.26 & 43.17 & 18.31 \\ Ra ResNet-101 & 46M & **71.88**\(+\)4.82 & **46.26**\(+\)6.48 & **61.89**\(+\)5.63 & **49.30**\(+\)6.13 & **23.01**\(+\)4.70 \\ \hline WRN-101-2 & 127M & 69.30 & 42.00 & 58.71 & 45.27 & 19.95 \\ Ra WRN-101-2 & 104M & **73.44**\(+\)4.14 & **48.94**\(+\)6.94 & **63.49**\(+\)4.78 & **51.03**\(+\)5.76 & **25.31**\(+\)5.36 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Adversarial robustness on ImageNet against AA and 100-step PGD (PGD\({}^{100}\)). The maximum perturbation for AA is \(\ell_{\infty},\epsilon=4/255\), and for PGD \(\epsilon\in\{2,4,8\}/255\). All models trained from random initializations with SAT [12]. We observe a consistent 4-9 pp robustness gain. Full comparisons including Transformers and Fast-AT results are in supplementary materials Sec. D. |
2310.02073 | The powerful class of groups | Pro-$p$ groups of finite powerful class are studied. We prove that these are
$p$-adic analytic, and further describe their structure when their powerful
class is small. It is also shown that there are only finitely many finite
$p$-groups of fixed coclass and powerful class. | Primoz Moravec | 2023-10-03T14:14:05Z | http://arxiv.org/abs/2310.02073v1 | # The powerful class of groups
###### Abstract.
Pro-\(p\) groups of finite powerful class are studied. We prove that these are \(p\)-adic analytic, and further describe their structure when their powerful class is small. It is also shown that there are only finitely many finite \(p\)-groups of fixed coclass and powerful class.
Key words and phrases:Powerful class, pro-\(p\) group, finite \(p\)-group 2020 Mathematics Subject Classification: 20E18, 20D15 ORCID: [https://orcid.org/0000-0001-8594-0699](https://orcid.org/0000-0001-8594-0699). The author acknowledges the financial support from the Slovenian Research Agency, research core funding No. P1-0222, and projects No. J1-3004, N1-0217, J1-2453, J1-1691. He would also like to thank the referee for simplifying some of the arguments.
Gonzalez-Sanchez [1] showed that torsion-free PF-groups are precisely the \(p\)-saturable groups. These groups naturally admit a Lie algebra structure that turns the group into a \(p\)-saturable Lie algebra. If \(G\) is a torsion-free finitely generated pro-\(p\) group of small powerful class, the above result shows that \(G\) is \(p\)-saturable. We show that the corresponding Lie algebra also has small powerful class. On the other hand, we exhibit an example showing that Kirillov's orbit method cannot be applied in general to derive the irreducible representations of a torsion-free pro-\(p\) group of small powerful class.
If \(G\) is a finite \(p\)-group of order \(p^{n}\) and \(c\) is its nilpotency class, then \(n-c\) is called the _coclass_ of \(G\). Coclass theory [1] works towards understanding the structure of finite \(p\)-groups according to coclass. We show:
**Theorem**.: _Given \(p\), \(r\) and \(k\), there are only finitely many finite \(p\)-groups of coclass \(r\) and powerful class at most \(k\)._
The proof uses Shalev's detailed description of the uniserial structure of large finite \(p\)-groups of given coclass, _cf._[1]. A similar method shows that there are only finitely many PF \(p\)-groups of fixed coclass.
## 2. Powerful class
A normal subgroup \(N\) of a finite \(p\)-group \(G\) is _powerfully embedded_ in \(G\) if \([N,G]\leq N^{p}\). Similarly, if \(G\) is a pro-\(p\) group and \(N\) a closed normal subgroup of \(G\), then \(N\) is powerfully embedded in \(G\) if \([N,G]\leq N^{p}\). Here \(N^{p}\) stands for the closure of the abstract group \(N^{p}\); we omit the closure operator throughout the text. It is easy to see that, in the pro-\(p\) setting, \(N\) is powerfully embedded in \(G\) if and only if \(NK/K\) is powerfully embedded in \(G/K\) for all open normal subgroups \(K\) of \(G\). If \(G\) is powerfully embedded in itself, we say that \(G\) is a _powerful group_. The definition is slightly different when \(p=2\), as the condition of being powerfully embedded is then stated as \([N,G]\leq N^{4}\); but since we always assume that \(p>2\), we will not need this.
Let \(G\) be a finite \(p\)-group. Denote by \(\eta(G)\) the largest powerfully embedded subgroup of \(G\). Note that \(\eta(G)\) is the product of all powerfully embedded subgroups of \(G\). Clearly, \(Z(G)\) is contained in \(\eta(G)\).
We recall the notion of powerful class introduced by Mann [14]. The _upper \(\eta\)-series of \(G\)_ is defined by \(\eta_{0}(G)=1\) and
\[\eta_{i+1}(G)/\eta_{i}(G)=\eta(G/\eta_{i}(G))\]
for \(i\geq 0\). The smallest \(k\) with \(\eta_{k}(G)=G\) is called the _powerful class_ of \(G\). We use the notation \(\operatorname{pwc}(G)=k\). Occasionally we use the shorthand notation \(\eta_{i}\) for \(\eta_{i}(G)\). An ascending series \(1=N_{0}\leq N_{1}\leq N_{2}\leq\cdots\) of normal subgroups of \(G\) is said to be an _\(\eta\)-series_ if \(N_{i+1}/N_{i}\) is powerfully embedded in \(G/N_{i}\) for all \(i\). The shortest length \(k\) of an \(\eta\)-series with \(N_{k}=N\) is called the _powerful height_ of \(N\). It is denoted by \(\operatorname{pwh}(N)\). A group \(G\) is said to have _small powerful class_ if \(\operatorname{pwc}(G)<p\). Similarly, a normal subgroup \(N\) of \(G\) has _small powerful height_ if \(\operatorname{pwh}(N)<p\).
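For example, if \(G\) is an extraspecial group of order \(p^{3}\) and exponent \(p\), then \(G^{p}=1\) while \([G,G]=Z(G)\neq 1\), so \(G\) is not powerful; on the other hand, \(Z(G)\leq\eta(G)\) and \(G/Z(G)\) is elementary abelian, hence \(G/\eta(G)\) is powerful, and therefore \(\operatorname{pwc}(G)=2\).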
It is easily seen that the upper \(\eta\)-series is the fastest growing \(\eta\)-series in a group:
**Proposition 2.1**.: _Let \(G\) be a finite \(p\)-group. Let \(1=N_{0}\leq N_{1}\leq\cdots\leq N_{k}=G\) be an \(\eta\)-series in \(G\). Then \(N_{i}\subseteq\eta_{i}(G)\)._
Proof.: The claim is true for \(i=0,1\). Suppose it holds for some \(i\geq 1\). The group \(N_{i+1}/N_{i}\) is powerfully embedded in \(G/N_{i}\). By induction assumption, we have \(N_{i}\subseteq\eta_{i}\). Therefore \(N_{i+1}\eta_{i}/\eta_{i}\) is powerfully embedded in \(G/\eta_{i}\). It follows from here that \(N_{i+1}\eta_{i}/\eta_{i}\subseteq\eta(G/\eta_{i})=\eta_{i+1}/\eta_{i}\). Hence we get \(N_{i+1}\subseteq\eta_{i+1}\), as required.
The notion of powerful class can be extended to the pro-\(p\) setting. We say that a pro-\(p\) group \(G\) has _finite powerful class_ if it has an \(\eta\)-series of closed subgroups of finite length that ends in \(G\). Given a pro-\(p\) group \(G\), define \(\eta(G)\) to be the product of all closed normal subgroups of \(G\) that are powerfully embedded in \(G\). Then \(\eta(G)\) is a closed subgroup of \(G\) containing all powerfully embedded subgroups of \(G\). The upper \(\eta\)-series of \(G\) can be defined as in the finite case. Then \(G\) has finite powerful class if and only if there exists \(k\) such that \(\eta_{k}(G)=G\). The smallest such \(k\) is the _powerful class_ of \(G\). If a pro-\(p\) group \(G\) has powerful class \(\leq k\), then it is an inverse limit of finite \(p\)-groups of powerful class \(\leq k\).
We first collect some properties of the upper \(\eta\)-series. These will be used throughout the text without further reference.
**Lemma 2.2**.: _let \(G\) be a finitely generated pro-\(p\) group. Then_
\[\eta(G)/\eta(G)^{p}=Z(G/\eta(G)^{p}).\]
Proof.: The claim follows from a more general formula
\[\eta_{k+1}/\eta_{k+1}^{p}\eta_{k}=Z(G/\eta_{k+1}^{p}\eta_{k}),\]
which holds for all \(k\geq 0\). Namely, denote \(N/\eta_{k+1}^{p}\eta_{k}=Z(G/\eta_{k+1}^{p}\eta_{k})\). As \([\eta_{k+1},G]\leq\eta_{k+1}^{p}\eta_{k}\) holds by definition, we have \(\eta_{k+1}\subseteq N\). Conversely, \([N,G]\leq\eta_{k+1}^{p}\eta_{k}\leq N^{p}\eta_{k}\) shows that \(N/\eta_{k}\) is powerfully embedded in \(G/\eta_{k}\). Therefore, \(N/\eta_{k}\leq\eta(G/\eta_{k})=\eta_{k+1}/\eta_{k}\), which concludes the proof.
**Lemma 2.3**.: _Let \(G\) be a finitely generated pro-\(p\) group. Then the following hold:_
1. \(Z_{i}(G)\leq\eta_{i}(G)\) _for all_ \(i\geq 0\)_._
2. _If_ \(G\) _is a nilpotent group, then_ \(\operatorname{pwc}(G)\leq\operatorname{cl}(G)\)_, where_ \(\operatorname{cl}(G)\) _is the nilpotency class of_ \(G\)_._
3. \(\eta_{i}(G/\eta_{j}(G))=\eta_{i+j}(G)/\eta_{j}(G)\)_._
4. \(\operatorname{pwh}(\eta_{i}(G))\leq i\)_._
5. \(\operatorname{pwc}(\eta_{i}(G))\leq i\)_._
6. \([\eta_{i}(G),{}_{i}G]\leq\eta_{i}(G)^{p}\)_._
7. \(\eta_{i}(G/\eta(G)^{p})=\eta_{i}(G)/\eta(G)^{p}\) _for all_ \(i\geq 1\)_._
Proof.: Denote \(Z_{i}=Z_{i}(G)\). The property (1) obviously holds for \(i=0,1\). Suppose the assertion holds for some \(i\geq 1\). As \([Z_{i+1},G]\leq Z_{i}\leq\eta_{i}\), it follows that \(Z_{i+1}\eta_{i}/\eta_{i}\) is powerfully embedded in \(G/\eta_{i}\). Thus \(Z_{i+1}\eta_{i}/\eta_{i}\leq\eta(G/\eta_{i})=\eta_{i+1}/\eta_{i}\), and the assertion is proved for \(i+1\) as well. In particular, (2) follows directly from here.
We prove (3) by induction on \(i\). We have the required equality for \(i=0,1\), so we may assume it holds for some \(i\geq 1\). Denote \(N/\eta_{j}=\eta_{i+1}(G/\eta_{j})\). It
follows that \([N/\eta_{j},G/\eta_{j}]\leq(N/\eta_{j})^{p}\eta_{i}(G/\eta_{j})\). By induction assumption, this gives \([N,G]\leq N^{p}\eta_{i+j}\). This implies that \(N\eta_{i+j}/\eta_{i+j}\) is powerfully embedded in \(G/\eta_{i+j}\), therefore \(N\eta_{i+j}/\eta_{i+j}\leq\eta(G/\eta_{i+j})=\eta_{i+j+1}/\eta_{i+j}\). We conclude that \(N\leq\eta_{i+j+1}\). Conversely, we have that \([\eta_{i+j+1},G]\leq\eta_{i+j+1}^{p}\eta_{i+j}\) by definition. This can be restated as the fact that the quotient group \((\eta_{i+j+1}/\eta_{j})/(\eta_{i}(G/\eta_{j}))\) is powerfully embedded in \((G/\eta_{j})/\eta_{i}(G/\eta_{j})\). Therefore we have that \(\eta_{i+j+1}/\eta_{j}\) is contained in \(\eta_{i+1}(G/\eta_{j})=N/\eta_{j}\).
(4) is obvious by definition. To prove (5), we use induction on \(i\). We may assume that the inequality holds for \(i\geq 1\) and for all groups \(G\). Let \(P=\eta(\eta_{i+1})\). Then we obviously have that \(\eta(G)\) is contained in \(P\). Therefore
\[\operatorname{pwc}(\eta_{i+1}) =\operatorname{pwc}(\eta_{i+1}/P)+1\] \[\leq\operatorname{pwc}(\eta_{i+1}/\eta_{1})+1\] \[=\operatorname{pwc}(\eta_{i}(G/\eta_{1}))+1\leq i+1.\]
Note that (6) holds for \(i=0,1\). Assume it holds for \(i\geq 1\). Then \([\eta_{i+1},{}_{i+1}G]\leq[\eta_{i+1}^{p}\eta_{i},{}_{i}G]=[\eta_{i+1}^{p},{} _{i}G][\eta_{i},{}_{i}G]\leq\eta_{i+1}^{p}\eta_{i}^{p}=\eta_{i+1}^{p}\).
Let us prove (7). Denote \(N_{i}/\eta^{p}=\eta_{i}(G/\eta^{p})\) for \(i\geq 1\), and \(N_{0}=\eta^{p}\). We have that \([N_{i},G]\leq N_{i}^{p}N_{i-1}\eta^{p}=N_{i}^{p}N_{i-1}\) for all \(i\geq 1\). We proceed by induction. In the case when \(i=1\), notice that Lemma 2.2 gives \(\eta/\eta^{p}=Z(G/\eta^{p})\leq\eta(G/\eta^{p})\), therefore \(\eta\leq N_{1}\). Conversely, the fact that \([N_{1},G]\leq N_{1}^{p}\) shows that \(N_{1}\) is contained in \(\eta(G)\). Assume the claim holds for some \(i\geq 1\). First, we have that \([N_{i+1},G]\leq N_{i+1}^{p}N_{i}=N_{i+1}^{p}\eta_{i}\). Thus \(N_{i+1}\eta_{i}/\eta_{i}\) is powerfully embedded in \(G/\eta_{i}\). This shows that \(N_{i+1}\eta_{i}/\eta_{i}\) is contained in \(\eta(G/\eta_{i})=\eta_{i+1}/\eta_{i}\), therefore \(N_{i+1}\leq\eta_{i+1}\). On the other hand, \([\eta_{i+1}/\eta^{p},G/\eta^{p}]=[\eta_{i+1},G]\eta^{p}/\eta^{p}\leq\eta_{i+1}^{p}\eta_{i}/\eta^{p}=(\eta_{i+1}/\eta^{p})^{p}\eta_{i}(G/\eta^{p})\). This demonstrates that \(\eta_{i+1}/\eta^{p}\) is contained in \(\eta_{i+1}(G/\eta^{p})\), hence \(\eta_{i+1}\leq N_{i+1}\).
The next lemma gives some information on pro-\(p\) groups with powerful class two:
**Lemma 2.4**.: _Let \(G\) be a finitely generated pro-\(p\) group and suppose that \(G/\eta(G)\) is powerful. Then \(G/\eta(G)\) is elementary abelian._
Proof.: Denote \(Q=G/\eta(G)^{p}\). By Lemma 2.2 we conclude that \(Q/Z(Q)\) is powerful. From [21, Proposition 3.6] (the pro-\(p\) version has the same proof) it follows that \(Q^{p}Z(Q)\) is powerfully embedded in \(Q\). We quickly deduce that \(G^{p}\eta(G)\) is powerfully embedded in \(G\), therefore \(G^{p}\leq\eta(G)\). The quotient \(G/\eta(G)\) is thus a powerful group of exponent \(p\), hence it is abelian.
**Corollary 2.5**.: _Let \(G\) be a finitely generated pro-\(p\) group of powerful class \(k\). Then \(\eta_{k-1}(G)\) is open in \(G\)._
Proof.: Note that \(G/\eta_{k-1}\) is powerful. This implies that \((G/\eta_{k-2})/\eta(G/\eta_{k-2})\) is powerful. By Lemma 2.4 we have that \(\Phi(G/\eta_{k-2})\leq\eta(G/\eta_{k-2})\). Thus \(\Phi(G)\leq\eta_{k-1}\), and this concludes the proof.
We are ready to prove the first half of our first main result mentioned in the introduction:
**Proposition 2.6**.: _Let \(G\) be a finitely generated pro-\(p\) group of finite powerful class. Then \(G\) is \(p\)-adic analytic._
Proof.: Let \(\operatorname{pwc}(G)=k\). We prove the result by induction on \(k\). Clearly, the result holds true for \(k=0,1\). Assume it holds for groups of powerful class \(\leq k-1\). By Corollary 2.5, we have that \(\eta_{k-1}\) is open in \(G\), therefore it is finitely generated. As \(\operatorname{pwc}(\eta_{k-1})\leq k-1\), we have that \(\eta_{k-1}\) is \(p\)-adic analytic. Therefore \(G\) is \(p\)-adic analytic.
It is straightforward to see that if \(G\) is a finitely generated pro-\(p\) group with an open powerfully embedded subgroup, then \(G\) has finite powerful class. When \(G\) is nilpotent, the converse also holds:
**Proposition 2.7**.: _Let \(G\) be a finitely generated nilpotent pro-\(p\) group. Then \(\eta(G)\) is open in \(G\)._
Proof.: Note that \(Z(G/\eta(G)^{p})=\eta(G)/\eta(G)^{p}\) is a finite \(p\)-group, since \(\eta(G)\) is finitely generated. As \(G/\eta(G)^{p}\) is nilpotent, we get from here that it is finite [12, 5.2.22]. Thus the result follows.
_Example 2.8_.: Let \(n>2\) and let \(S\) be a Sylow pro-\(p\) subgroup of \(\operatorname{SL}_{n}(\mathbb{Z}_{p})\). Then \(S\) can be seen as an inverse limit of upper unitriangular groups \(\operatorname{UT}_{n}(\mathbb{Z}/p^{m}\mathbb{Z})\)[13, p. 35]. Thus \(S\) is nilpotent of class \(n-1\). It follows that \(\operatorname{pwc}(S)\leq n-1\) and \(\eta(S)\) is open in \(S\). One can verify that \(\eta(S)\) consists precisely of all those upper unitriangular matrices \((a_{ij})\) with \(a_{i,i+\ell}\in p^{n-\ell-1}\mathbb{Z}_{p}\).
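The \(n=3\) case can be checked numerically. Below is a small self-contained Python sketch (ours, plain integers mod \(p^{3}\)): for the subgroup \(N\) of unitriangular matrices with \(a_{12},a_{23}\in p\mathbb{Z}_{p}\), every commutator \([x,g]\) with \(g\in S\) has \(a_{12}=a_{23}=0\) and \(a_{13}\in p\mathbb{Z}_{p}\), consistent with \([N,S]\leq N^{p}\):

```python
import random

p, MOD = 5, 5 ** 3  # work in UT_3(Z / p^3 Z)

def unitri(x, y, z):
    return [[1, x, z], [0, 1, y], [0, 0, 1]]

def mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) % MOD
             for j in range(3)] for i in range(3)]

def inv(a):
    # explicit inverse of an upper unitriangular 3x3 matrix
    x, y, z = a[0][1], a[1][2], a[0][2]
    return unitri(-x % MOD, -y % MOD, (x * y - z) % MOD)

random.seed(0)
for _ in range(500):
    g = unitri(*(random.randrange(MOD) for _ in range(3)))
    x = unitri(p * random.randrange(MOD), p * random.randrange(MOD),
               random.randrange(MOD))                # generic element of N
    c = mul(mul(inv(x), inv(g)), mul(x, g))          # commutator [x, g]
    assert c[0][1] == 0 and c[1][2] == 0 and c[0][2] % p == 0
print("[N, S] <= N^p pattern verified on random samples")
```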
We end this section by mentioning the relationship with capability of groups. We say that a group \(G\) is _capable_ if there exists a group \(Q\) with \(Q/Z(Q)\cong G\). It is well known that non-trivial cyclic groups are not capable. Baer [1] classified finite abelian groups that are capable. We define a finite \(p\)-group \(G\) to be \(\eta\)_-capable_ if there exists a finite \(p\)-group \(P\) with \(P/\eta(P)\cong G\). Again, it is easy to see that a non-trivial cyclic group cannot be \(\eta\)-capable, see, for instance, [13, p. 45]. Note that if a finite \(p\)-group \(G\) is \(\eta\)-capable with \(P/\eta(P)\cong G\), then \((P/\eta(P)^{p})/Z(P/\eta(P)^{p})\cong G\) by Lemma 2.2. This shows that \(\eta\)-capability implies the usual capability. The converse does not hold. The group \(C_{p^{2}}\times C_{p^{2}}\) is capable by [1], yet Lemma 2.4 shows that it is not \(\eta\)-capable, as all abelian \(\eta\)-capable \(p\)-groups are elementary abelian.
## 3. Pro-\(p\) groups of small powerful class
Recall that a pro-\(p\) group \(G\) is said to have small powerful class if \(\operatorname{pwc}(G)<p\). Note that if a stronger condition \(\operatorname{pwc}(G)<p-1\) holds, then \(G\) satisfies the condition \(\gamma_{p-1}(G)\leq G^{p}\). Groups satisfying this property are called _potent_ and are thoroughly described by Gonzalez-Sanchez and Jaikin-Zapirain [13]. We are thus more or less only interested in the case \(\operatorname{pwc}(G)=p-1\).
Mann's results on finite \(p\)-groups of small powerful class are summarized below. One may verify that similar properties hold for pro-\(p\) groups of small powerful class and corresponding closed normal subgroups of small powerful height:
**Proposition 3.1** ([14]).: _Let \(G\) be a finite \(p\)-group._
1. _If_ \(G\) _has small powerful class, then_ \(G^{p}\) _is powerful, and_ \(G^{p}=\{x^{p}\mid x\in G\}\)_._
2. _If_ \(G=\langle a,b\rangle\) _has small powerful class and_ \(a^{p^{c}}=b^{p^{c}}=1\)_, then_ \(G^{p^{c}}=1\)_._
3. _If_ \(N\) _is a normal subgroup of_ \(G\) _with small powerful height, then_ \([N^{p^{k}},G]=[N,G]^{p^{k}}\leq[N,G^{p^{k}}]\)_._
Let \(G\) be a pro-\(p\) group. A closed normal subgroup \(N\) of \(G\) is _PF-embedded_ in \(G\) [10] if there exists a \(G\)-central series \(N=N_{1}\geq N_{2}\geq\cdots\) with trivial intersection \(\cap_{i\in\mathbb{N}}N_{i}\) and \([N_{i},{}_{p-1}G]\leq N_{i+1}^{p}\) for all \(i\). Such a series is called a _potent filtration_ of \(N\) in \(G\). We also say that \(G\) is a _PF-group_ if it is PF-embedded in itself. It is clear that \(N\) is PF-embedded in \(G\) if and only if \(NK/K\) is PF-embedded in \(G/K\) for all open normal subgroups \(K\) in \(G\).
**Proposition 3.2**.: _Let \(G\) be a finitely generated pro-\(p\) group and \(N\) a normal subgroup of \(G\). If \(N\) has small powerful height, it is PF-embedded in \(G\)._
Proof.: In the course of the proof, we use Proposition 3.1 (3) without further explicit reference. Let \(1=N_{0}\leq N_{1}\leq\cdots\leq N_{p-1}=N\) be an \(\eta\)-series of \(N\) in \(G\). Denote \(N_{j}=1\) for \(j<0\) and \(N_{\ell}=N\) for \(\ell\geq p\). Define
\[M_{1} =N,\] \[M_{i+1} =M_{i}^{p}N_{p-i-1}.\]
Note that all \(M_{i}\) have small powerful height [12, Lemma 2.5]. Induction shows that we have a descending series \(N=M_{1}\geq M_{2}\geq\cdots\). Let us first prove that this is a central series. Note that \([M_{1},G]=[N,G]=[N_{p-1},G]\leq N_{p-1}^{p}N_{p-2}=M_{2}\). Suppose that we have \([M_{i},G]\leq M_{i+1}\). Then \([M_{i+1},G]=[M_{i}^{p}N_{p-i-1},G]=[M_{i}^{p},G][N_{p-i-1},G]\leq[M_{i},G]^{p }N_{p-i-1}^{p}N_{p-i-2}=M_{i+2}\), as \(N_{p-i-1}^{p}\leq M_{i+1}^{p}\).
Now we show that \([M_{i},{}_{p-1}G]\leq M_{i+1}^{p}\). This holds for \(i=1\), as the fact that we have a central series implies that \([M_{1},{}_{p-1}G]\leq M_{p}=M_{p-1}^{p}N_{-1}=M_{p-1}^{p}\leq M_{2}^{p}\). For the induction step, we may assume that \(M_{i+2}^{p}=1\). From \([N_{p-i-1},G]\leq N_{p-i-1}^{p}N_{p-i-2}\) we readily obtain \([N_{p-i-1},G,G]\leq[N_{p-i-1},G]^{p}[N_{p-i-2},G]\leq(N_{p-i-1}^{p}N_{p-i-2})^{p}N_{p-i-2}^{p}N_{p-i-3}=N_{p-i-3}\). Induction on \(k\) shows that, under the above assumption, we have \([N_{p-i-1},{}_{k}G]\leq N_{p-i-k-1}\) for all \(k\geq 2\). Finally, note that the above implies \([M_{i+1},{}_{p-1}G]=[M_{i}^{p}N_{p-i-1},{}_{p-1}G]=[M_{i},{}_{p-1}G]^{p}[N_{p-i-1},{}_{p-1}G]\leq(M_{i+1}^{p})^{p}[N_{p-i-1},{}_{p-1}G]=[N_{p-i-1},{}_{p-1}G]\leq N_{-i}=1\), as required.
Since \(M_{p+k}=M_{p-1}^{p^{k+1}}\) for all \(k\geq 0\), we quickly conclude that the intersection of all \(M_{i}\) is trivial. This finishes the proof.
**Corollary 3.3**.: _Every finitely generated pro-\(p\) group of small powerful class is a PF-group._
Every torsion-free pro-\(p\) group of small powerful class is therefore \(p\)-saturable in the sense of Lazard [13]. The latter have a natural \(\mathbb{Z}_{p}\)-lattice structure, first discovered by Lazard (_op. cit._) and further developed by Gonzalez-Sanchez [10]. If \(G\) is a \(p\)-saturable group, then the following operations
turn it into a \(p\)-saturable Lie algebra \(\mathcal{G}=G\):
\[x+y =\lim_{n\to\infty}(x^{p^{n}}y^{p^{n}})^{p^{-n}},\] \[\lambda x =x^{\lambda},\] \[[x,y]_{\mathrm{Lie}} =\lim_{n\to\infty}[x^{p^{n}},y^{p^{n}}]^{p^{-2n}}.\]
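As a quick sanity check (our illustration, not from the original): if \(G\) is abelian, then

\[x+y=\lim_{n\to\infty}\big((xy)^{p^{n}}\big)^{p^{-n}}=xy,\qquad[x,y]_{\mathrm{Lie}}=\lim_{n\to\infty}1^{p^{-2n}}=1,\]

so the Lie algebra structure reduces to the \(\mathbb{Z}_{p}\)-module structure of \(G\).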
Conversely, every \(p\)-saturable Lie algebra becomes a \(p\)-saturable group with multiplication given via the Baker-Campbell-Hausdorff formula
\[\Phi(x,y)=\log(\exp x\cdot\exp y)=x+y+\sum_{i=2}^{\infty}u_{i}(x,y),\]
where \(u_{i}(x,y)\) are Lie polynomials in \(x\) and \(y\) of degree \(i\) with coefficients in \(\mathbb{Q}\), see [14, Theorem 6.28] for further details.
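For the reader's convenience, the standard low-degree terms of the series (added here for reference) are

\[\Phi(x,y)=x+y+\tfrac{1}{2}[x,y]+\tfrac{1}{12}[x,[x,y]]+\tfrac{1}{12}[y,[y,x]]+\cdots.\]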
If \(\mathcal{L}\) is a \(\mathbb{Z}_{p}\)-Lie algebra, then a subalgebra \(\mathcal{K}\) is _powerfully embedded_ in \(\mathcal{L}\) if \([\mathcal{K},\mathcal{L}]_{\mathrm{Lie}}\leq p\mathcal{K}\). Analogously, one extends the notion of PF-embedded subgroups to PF-embedded Lie subalgebras [13]. Furthermore, we can define the powerful class for \(\mathbb{Z}_{p}\)-Lie algebras as follows. A series \(0=\mathcal{L}_{0}\leq\mathcal{L}_{1}\leq\cdots\) of ideals of a \(\mathbb{Z}_{p}\)-Lie algebra \(\mathcal{L}\) is an \(\eta\)_-series_ if \(\mathcal{L}_{i+1}/\mathcal{L}_{i}\) is powerfully embedded in \(\mathcal{L}/\mathcal{L}_{i}\) for all \(i\). If there is an \(\eta\)-series of \(\mathcal{L}\) that reaches \(\mathcal{L}\) in finitely many steps, we say that \(\mathcal{L}\) has _finite powerful class_. In this case, the length of the shortest \(\eta\)-series of \(\mathcal{L}\) is called the _powerful class_ \(\operatorname{pwc}(\mathcal{L})\) of \(\mathcal{L}\). Denote by \(\eta(\mathcal{L})\) the sum of all powerfully embedded ideals in \(\mathcal{L}\). Then we can define the upper \(\eta\)-series of a Lie algebra exactly as in the group case. It is also clear that the upper \(\eta\)-series is the fastest growing \(\eta\)-series of the Lie algebra \(\mathcal{L}\).
**Corollary 3.4**.: _A finitely generated torsion-free pro-\(p\) group \(G\) has small powerful class if and only if the corresponding Lie algebra \(\mathcal{G}\) has small powerful class. In this case, \(\operatorname{pwc}(G)=\operatorname{pwc}(\mathcal{G})\) and \(\eta_{i}(G)=\eta_{i}(\mathcal{G})\) for all \(i\geq 0\)._
Proof.: Suppose \(G\) has small powerful class \(k\). The group \(G\) is \(p\)-saturable, therefore the corresponding Lie algebra \(\mathcal{G}\) is \(p\)-saturable [13, Theorem 4.2]. Let \(1=N_{0}\leq N_{1}\leq\cdots\leq N_{k}=G\) be an \(\eta\)-series of \(G\) with \(k<p\). Then all subgroups \(N_{i}\) are PF-embedded in \(G\) by Proposition 3.2. By [13, Theorem 4.5], we have a corresponding series of PF-embedded ideals of \(\mathcal{G}\) given as \(0=\mathcal{N}_{0}\leq\mathcal{N}_{1}\leq\cdots\leq\mathcal{N}_{k}=\mathcal{G}\), and
\[[\mathcal{N}_{i+1},\mathcal{G}]_{\mathrm{Lie}}=[N_{i+1},G]\leq N_{i+1}^{p}N_{ i}=p\mathcal{N}_{i+1}+\mathcal{N}_{i}\]
for all \(i\). This shows that \((\mathcal{N}_{i})_{i}\) is an \(\eta\)-series of \(\mathcal{G}\), hence \(\operatorname{pwc}(\mathcal{G})\leq k\).
The converse follows from the fact that if \((\mathcal{N}_{i})_{i}\) is an \(\eta\)-series of \(\mathcal{G}\), then an argument analogous to the proof of Proposition 3.2 shows that all \(\mathcal{N}_{i}\) are PF-embedded in \(\mathcal{G}\). The argument then proceeds along the same lines as in the previous paragraph.
The equality of the upper \(\eta\)-series of \(G\) and \(\mathcal{G}\) now follows from [13, Theorem 4.5].
Kazhdan [16] showed that Kirillov's orbit method provides a correspondence between the irreducible characters of finite \(p\)-groups of class \(<p\) and the orbits of the action of that group on the dual space of the corresponding Lie algebra. In [13], Gonzalez-Sanchez showed that the orbit method
also works for some classes of \(p\)-saturable groups, such as torsion-free potent groups. However, the orbit method no longer works for \(p\)-saturable groups of small powerful class:
_Example 3.5_ (Example 1 of [1]).: Let \(M=\langle x_{1},x_{2},\ldots,x_{p}\rangle\cong\mathbb{Z}_{p}^{p}\) and form \(G=\langle\alpha\rangle\ltimes M\cong\mathbb{Z}_{p}\ltimes\mathbb{Z}_{p}^{p}\), where the action of \(\alpha\) on \(M\) is given by \([x_{i},\alpha]=x_{i+1}\) for \(i\leq p-2\), and \([x_{p-1},\alpha]=x_{p}^{p}\) and \([x_{p},\alpha]=1\). Then we readily get that
\[\eta_{i}(G)=\langle\alpha^{p^{p-i-1}},x_{1}^{p^{k_{i1}}},x_{2}^{p^{k_{i2}}}, \ldots,x_{p-2}^{p^{k_{i,p-2}}},x_{p-1},x_{p}\rangle,\]
where \(k_{ij}=\max\{p-i-j,0\}\), hence \(\operatorname{pwc}(G)=p-1\). The group \(G\) therefore has small powerful class, yet the orbit method does not yield all of its irreducible representations [1].
Pro-\(p\) groups whose powerful class is not small may not be PF-groups, as the following example shows:
_Example 3.6_.: We exhibit a finite \(p\)-group of powerful class equal to \(p\) that is not a PF-group. Let \(M\) be an elementary abelian \(p\)-group with generators \(x_{1},x_{2},\ldots,x_{p}\). Form \(G=\langle\alpha\rangle\ltimes M\), where \(\alpha\) has order \(p^{2}\) and acts on \(M\) as follows: \([x_{i},\alpha]=x_{i+1}\) for \(i=1,2,\ldots,p-1\), and \([x_{p},\alpha]=1\). The group \(G\) has order \(p^{p+2}\) and nilpotency class \(p\). Note that
\[(\alpha x_{1})^{p}=\alpha^{p}x_{1}^{p}x_{2}^{\binom{p}{2}}x_{3}^{\binom{p}{3} }\cdots x_{p}^{\binom{p}{p}}=\alpha^{p}x_{p},\]
since \(M\) has exponent \(p\) and \(p\mid\binom{p}{i}\) for \(0<i<p\); therefore \(x_{p}\in G^{p}\). On the other hand, \(x_{p}\) is not a \(p\)-th power of any element of \(G\). This shows that \(G\) is not a PF-group by [1, Theorem 3.4]. By Corollary 3.3 we must have that \(\operatorname{pwc}(G)=p\).
On the other hand, there are PF-groups, even torsion-free and potent, which do not have finite powerful class:
_Example 3.7_.: In the following we construct a finitely generated torsion-free potent pro-\(p\) group \(G\), which does not have finite powerful class. Let \(p>3\) and let \(n\) be a positive integer. Let \(G_{n}=\langle\alpha\rangle\ltimes M\), where \(M=\langle x_{1},x_{2},\ldots,x_{p-2}\rangle\) is an abelian group, and \(|x_{1}|=p^{n+1}\) and \(|x_{2}|=|x_{3}|=\cdots=|x_{p-2}|=|\alpha|=p^{n}\). The action of \(\alpha\) on \(M\) is given by \([x_{i},\alpha]=x_{i+1}\) for \(i=1,2,\ldots,p-3\), and \([x_{p-2},\alpha]=x_{1}^{p}\). We clearly have that \(\gamma_{p-1}(G_{n})\leq G_{n}^{p}\), hence \(G_{n}\) is a two-generator potent \(p\)-group. From the relations it follows that \(Z(G_{n})=\langle x_{1}^{p^{n}}\rangle\). One can easily verify that this is the largest powerfully embedded subgroup of \(G_{n}\), therefore \(\eta(G_{n})=Z(G_{n})\). By taking successive quotients, one can see that \(\eta_{i}(G_{n})=Z_{i}(G_{n})\) for all \(i\geq 1\). As the nilpotency class of \(G_{n}\) is precisely \(n(p-2)+1\), we conclude that \(\operatorname{pwc}G_{n}=n(p-2)+1\).
The groups \(G_{n}\) clearly form an inverse system. Their inverse limit \(G\cong\mathbb{Z}_{p}\ltimes\mathbb{Z}_{p}^{p-2}\) is topologically generated by two generators, it is torsion-free and potent. As \(\operatorname{pwc}(G_{n})\) are not bounded, the group \(G\) does not have finite powerful class.
## 4. Elements of finite order and powerful class
In this section we look at the elements of finite order in pro-\(p\) groups of finite powerful class \(k\). For \(i\geq 0\), denote \(\Omega_{i}(G)=\langle x\in G\mid x^{p^{i}}=1\rangle\). At first we bound the exponent of \(\Omega_{i}(G)\) in terms of \(i\) and \(k\):
**Theorem 4.1**.: _Let \(G\) be a finitely generated pro-\(p\) group of powerful class \(k\). Suppose \(k\leq\ell(p-1)\). Then \(\Omega_{i}(G)^{p^{i+\ell}}=1\)._
Proof.: We may assume that \(G\) is finite. We prove by induction on \(\ell\) that \(\gamma_{\ell(p-1)}(G)\) is contained in some PF-embedded subgroup of \(G\). If \(\ell=1\), then \(G\) has small powerful class, therefore it is a PF-group by Corollary 3.3. For the induction step, note that \(G/\eta_{p-1}\) has powerful class \(\leq(\ell-1)(p-1)\). Therefore there exists a normal subgroup \(N\) of \(G\) such that \(N/\eta_{p-1}\) is PF-embedded in \(G/\eta_{p-1}\) and \(\gamma_{(\ell-1)(p-1)}(G)\leq N\). Choose a potent filtration \(N/\eta_{p-1}=N_{1}/\eta_{p-1}\geq N_{2}/\eta_{p-1}\geq\cdots\geq N_{r}/\eta_{p-1}=1\) of \(N/\eta_{p-1}\) in \(G/\eta_{p-1}\). Then we have \([N_{i},G]\leq N_{i+1}\) and \([N_{i},{}_{p-1}G]\leq N_{i+1}^{p}\eta_{p-1}\) for all \(i\geq 1\). The group \(\eta_{p-1}\) is PF-embedded in \(G\) by Proposition 3.2. Choose a potent filtration \(\eta_{p-1}=M_{1}\geq M_{2}\geq\cdots\geq M_{s}=1\) of \(\eta_{p-1}\) in \(G\). We claim that
\[N^{p}\eta_{p-1}=N_{1}^{p}\eta_{p-1}\geq N_{2}^{p}\eta_{p-1}\geq\cdots\geq N_{r }^{p}\eta_{p-1}=\eta_{p-1}\geq M_{2}\geq\cdots\geq M_{s}=1\]
is a potent filtration of \(N^{p}\eta_{p-1}\) in \(G\). The series is central, since \([N_{i}^{p}\eta_{p-1},G]\leq[N_{i},G]^{p}[N_{i},{}_{p}G]\eta_{p-1}\leq N_{i+1}^ {p}\eta_{p-1}\). The proof will be concluded once we have shown that \([N_{i}^{p}\eta_{p-1},{}_{p-1}G]\leq(N_{i+1}^{p}\eta_{p-1})^{p}\). To this end, we may assume that \((N_{i+1}^{p}\eta_{p-1})^{p}=1\) and \([N_{i}^{p}\eta_{p-1},{}_{j}G]=1\) for all \(j\geq p\). At first note that \([\eta_{p-1},{}_{p-1}G]\leq\eta_{p-1}^{p}=1\). We thus have that \([N_{i}^{p}\eta_{p-1},{}_{p-1}G]=[N_{i}^{p},{}_{p-1}G]=[N_{i},{}_{p-1}G]^{p}\leq (N_{i+1}^{p}\eta_{p-1})^{p}=1\). This proves the claim. We have therefore shown that \(N^{p}\eta_{p-1}\) is PF-embedded in \(G\). Observe that \(\gamma_{\ell(p-1)}(G)\leq[N,{}_{p-1}G]\leq N_{2}^{p}\eta_{p-1}\leq N^{p}\eta_ {p-1}\).
Our result now directly follows from [1, Theorem 4.1].
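For example, taking \(\ell=1\) in Theorem 4.1 (a direct instantiation, added here): if \(G\) has small powerful class, then

\[\Omega_{i}(G)^{p^{i+1}}=1\qquad\text{for all }i\geq 0.\]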
We note here that Mann [11] constructed a finite \(p\)-group \(G\) of small powerful class with \(\exp\Omega_{1}(G)>p\); therefore the bound given in Theorem 4.1 is close to being sharp. The bound can also be compared with Easterfield's bound for the exponent of \(\Omega_{i}(G)\) in terms of \(p\), \(i\) and the nilpotency class of the group \(G\), _cf._[10].
An immediate consequence is the following:
**Corollary 4.2**.: _Let \(G\) be a finitely generated pro-\(p\) group of finite powerful class. Then the set of all torsion elements of \(G\) forms a finite subgroup of \(G\)._
## 5. Powerful class and coclass
If \(G\) is a finite \(p\)-group of class \(c\) and order \(p^{n}\), then \(c<n\). The number \(r=n-c\) is called the _coclass_ of \(G\). Determining the structure of finite \(p\)-groups according to coclass has been very fruitful. We refer to [1].
One of the important features of large \(p\)-groups of given coclass is that they act uniserially on certain parts of their lower central series by conjugation. Recall that a finite \(p\)-group \(G\) acts _uniserially_ on a finite \(p\)-group \(N\) if \(|H:[H,G]|=p\) for every non-trivial \(G\)-invariant subgroup \(H\) of \(N\). The following result due to Shalev is one of the fundamental results of the coclass theory:
**Lemma 5.1** ([1], Theorem 6.3.9).: _Suppose \(p>2\). Let \(G\) be a finite \(p\)-group of coclass \(r\) and \(|G|=p^{n}\geq p^{2p^{r}+r}\). Let \(m=p^{r}-p^{r-1}\). Then there exists \(0\leq s\leq r-1\) such that \(G\) acts uniserially on \(\gamma_{m}(G)\), and \(\gamma_{i}(G)^{p}=\gamma_{i+d}(G)\) for all \(i\geq m\), where \(d=(p-1)p^{s}\)._
**Theorem 5.2**.: _Given \(p\), \(r\) and \(k\), there are only finitely many finite \(p\)-groups of coclass \(r\) and powerful class at most \(k\)._
Proof.: Let \(G\) be a finite \(p\)-group of coclass \(r\) and powerful class \(k\). Denote \(|G|=p^{n}\) and suppose without loss of generality that \(n\geq 2p^{r}+r\). The nilpotency class of \(G\) is equal to \(c=n-r\geq 2p^{r}\). Let \(m\) and \(d\) be as in Lemma 5.1. We have that \(\operatorname{pwh}\gamma_{m}(G)\leq k\) by [11, Lemma 2.5]. Consider an \(\eta\)-series \(1=N_{0}\leq N_{1}\leq\cdots\leq N_{\ell}=\gamma_{m}(G)\) of \(\gamma_{m}(G)\) in \(G\), where \(\ell\leq k\). As \(G\) acts uniserially on \(\gamma_{m}(G)\), we have \(N_{i}=\gamma_{m_{i}}(G)\) for some \(m_{i}\geq m\) with \(c+1=m_{0}\geq m_{1}\geq\cdots\geq m_{\ell}=m\), see [1, Lemma 4.1.3]. We thus have that \(\gamma_{m_{i+1}}(G)/\gamma_{m_{i}}(G)\) is powerfully embedded in \(G/\gamma_{m_{i}}(G)\) for all \(i\). Therefore we see that \(\gamma_{1+m_{i+1}}(G)\leq\gamma_{m_{i+1}}(G)^{p}\gamma_{m_{i}}(G)=\gamma_{d+m _{i+1}}(G)\gamma_{m_{i}}(G)\) holds for all \(i\). Since \(d>1\) and \(m_{i}>m_{i+1}\), this is possible only if \(m_{i}=m_{i+1}+1\). Then the above \(\eta\)-series of \(\gamma_{m}(G)\) is uniserial, and we thus have that \(|\gamma_{m}(G)|\leq p^{k}\). On the other hand, \(G/\gamma_{m}(G)\) has coclass \(\leq r\) and class \(\leq m-1\), thus \(|G:\gamma_{m}(G)|\leq p^{r+m-1}\). We conclude that \(|G|\leq p^{k+r+m-1}\), and this finishes the proof.
**Corollary 5.3**.: _There is no infinite pro-\(p\) group of finite coclass and finite powerful class._
We mention here an independent result that can be proved along similar lines:
**Proposition 5.4**.: _Given \(p\) and \(r\), there are only finitely many finite \(p\)-groups of coclass \(r\) that are PF-groups._
Proof.: The proof follows along similar lines to that of Theorem 5.2. Let \(G\) be a PF-group of order \(p^{n}\) and coclass \(r\). Again, assume \(n\geq 2p^{r}+r\), denote \(c=n-r\geq 2p^{r}\), and let \(m\) and \(d\) be as in Lemma 5.1. The group \(\gamma_{m}(G)\) is PF-embedded in \(G\), see [13, Proposition 3.2]. As \(G\) acts uniserially on \(\gamma_{m}(G)\), there is a potent filtration of \(\gamma_{m}(G)\) in \(G\) that has the form \(\gamma_{m}(G)=\gamma_{m_{1}}(G)\geq\gamma_{m_{2}}(G)\geq\cdots\geq\gamma_{c+1}(G)=1\) with \(m=m_{1}<m_{2}<\cdots\) (discarding repeated terms). By definition, \(\gamma_{m+p-1}(G)\leq\gamma_{m_{2}}(G)^{p}=\gamma_{m_{2}+d}(G)\). This gives that \(m_{2}-m\leq p-1-d\leq 0\), a contradiction. Hence there is no PF-group of coclass \(r\) and order at least \(p^{2p^{r}+r}\).
**Corollary 5.5**.: _There is no infinite pro-\(p\) group of finite coclass that is also a PF-group._
A finite \(p\)-group of order \(p^{n}\) and nilpotency class equal to \(n-1\), where \(n\geq 4\), is said to be of _maximal class_. We find here the upper \(\eta\)-series of finite \(p\)-groups of maximal class. First we state the following:
**Proposition 5.6**.: _Let \(G\) be a nonabelian group of order \(p^{3}\)._
1. _If_ \(\exp G=p\)_, then_ \(\eta(G)=Z(G)\)_._
2. _If_ \(\exp G=p^{2}\)_, then_ \(G\) _is powerful and thus_ \(\eta(G)=G\)_._
**Proposition 5.7**.: _Let \(G\) be a finite \(p\)-group of maximal class. Then \(\eta(G)=Z(G)\)._
Proof.: Denote \(|G|=p^{n}\) and \(|G:\eta(G)|=p^{r}\). Note that \(r\geq 2\). By [1, Proposition 3.1.2], we have that \(\eta(G)=\gamma_{r}(G)\). The subgroups \(\gamma_{k}(G)\) are then powerfully embedded in \(G\) for all \(k\geq r\). As \(Z(G)=\gamma_{n-1}(G)\), we also have that \(r\leq n-1\).
Suppose that \(r<n-1\). If \(n\leq p+1\), then both \(G/\gamma_{n-1}(G)\) and \(\gamma_{2}(G)\) have exponent \(p\)[1, Proposition 3.3.2]. It follows that \(1\neq\gamma_{r+1}(G)=[\gamma_{r}(G),G]\leq\gamma_{r}(G)^{p}=1\), a contradiction. Thus \(n>p+1\). Suppose further that \(r\leq n-p+1\). Then [1, Corollary 3.3.6] yields that \(1\neq\gamma_{r+1}(G)\leq\gamma_{r}(G)^{p}=\gamma_{r+p-1}(G)\), which cannot happen. We conclude that \(n-p+1<r<n-1\). Then \(\gamma_{r}(G)^{p}\leq\gamma_{n-p+1}(G)^{p}=\gamma_{n}(G)=1\), which implies \(\gamma_{r+1}(G)=1\). This is a contradiction. We thus have \(r=n-1\), hence the result.
**Corollary 5.8**.: _Let \(G\) be a finite \(p\)-group of maximal class. Then \(\eta_{i}(G)=Z_{i}(G)\) for all \(i\geq 0\)._
Proof.: Let \(|G|=p^{n}\). The upper central series
\[1=Z_{0}(G)\leq Z_{1}(G)\leq\cdots\leq Z_{n-1}(G)=G\]
has all sections, except for the last one, of order \(p\). Using Proposition 5.7 and induction, we conclude that \(\eta_{i}(G)=Z_{i}(G)\) for \(i\leq n-3\). The group \(G/\eta_{n-3}(G)\) has order \(p^{3}\). Using the notation of [1, p. 56], we have that \(G=\langle s,s_{1}\rangle\). A combination of Proposition 3.3.2, Proposition 3.3.3 and Lemma 3.3.7 of [1] shows that \(s^{p}\) and \(s_{1}^{p}\) both belong to \(\gamma_{3}(G)=\eta_{n-3}(G)\). As \(G/\gamma_{3}(G)\) has class \(2\) and \(p>2\), it follows that \(G^{p}\leq\eta_{n-3}(G)\). Proposition 5.6 now shows that \(\eta_{n-2}(G)/\eta_{n-3}(G)=\eta(G/\eta_{n-3}(G))=Z(G/Z_{n-3}(G))=Z_{n-2}(G)/\eta_{n-3}(G)\). Therefore \(\eta_{n-2}(G)=Z_{n-2}(G)\) and \(\eta_{n-1}(G)=Z_{n-1}(G)=G\).
Therefore, if \(G\) is a finite \(p\)-group of coclass \(1\), then \(\operatorname{pwc}(G)\) is equal to the nilpotency class of \(G\). On the other hand, there are several \(p\)-groups of coclass two with powerful class strictly smaller than the nilpotency class. For example, there are four powerful \(p\)-groups of order \(p^{4}\) and nilpotency class equal to \(2\).
|
2304.00923 | Planar site percolation via tree embeddings | We prove that for any infinite, connected, planar graph $G$ properly
embedded in $\RR^2$ with minimal vertex degree at least 7, the
i.i.d.~Bernoulli($p$) site percolation on $G$ a.s.~has infinitely many infinite
1-clusters for any $p\in (p_c^{site},1-p_c^{site})$. Moreover,
$p_c^{site}<\frac{1}{2}$, so the above interval is non-empty. This confirms a
conjecture of Benjamini and Schramm in 1996 (Conjecture 7 in \cite{bs96}). The
proof is based on a novel construction of embedded trees on such graphs, which
not only proves the existence of infinitely many infinite clusters when $p$ is
in a neighborhood of $\frac{1}{2}$, but also proves the exponential decay of
point-to-point connection probabilities by constructing infinitely many trees
separating two vertices as the distance of the two vertices goes to infinity. | Zhongyang Li | 2023-04-03T12:28:10Z | http://arxiv.org/abs/2304.00923v3 | # Planar site percolation via tree embeddings
###### Abstract.
We prove that for any infinite, connected, planar graph \(G\) properly embedded in \(\mathbb{R}^{2}\) with minimal vertex degree at least \(7\) and uniformly bounded face degree for finite faces, the i.i.d. Bernoulli\((p)\) site percolation on \(G\) a.s. has infinitely many infinite \(1\)-clusters for any \(p\in(p_{c}^{site},1-p_{c}^{site})\). Moreover, \(p_{c}^{site}<\frac{1}{2}\), so the above interval is non-empty. This confirms Conjecture 7 in [3]. The proof is based on a novel construction of embedded trees on such graphs.
## 1. Introduction
### Overview and Main Results
Introduced by Broadbent and Hammersley in 1957 (see [6]) to study the random spread of a fluid through a medium, percolation has been a celebrated model illustrating phase transitions, magnetization, and the spread of epidemics; see [10, 8] for recent accounts of the theory.
Let \(G=(V,E)\) be a graph. We write \(e=\langle u,v\rangle\) for an edge with endpoints \(u\) and \(v\); where \(u,v\in V\) and \(e\in E\). The (_vertex_-)_degree_ of a vertex \(v\in V\) is the number of edges incident to \(v\); i.e. edges one of whose endpoints is \(v\). We say a graph is locally finite if each vertex has finite degree.
Assume \(G=(V,E)\) is an infinite, locally finite, connected graph. A site percolation configuration \(\omega\in\{0,1\}^{V}\) is an assignment to each vertex in \(G\) of either state \(0\) or state \(1\). A cluster in \(\omega\) is a maximal connected set of vertices in which each vertex has the same state in \(\omega\). A cluster may be a \(0\)-cluster or a \(1\)-cluster depending on the common state of vertices in the cluster. A cluster may be finite or infinite depending on the total number of vertices in it. We say that percolation occurs in \(\omega\) if there exists an infinite \(1\)-cluster in \(\omega\).
In this paper we focus on site percolation on planar graphs; i.e. graphs that can be "nicely drawn" in the plane. More precisely, we have the following definition.
**Definition 1.1**.: _A planar graph \(G\) is a graph that can be drawn in the plane \(\mathbb{R}^{2}\), with vertices represented by points and edges represented by curves, such that edges can intersect only at vertices._
_We call a drawing of the graph into the plane \(\mathbb{R}^{2}\) a proper embedding in \(\mathbb{R}^{2}\) if any compact subset \(K\) in \(\mathbb{R}^{2}\) intersects at most finitely many edges and vertices._
See also Section 2 of [5] for the definition of proper embeddings. From Definition 1.1 we see that if a graph \(G\) can be properly embedded into \(\mathbb{R}^{2}\), then it is locally finite. Since
\(\mathbb{R}^{2}\) is homeomorphic to any simply-connected open subset of \(\mathbb{R}^{2}\), a graph can be properly embedded into \(\mathbb{R}^{2}\) if and only if it can be properly embedded into any simply-connected open subsets of \(\mathbb{R}^{2}\), in particular the unit disk (with appropriate metric this gives the hyperbolic plane \(\mathbb{H}^{2}\)); see [7, 5, 21, 11, 12, 13] for graphs embedded into the hyperbolic plane \(\mathbb{H}^{2}\) and statistical mechanical models on such graphs. However, in general a graph cannot be circle packed in both \(\mathbb{R}^{2}\) and \(\mathbb{H}^{2}\)([17, 18]).
Of particular interest is the i.i.d. Bernoulli site percolation on a graph. In such a model, an independent Bernoulli random variable, which takes value 1 with probability \(p\in[0,1]\), is associated to each vertex. We use \(\mathbb{P}_{p}\) to denote the probability measure for i.i.d. Bernoulli\((p)\) site percolation on \(G\).
For the i.i.d. Bernoulli site percolation, define
\[p_{c}^{site}(G):=\inf\{p\in[0,1]:\text{Bernoulli}(p)\text{ site percolation on }G\text{ has an infinite 1-cluster a.s.}\} \tag{1.1}\] \[p_{u}^{site}(G):=\inf\{p\in[0,1]:\text{Bernoulli}(p)\text{ site percolation on }G\text{ has a unique infinite 1-cluster a.s.}\} \tag{1.2}\]
It follows immediately that \(p_{c}^{site}(G)\leq p_{u}^{site}(G)\). If the inequality is strict, then for certain \(p\) there is a strictly positive probability that in the i.i.d. Bernoulli\((p)\) percolation at least two infinite 1-clusters exist. A number of problems related to the uniqueness and non-uniqueness of infinite percolation clusters were formulated by Benjamini and Schramm in their influential paper [3], including the following one.
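For illustration (standard examples, not taken from this paper): on the \(d\)-regular tree \(\mathbb{T}_{d}\) with \(d\geq 3\),

\[p_{c}^{site}(\mathbb{T}_{d})=\frac{1}{d-1}<1=p_{u}^{site}(\mathbb{T}_{d}),\]

so the inequality can be strict; by contrast, on \(\mathbb{Z}^{2}\) the infinite 1-cluster is a.s. unique whenever it exists, so \(p_{c}^{site}(\mathbb{Z}^{2})=p_{u}^{site}(\mathbb{Z}^{2})\).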
**Conjecture 1.2**.: _(Conjecture 7 in [3]). Consider site percolation on an infinite, connected, planar graph G with minimal degree at least 7. Then, for any \(p\in(p_{c}^{site},1-p_{c}^{site})\), we have \(\mathbb{P}_{p}\)-a.s. there are infinitely many infinite 1-clusters in the i.i.d. Bernoulli\((p)\) site percolation on \(G\). Moreover, it is the case that \(p_{c}^{site}<\frac{1}{2}\), so the above interval is invariably non-empty._
A graph is called vertex-transitive (resp. quasi-transitive) when there is a unique orbit (at most finitely many orbits) of vertices under the action of its automorphism group. Invariant percolation processes on quasi-transitive graphs have been studied extensively; the symmetry provided by quasi-transitivity makes the analysis more convenient, and interesting techniques were developed, for example the mass-transport principle (MTP); see [1, 2]. Conjecture 1.2 was proved in [21] when the graph is a transitive triangulation of the plane with vertex degree at least 7; and in [12] when the graph \(G\) is infinite, locally finite, planar, 2-connected, simple and quasi-transitive. Furthermore, it is proved in [13] that if the graph \(G\) is infinite, locally finite, planar, 2-connected, simple, transitive and not a planar triangulation, then \(\mathbb{P}_{1-p_{c}^{site}}\) a.s. there are infinitely many infinite 1-clusters. Without the quasi-transitivity assumption, many existing techniques, e.g. ergodicity of measures, the MTP, etc., no longer apply.
One approach to overcome this difficulty is based on the "uniform percolation" defined in [26]. In the absence of quasi-transitivity, the "uniform percolation" gives "similarities"
to different vertices which make the analysis work. One important consequence of the "uniform percolation" is the "stability of infinite clusters", which states that for any \(0\leq p_{1}<p_{2}\leq 1\), if there is uniform percolation at level \(p_{1}\), then every infinite 1-cluster in the i.i.d. Bernoulli\((p_{2})\) percolation a.s. contains at least one infinite 1-cluster in the i.i.d. Bernoulli\((p_{1})\) percolation. See also Theorem 1.10 in [19].
One class of graphs satisfying the uniform percolation condition is the class of "semi-transitive graphs" defined in [14], which are more general than quasi-transitive graphs but still quite restrictive. Moreover, there are examples of bounded-degree planar graphs which do not have uniform percolation near \(p_{c}^{site}(G)\), yet the conclusions of Conjecture 1.2 still hold. We shall discuss the approach of uniform percolation further in a companion paper [22].
The main goal of this paper is to investigate the conjecture for planar graphs without the quasi-transitive assumption or the uniform percolation assumption. In the meanwhile, we hope to develop new techniques to study percolation on general locally finite graphs.
The \(p_{c}^{site}<\frac{1}{2}\) part of Conjecture 1.2 was proved in [16]. As we shall see, the techniques developed in this paper lead to a different proof of the fact that \(p_{c}^{site}<\frac{1}{2}\).
Matching graphs were introduced by Sykes and Essam [28] and explored further by Kesten [20]. Matching pairs of graphs may be considered as site-percolation analogs of dual pairs of graphs in bond percolation, and play an essential role in the proofs of the main results of the paper.
**Definition 1.3**.: _Let \(G=(V,E)\) be an infinite, connected, locally finite, simple, planar graph. Fix an embedding of \(G\) into the plane. The **matching graph**\(G_{*}=(V,E_{*})\) is the graph whose vertex set is the same as that of \(G\); and for \(u,v\in V\) and \(u\neq v\), \(\langle u,v\rangle\in E_{*}\) if and only if one of the following two conditions holds_
1. \(\langle u,v\rangle\in E\)_; or_
2. \(\langle u,v\rangle\notin E\)_, but_ \(u\) _and_ \(v\) _share a finite face in_ \(G\)_._
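For example (a standard instance of this definition, added by us): if \(G=\mathbb{Z}^{2}\) with its usual square-lattice embedding, every finite face is a unit square, so \(E_{*}\setminus E\) consists of the two diagonals of each unit square; equivalently,

\[\langle u,v\rangle\in E_{*}\iff\|u-v\|_{\infty}=1.\]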
**Definition 1.4**.: _The number of ends of a connected graph is the supremum, over its finite subgraphs, of the number of infinite components that remain after removing the subgraph._
The graphs considered in this paper may have more than one end; in other words, we allow infinite faces in our graphs. In Definition 1.3, \(E_{*}\setminus E\) contains only edges joining two vertices sharing finite faces; so that \(G_{*}\) is locally finite. If the proper embedding of graph \(G\) into \(\mathbb{R}^{2}\) has only finite faces, then Definition 1.3 coincides with the usual definition of matching graphs. It is known from Lemma 4.1 of [24] that any one-ended, locally finite planar graph in which every face is finite can be properly embedded into the plane.
Here is the main result proved in this paper:
**Theorem 1.5**.: _Let \(G\) be an infinite, connected graph properly embedded in \(\mathbb{R}^{2}\) such that the minimal vertex degree is at least 7. Assume \(G\) has uniformly bounded face degree for finite faces. Then, for any \(p\in(p_{c}^{site}(G),1-p_{c}^{site}(G_{*}))\), a.s. there are infinitely many infinite 1-clusters in the i.i.d. Bernoulli\((p)\) site percolation on \(G\). Moreover, it is the case that \(p_{c}^{site}(G)<\frac{1}{2}\), so the above interval is invariably non-empty._
Since \(p_{c}^{site}(G_{*})\leq p_{c}^{site}(G)\), Theorem 1.5 implies the following corollary.
**Corollary 1.6**.: _Let \(G\) be an infinite, connected graph properly embedded in \(\mathbb{R}^{2}\) with minimal vertex degree at least 7 and uniformly bounded face degree for finite faces. Then the conclusions of Conjecture 1.2 hold._
The proof of Theorem 1.5 is based on a novel construction of trees with vertex degrees 2 or 3 embedded in a graph satisfying the assumptions of Theorem 1.5. It is known from [4] that every locally finite graph with Cheeger constant at least \(n\) (where \(n\) is a positive integer) has a spanning forest in which every tree is isomorphic to \(T_{n+1}\), where \(T_{n+1}\) is the tree whose root has degree \(n\), and other vertices have degree \(n+2\). In [4] such spanning forests are constructed by flows on graphs, and the geometries of the embedded trees are not immediately clear from the construction. For planar graphs satisfying the assumptions of Theorem 1.5, we construct embedded trees in a somewhat explicit way, which can be used to prove the exponential decay of the two-point correlations as the distance between the two points goes to infinity.
Here we sketch the main steps to prove Theorem 1.5 as well as the organization of the paper.
1. Explicitly construct a tree \(T\) embedded in \(G\) as a subgraph, and show that \(p_{c}^{site}(G)\leq p_{c}^{site}(T)<\frac{1}{2}\), and that \(\mathbb{P}_{\frac{1}{2}}\) a.s. there are infinitely many infinite 1-clusters; see Section 2.
2. Show that when \(p<1-p_{c}^{site}(T)\), the connectivity probability in the matching graph \(G_{*}\) (hence in \(G\) as well) decays exponentially; see Section 4.
3. With the help of step (2), show that when \(p_{c}^{site}(G_{*})<p<1-p_{c}^{site}(T)\) (resp. \(p_{c}^{site}(G)<p<1-p_{c}^{site}(T)\)), \(\mathbb{P}_{p}\) a.s. there are infinitely many infinite 1-*-clusters (resp. infinitely many infinite 1-clusters).
4. It follows from step (3) and planar duality that when \(p_{c}^{site}(T)<p<1-p_{c}^{site}(G_{*})\), \(\mathbb{P}_{p}\) a.s. infinite 1-clusters have infinitely many ends; see Section 3.
5. Show that when \(p_{c}^{site}(T)<p<1-p_{c}^{site}(G_{*})\), \(\mathbb{P}_{p}\) a.s. there are infinitely many infinite 1-clusters, by distinguishing two cases. 1. Since \(p_{c}^{site}(T)<p<1-p_{c}^{site}(G_{*})\), \(\mathbb{P}_{p}\)-a.s. there are infinitely many infinite 0-*-clusters; if one infinite 0-*-cluster has infinitely many ends, or infinitely many infinite 0-*-clusters have at least 2 ends, then by planar duality there are infinitely many infinite 1-clusters. 2. Otherwise, with strictly positive probability at most finitely many infinite 0-*-clusters have at least 2 ends, and then with strictly positive probability every infinite 0-*-cluster is 1-ended. In this case we show that each infinite 0-*-cluster \(\xi\) has \(p_{c}^{site}(\xi)=1\). This is a contradiction to the assumption that \(1-p>p_{c}^{site}(G_{*})\). See Section 5.
### Background and Notation
Let \(G=(V,E)\) be an infinite, connected graph. Let \(\hat{G}=(V,\hat{E})\) be the graph obtained from \(G\) by
* removing any edge of \(G\) whose two endpoints coincide (i.e., any self-loop);
* if multiple edges joining the same pair of vertices exist, keeping exactly one such edge for each pair of vertices but removing all the others.
It is straightforward to check that \(p_{c}^{site}(G)=p_{c}^{site}(\hat{G})\). Hence without loss of generality, all graphs in this paper are assumed simple in the sense that
* each edge has two distinct endpoints; and
* there exists at most one edge joining two distinct vertices.
A walk of \(G\) is an alternating, finite or infinite sequence \((\dots,v_{0},e_{0},v_{1},e_{1},\dots)\) with \(e_{i}=\langle v_{i},v_{i+1}\rangle\) for all \(i\). Since our graphs are assumed simple, we may refer to a walk by its vertex-sequence only.
A walk is _closed_ if it can be written as \((v_{0},v_{1},\dots,v_{n})\) with \(v_{0}=v_{n}\). In this case we say the total number of steps in this closed walk is \(n\). A walk is _self-avoiding_ if it visits no vertex twice or more, and a self-avoiding walk is called a _path_. A _cycle_, or a _simple cycle_\(C=(v_{0},v_{1},\dots,v_{n},v_{0})\) is a path \((v_{0},v_{1},\dots,v_{n})\) such that \(v_{0}\sim v_{n}\) together with the edge \(\langle v_{0},v_{n}\rangle\).
Let \(G=(V,E)\) be a graph. Once \(G\) is suitably embedded in the space \(\mathbb{R}^{2}\), one defines a _face_ of \(G\) to be a maximal connected subset of \(\mathbb{R}^{2}\setminus G\). Note that faces are open sets, and may be either bounded or unbounded. While it may be helpful to think of a face as being bounded by a cycle of G, the reality can be more complicated in that faces are not invariably simply connected (if \(G\) is disconnected) and their boundaries are not generally self-avoiding cycles or paths (if \(G\) is not \(2\)-connected).
The boundary of a face is a closed walk; the _degree_ of a face \(f\), denoted by \(|f|\), is the total number of steps of the closed walk representing its boundary. The degree \(|f|\) of a face is bounded below by the number of distinct vertices on its boundary and by the number of distinct edges on its boundary; since a vertex or an edge on the boundary may be visited multiple times by the walk. Although in general, the boundary of a face can be more complicated than a cycle, as we shall show in Lemma 2.9, for graphs under the assumptions of the paper (especially the negative curvature assumption of each vertex), the boundary of each face is a cycle.
For \(u,w\in V\), we write \(u\sim w\) if \(u\) and \(w\) are joined by an edge.
For \(\omega\in\Omega\), we write \(u\leftrightarrow v\) if there exists an open path of \(G\) with endpoints \(u\) and \(v\), and \(u\stackrel{A}{\longleftrightarrow}v\) if such a path exists within the set \(A\subseteq V\). A similar notation is used for the existence of infinite open paths. When such open paths exist in the matching graph \(G_{*}\), we use the relation \(\stackrel{*}{\longleftrightarrow}\).
## 2. Embedded Trees
The goal of this section is to show that every infinite, connected, planar graph with sufficiently large vertex/face degrees possesses a subgraph that is a tree in which every vertex other than the root vertex has degree \(3\) or \(4\). We start with geometric properties of planar graphs.
**Definition 2.1**.: _Let \(G=(V,E)\) be a planar graph. Let \(f\) be a face of \(G\). Let \(v_{0},v_{1},v_{2},\ldots,v_{|f|}(=v_{0})\) be the closed walk representing the boundary of \(f\). Define_
\[\mathcal{A}(f):=(|f|-2)\pi-\sum_{i=1}^{|f|}\big(\text{internal angle of }f\text{ at }v_{i}\big) \tag{2.1}\]
_Let \(C\) be a simple cycle of \(G\) consisting of finitely many edges of \(G\), and let \(R_{C}\) be the planar region bounded by \(C\). Let \(f_{1},\ldots,f_{m}\) be all the faces enclosed by \(C\). Define_
\[\mathcal{A}(R_{C})=\sum_{i=1}^{m}\mathcal{A}(f_{i}).\]
**Definition 2.2**.: _Let \(G=(V,E)\) be a locally finite planar graph. Let \(v\in V\) and \(e_{1},\ldots,e_{d}\) be all the incident edges of \(v\) in cyclic order. For \(1\leq i\leq d-1\), let \(f_{i}\) be the face shared by \(e_{i}\) and \(e_{i+1}\). Let \(f_{d}\) be the face shared by \(e_{d}\) and \(e_{1}\). Let \(f:f\sim v\) denote all the faces \(f_{1},\ldots,f_{d}\); note that one face may appear multiple times in \(f_{1},\ldots,f_{d}\); in that case, it also appears multiple times in \(f:f\sim v\). Define the curvature \(\kappa(v)\) at a vertex \(v\in V\) to be_
\[2\pi-\sum_{f:f\sim v}\frac{|f|-2}{|f|}\pi.\]
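For instance (a direct check, added by us), in the order-7 triangular tiling of the hyperbolic plane every vertex is incident to exactly 7 faces of degree 3, so

\[\kappa(v)=2\pi-7\cdot\frac{3-2}{3}\pi=-\frac{\pi}{3}<0.\]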
**Lemma 2.3**.: _Let \(G=(V,E)\) be a planar graph, properly embedded into \(\mathbb{R}^{2}\). Let \(C\) be a cycle of \(G\) with \(n\) vertices \(v_{1},v_{2},\ldots,v_{n}\) on its boundary. Let \(f_{1},\ldots,f_{m}\) be all the faces enclosed by \(C\). Then_
\[\sum_{i=1}^{m}\mathcal{A}(f_{i})+\sum_{j=1}^{n}\big(\text{internal angle of }C\text{ at }v_{j}\big)=(n-2)\pi \tag{2.2}\]
Proof.: Note that
\[\sum_{i=1}^{m}\mathcal{A}(f_{i})=\sum_{i=1}^{m}\left[(|f_{i}|-2)\pi-\sum_{k=1}^{|f_{i}|}\big(\text{internal angle of }f_{i}\text{ at }w_{k}\big)\right];\]
where \(w_{1},\ldots,w_{|f_{i}|}\) are all the vertices along the boundary of \(f_{i}\). The following cases might occur
* if \(w_{k}\) is a vertex in the interior of the planar open region bounded by \(C\); then the sum of internal angles of faces at \(w_{k}\) is \(2\pi\);
* if \(w_{k}\) is a vertex along \(C\); then the sum of internal angles of faces at \(w_{k}\) is the internal angle of \(C\) at \(w_{k}\);
Let \(s\) (resp. \(t\)) be the total number of vertices (resp. edges) in the interior of the planar open region bounded by \(C\). Since each of these \(t\) interior edges lies on the boundary of two of the faces \(f_{1},\ldots,f_{m}\), while each of the \(n\) edges of \(C\) lies on the boundary of exactly one of them, we have \(\sum_{i=1}^{m}|f_{i}|=2t+n\); then we obtain
\[\sum_{i=1}^{m}\mathcal{A}(f_{i})=-2m\pi-2s\pi-\sum_{j=1}^{n}\big(\text{internal angle of }C\text{ at }v_{j}\big)+2t\pi+n\pi\]
By Euler's formula
\[s+n+m-(t+n)=1\]
Then the lemma follows.
**Lemma 2.4**.: _Let \(G=(V,E)\) be a planar graph, properly embedded into \(\mathbb{R}^{2}\). Let \(C\) be a cycle of \(G\) consisting of \(n\) vertices \(v_{1},v_{2},\ldots,v_{n}\). Let \(f_{1},\ldots,f_{m}\) be all the faces enclosed by \(C\). Then_
\[\sum_{i=1}^{m}\mathcal{A}(f_{i})+\sum_{j=1}^{n}\big(\text{internal angle of }C\text{ at }v_{j}\big)=\sum_{z\in C}\sum_{f\in R_{C}:f\sim z}\frac{|f|-2}{|f|}\pi-\sum_{z\in R_{C}^{\circ}}\kappa(z) \tag{2.3}\]
_where \(R_{C}^{\circ}\) is the interior of the region \(R_{C}\); i.e. \(R_{C}^{\circ}=R_{C}\setminus C\). If we further assume that for each \(v\in V\), \(\kappa(v)\leq 0\), then_
\[\sum_{z\in C}\sum_{f\in R_{C}:f\sim z}\frac{|f|-2}{|f|}\pi\leq(n-2)\pi \tag{2.4}\]
Proof.: From (2.1) we obtain for any face \(f\)
\[\mathcal{A}(f)=\sum_{v\in V\cap\partial f}\left(\frac{(|f|-2)\pi}{|f|}-\text{internal angle of }f\text{ at }v\right),\]
where the sum runs over the \(|f|\) vertices along the boundary walk of \(f\). Hence
\[\sum_{f\in R_{C}}\mathcal{A}(f)+\sum_{i\in[n]}\text{internal angle of }C\text{ at }v_{i}\] \[=\sum_{f\in R_{C}}\sum_{v\in V\cap\partial f}\left(\frac{(|f|-2)\pi}{|f|}-\text{internal angle of }f\text{ at }v\right)+\sum_{i\in[n]}\text{internal angle of }C\text{ at }v_{i}\]
By splitting the first sum into the sum over faces incident to boundary vertices and faces incident to interior vertices of \(R_{C}\), we obtain
\[\sum_{f\in R_{C}}\mathcal{A}(f)+\sum_{i\in[n]}\text{internal angle of }C\text{ at }v_{i}\] \[=\sum_{z\in C}\sum_{f\in R_{C}:f\sim z}\left(\frac{(|f|-2)\pi}{|f|}-\text{internal angle of }f\text{ at }z\right)+\sum_{i\in[n]}\text{internal angle of }C\text{ at }v_{i}\] \[+\sum_{z\in R_{C}^{\circ}}\sum_{f\in R_{C}:f\sim z}\left(\frac{(|f|-2)\pi}{|f|}-\text{internal angle of }f\text{ at }z\right)\] \[=\sum_{z\in C}\sum_{f\in R_{C}:f\sim z}\frac{|f|-2}{|f|}\pi-\sum_{z\in R_{C}^{\circ}}\kappa(z),\]
where in the last identity we use Definition 2.2 and note that the terms \(\sum_{i\in[n]}\text{internal angle of }C\text{ at }v_{i}\) cancel out in the sum. Then (2.3) follows. (2.4) follows from (2.3), (2.2) and the non-positive curvature assumption.
**Lemma 2.5**.: _Let \(G=(V,E)\) be a properly embedded planar graph. Assume that for each \(v\in V\), \(\kappa(v)\leq 0\). Consider the following walk on \(G\):_
* _If the walk visits vertex \(v_{n}\) at the \(n\)th step, let \(F_{1,n}\) (resp. \(F_{2,n}\)) be the collection of faces incident to \(v_{n}\) on the left (resp. right) of the walk; then_ \[\min_{i\in\{1,2\}}\sum_{f:f\in F_{i,n}}\frac{|f|-2}{|f|}\pi\geq\pi \tag{2.5}\]
_Then the walk is self-avoiding._
Proof.: Assume the walk forms an \(n\)-cycle (i.e., a closed walk \((v_{0},v_{1},\ldots,v_{n})\) with \(v_{0}=v_{n}\)). Under (2.5), each vertex of the cycle contributes at least \(\pi\) to the left-hand side of (2.4), so the left-hand side is at least \(n\pi>(n-2)\pi\); hence (2.4) cannot hold. The contradiction implies the lemma.
Lemma 2.5 has the following straightforward corollaries.
**Corollary 2.6**.: _Let \(G=(V,E)\) be a planar graph properly embedded into \(\mathbb{R}^{2}\) such that each vertex degree is at least 6 and each face degree is at least 3. For each \(v\in V\), label the incident edges of \(v\) by 0, 1,..., \(\deg(v)-1\) in counterclockwise order. (Note that one edge may have two different labels by looking at its two distinct endpoints). Consider the following walk on \(G\):_
* _If the walk visits vertex_ \(v_{n}\) _at the_ \(n\)_th step, and the edge visited immediately before_ \(v_{n}\) _is the edge labelled by_ \(a\)_, then the edge visited immediately after_ \(v_{n}\) _is the edge labelled by_ \([(a+3)\mod\deg(v)]\)_._
_Then the walk is self-avoiding._
_Similarly, the following walk is also self-avoiding:_
* _If the walk visits vertex_ \(v_{n}\) _at the_ \(n\)_th step, and the edge visited immediately before_ \(v_{n}\) _is the edge labelled by_ \(a\)_, then the edge visited immediately after_ \(v_{n}\) _is the edge labelled by_ \([(a-3)\mod\deg(v)]\)_._
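To make the walk rules of Corollaries 2.6 and 2.7 concrete, here is a minimal sketch (ours, not part of the paper) in Python. It assumes the proper embedding is encoded as a rotation system, i.e. a dictionary `rotation` mapping each vertex to the list of its neighbours in counterclockwise order; `shift=3` gives the first walk of Corollary 2.6, `shift=-3` the second, and `shift=2` the walk of Corollary 2.7. All names below are our own.

```python
def next_vertex(rotation, prev, cur, shift=3):
    # rotation[v] lists the neighbours of v in counterclockwise order,
    # so the index of a neighbour in this list is the label of the
    # corresponding incident edge at v.
    nbrs = rotation[cur]
    a = nbrs.index(prev)  # label at `cur` of the edge we arrived by
    return nbrs[(a + shift) % len(nbrs)]  # leave via label (a + shift) mod deg(cur)

def walk(rotation, v0, v1, steps, shift=3):
    # Follow the walk that begins with the edge (v0, v1) for `steps` more steps.
    path = [v0, v1]
    for _ in range(steps):
        path.append(next_vertex(rotation, path[-2], path[-1], shift))
    return path
```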
**Corollary 2.7**.: _Let \(G=(V,E)\) be a planar graph, properly embedded into \(\mathbb{R}^{2}\) such that each vertex degree is at least 4 and each face degree is at least 4. For each \(v\in V\), label the incident edges of \(v\) by 0, 1,..., \(\deg(v)-1\) in counterclockwise order. (Note that one edge may have two different labels by looking at its two distinct endpoints). Consider the following walk on \(G\):_
* _If the walk visits vertex_ \(v_{n}\) _at the_ \(n\)_th step, and the edge visited immediately before_ \(v_{n}\) _is the edge labelled by_ \(a\)_, then the edge visited immediately after_ \(v_{n}\) _is the edge labelled by_ \((a+2)\mod\deg(v)\)_._
_Then the walk is self-avoiding._
**Lemma 2.8**.: _Let \(G=(V,E)\) be an infinite, connected, planar graph, properly embedded into \(\mathbb{R}^{2}\) such that the minimal vertex degree is at least 7. Then each cycle consisting of 3 edges bounds a degree-3 face._
Proof.: Let \(C\) be a cycle consisting of 3 edges and 3 vertices \(a,b,c\in V\). It suffices to show that the open region bounded by \(C\) contains no vertices and no edges of \(G\).
Assume that in \(R_{C}\) there are vertices or edges of \(G\). Since the graph is connected, at least one of \(a,b,c\) is incident to at least one edge in \(R_{C}\) other than \(\langle a,b\rangle\), \(\langle a,c\rangle\) and \(\langle b,c\rangle\). Since each face has degree at least 3, we have
\[\sum_{z\in C}\sum_{f\in R_{C}:f\sim z}\frac{|f|-2}{|f|}\pi\geq\frac{2\pi}{3}+ \frac{\pi}{3}+\frac{\pi}{3}>\pi\]
which is a contradiction to (2.4).
**Lemma 2.9**.: _Let \(G=(V,E)\) be an infinite, connected, planar graph, properly embedded into \(\mathbb{R}^{2}\) such that one of the following conditions holds_
1. _the minimal vertex degree is at least 7._
2. _the minimal vertex degree is at least 5 and the minimal face degree is at least 4._
_Then the boundary of every finite face is a cycle._
Proof.: Let \(f\) be an arbitrary finite face of \(G\). Let \(U_{f}\) be the unbounded component of \(\mathbb{R}^{2}\setminus\partial f\). Then \(C:=\partial U_{f}\) is a cycle. If \(\partial f\neq\partial U_{f}\), then \(|f|>|C|\). Since every vertex of \(C\) lies on the boundary of \(f\) and \(f\subseteq R_{C}\), we have
\[\sum_{z\in C}\sum_{g\in R_{C}:g\sim z}\frac{|g|-2}{|g|}\pi\geq\frac{|C|(|f|-2)\pi}{|f|}>(|C|-2)\pi,\]
where the last inequality uses \(|f|>|C|\); this contradicts (2.4).
Given Lemma 2.9, the degree \(|f|\) of a finite face \(f\) is equal to the number of vertices on the boundary of the face, which is the same as the number of edges on the boundary of the face.
**Lemma 2.10**.: _Let \(G=(V,E)\) be an infinite, connected, planar graph, properly embedded into \(\mathbb{R}^{2}\) such that one of the following 2 conditions holds:_
1. _the minimal vertex degree is at least 7; or_
2. _the minimal vertex degree is at least 5; and the minimal face degree is at least 4._
_Then there exists a tree \(T=(V_{T},E_{T})\) embedded into \(G\) such that_
* _the root vertex of_ \(T\) _has degree 2; all the other vertices of_ \(T\) _have degree 3 or 4;_
* \(V_{T}\subset V\) _and_ \(E_{T}\subset E\)_._
See Figure 2.1 for an example of a tree embedded into the order-7 triangular tiling of the hyperbolic plane (i.e., a vertex-transitive graph \(G\) drawn in \(\mathbb{H}^{2}\) such that each vertex has degree 7 and each face has degree 3) satisfying the conditions of Lemma 2.10.
Proof of Lemma 2.10 under condition (1).: Let \(G\) be a graph satisfying condition (1) of Lemma 2.10. We will find a tree as a subgraph of \(G\) recursively.
Let \(v\in V\). Let \(v_{0}\), \(v_{1}\) be two vertices adjacent to \(v\) in \(G\) such that \(v\), \(v_{0}\), \(v_{1}\) share a face. Starting from \(v,v_{0}\) construct a walk
\[\pi_{0}:=v,v_{0},v_{00},v_{000},\ldots,\]
Starting from \(v,v_{1}\) construct a walk
\[\pi_{1}:=v,v_{1},v_{11},v_{111},\ldots,\]
such that
* moving along \(v_{0},v,v_{1}\) in order, the face shared by \(v_{0},v,v_{1}\) is on the right; and
* moving along the walk \(\pi_{0}\) starting from \(v\), at each vertex \(v_{0^{k}}\) (\(k\geq 1\)), there are exactly 3 incident faces on the right of \(\pi_{0}\); and
* moving along the walk \(\pi_{1}\) starting from \(v\), at each vertex \(v_{1^{k}}\) (\(k\geq 1\)), there are exactly 3 incident faces on the left of \(\pi_{1}\).
By Corollary 2.6, both \(\pi_{0}\) and \(\pi_{1}\) are infinite and self-avoiding.
Let
\[\pi_{1,1}:=\pi_{1}\setminus\{v\}=v_{1},v_{11},v_{111},\ldots\]
There exists \(v_{01}\in V\) such that
* \(v_{01}\) is adjacent to \(v_{0}\); and
* \(v_{0},v_{00},v_{01}\) share a face on the left of the walk \(\pi_{0}\).
Similarly, there exist \(v_{10},v_{1,\frac{1}{2}}\in V\) such that
* both \(v_{10}\) and \(v_{1,\frac{1}{2}}\) are adjacent to \(v_{1}\); and
* \(v_{1},v_{1,\frac{1}{2}},v_{11}\) share a face on the right of the walk \(\pi_{1}\); and
* \(v_{1},v_{10},v_{1,\frac{1}{2}}\) share a face; moving along \(v_{10},v_{1},v_{1,\frac{1}{2}}\) in order, the face is on the right.
Note that \(v_{01}\neq v\) and \(v_{10}\neq v\), \(v_{1,\frac{1}{2}}\neq v\) since each vertex in \(G\) has degree at least 7.
Starting from \(v_{0},v_{01}\), construct a walk
\[\pi_{01}:=v_{0},v_{01},v_{011},v_{0111},\ldots\]
Figure 2.1. Tree embedding in an order-7 triangular tiling of the hyperbolic plane: the order-7 triangular tiling is represented by black lines, and the embedded tree is represented by red lines.
Starting from \(v_{1},v_{10}\), construct a walk
\[\pi_{10}:=v_{1},v_{10},v_{100},v_{1000},\ldots\]
Assume that
* moving along \(v_{00},v_{0},v_{01}\) in order, the face shared by \(v_{00},v_{0},v_{01}\) is on the right; and
* moving along \(v_{1,\frac{1}{2}},v_{1},v_{11}\) in order, the face shared by these vertices is on the right; and
* moving along the walk \(\pi_{01}\) starting from \(v_{0}\), at each vertex \(v_{01^{k}}\) (\(k\geq 1\)), there are exactly 3 incident faces on the left of \(\pi_{01}\); and
* moving along the walk \(\pi_{10}\) starting from \(v_{1}\), at each vertex \(v_{10^{k}}\) (\(k\geq 1\)), there are exactly 3 incident faces on the right of \(\pi_{10}\).
By Corollary 2.6, both walks are infinite and self-avoiding. Furthermore, let
\[\tilde{\pi}_{01}:=v,\pi_{01};\qquad\tilde{\pi}_{10}:=v,\pi_{10}\]
By Corollary 2.6, \(\tilde{\pi}_{01}\) is self-avoiding.
We claim that \(\tilde{\pi}_{10}\) is self-avoiding. Assume \(\tilde{\pi}_{10}\) is not self-avoiding; we shall obtain a contradiction. Since \(\pi_{10}\) is self-avoiding, if \(\tilde{\pi}_{10}\) is not self-avoiding, we can find a polygon \(P_{0}\) consisting of vertices of \(\tilde{\pi}_{10}\) including \(v\). Assume the polygon \(P_{0}\) has exactly \(m\) vertices on the boundary \(\partial P_{0}\), denoted by \(w_{1},\ldots,w_{m}\); then (2.2) must hold with \(C\) replaced by \(\partial P_{0}\), \(n\) replaced by \(m\) and \(v_{i}\) replaced by \(w_{i}\).
Under the assumption that each vertex has degree at least 7 and each face has degree at least 3, \(\kappa(z)<0\); and
\[\frac{|f|-2}{|f|}\geq\frac{1}{3} \tag{2.6}\]
Note that each vertex along \(\partial P_{0}\) except \(v\) and \(v_{1}\) is incident to at least 3 faces in \(P_{0}\); \(v\) is incident to at least 1 face in \(P_{0}\) and \(v_{1}\) is incident to at least 2 faces in \(P_{0}\). Then we have
\[\sum_{z\in\partial P_{0}}\sum_{f\in P_{0}:f\sim z}\frac{|f|-2}{|f|}\pi\geq(m-2 )\pi+\sum_{f\in P_{0}:f\sim v,\text{or }f\sim v_{1}}\frac{|f|-2}{|f|}\pi\geq(m-2)\pi+\pi>(m-2)\pi,\]
which contradicts (2.4), and therefore \(\tilde{\pi}_{10}\) is self-avoiding.
We claim that \(\pi_{01}\) and \(\pi_{10}\) never intersect each other. Otherwise let \(w\in V\cap\pi_{01}\cap\pi_{10}\) be the first intersection vertex of \(\pi_{01}\) and \(\pi_{10}\), where \(\pi_{01}\) and \(\pi_{10}\) start from \(v_{0}\) and \(v_{1}\), respectively. Then the portion of \(\pi_{01}\) between \(v_{0}\) and \(w\), the portion of \(\pi_{10}\) between \(v_{1}\) and \(w\) and the edges \(v_{0}v\), \(v_{1}v\) form a polygon \(P\) in the plane. Assume the polygon \(P\) has exactly \(n\) vertices on the boundary; then (2.2) must hold with \(C\) replaced by \(\partial P\). Under the assumption that \(v_{0}\) and \(v_{1}\) have degrees at least 7, \(v_{0}\) is incident to at least 3 faces in \(P\) and \(v_{1}\) is incident to at least 2 faces in \(P\).
Under the assumption that each vertex has degree at least 7 and each face has degree at least 3, \(\kappa(z)<0\); we have (2.6).
Note that each vertex along \(\partial P\) except \(v\), \(w\) and \(v_{1}\) is incident to at least \(3\) faces in \(P\); \(v\) and \(w\) are incident to at least \(1\) face in \(P\) and \(v_{1}\) is incident to at least \(2\) faces in \(P\). Then we have
\[\sum_{z\in\partial P}\sum_{f\in P:f\sim z}\frac{|f|-2}{|f|}\pi\geq(n-3)\pi+\sum _{f\in P:f\sim v,\text{or }f\sim w,\text{ or }f\sim v_{1}}\frac{|f|-2}{|f|}\pi\geq(n-3)\pi+\frac{4\pi}{3}>(n-2)\pi.\]
Hence (2.4) never holds, and therefore \(\pi_{01}\) and \(\pi_{10}\) are disjoint. We repeat the same construction with \((v_{0},v,v_{1})\) replaced by \((v_{00},v_{0},v_{01})\).
Starting from \(v_{1},v_{1,\frac{1}{2}}\), we construct a walk
\[\pi_{1,\frac{1}{2}}:=v_{1},v_{1,\frac{1}{2}},v_{1,\frac{1}{2},1},v_{1,\frac{1 }{2},1,1},\ldots\]
such that
* moving along the walk \(\pi_{1,\frac{1}{2}}\) starting from \(v_{1}\), at each vertex \(v_{1,\frac{1}{2},1^{k}}\) (\(k\geq 0\)), there are exactly \(3\) incident faces on the left.
Let
\[\tilde{\pi}_{1,\frac{1}{2}}:=v,\pi_{1,\frac{1}{2}}\] \[\pi_{1,\frac{1}{2},1}:=\pi_{1,\frac{1}{2}}\setminus\{v_{1}\}=v_{1,\frac{1}{2}},v_{1,\frac{1}{2},1},v_{1,\frac{1}{2},1,1},\ldots\]
By Corollary 2.6, \(\tilde{\pi}_{1,\frac{1}{2}}\) is infinite and self-avoiding. Moreover, using Lemma 2.5, one can prove that
* The intersection of any two paths in \(\pi_{1},\pi_{10},\pi_{1,\frac{1}{2}}\) is \(\{v_{1}\}\).
* \(\pi_{1,\frac{1}{2}}\cap\pi_{0}=\emptyset\) and \(\pi_{1,\frac{1}{2}}\cap\pi_{01}=\emptyset\)
Let \(v\) be the level-\(0\) vertex, \(v_{0},v_{1}\) be level-\(1\) vertices, and \(v_{00},v_{01},v_{10},v_{1,\frac{1}{2}},v_{11}\) be the level-\(2\) vertices. In general, for \(k\geq 2\), define the set \(S_{k}\) of level-\(k\) vertices as follows
\[S_{k}:=\left\{v_{b}:b=(b_{1},\ldots,b_{k})\in\left\{0,\frac{1}{2},1\right\}^ {k};\text{if }b_{j}=\frac{1}{2},\text{ then }j\geq 2,\text{ and }b_{j-1}=1.\right\}. \tag{2.7}\]
Assume we have defined all the level-\(k\) vertices. For each \(v_{b}\in S_{k}\), the following cases might occur
* \(b_{k}=0\): in this case we define \(2\) paths \(\pi_{b,0}\), \(\pi_{b,1}\) in the same way as \(\pi_{0}\) and \(\pi_{1}\) were defined, with \(v\) replaced by \(v_{b}\).
* \(b_{k}=1\): in this case we define \(3\) paths \(\pi_{b,0}\), \(\pi_{b,\frac{1}{2}}\), \(\pi_{b,1}\) in the same way as \(\pi_{10}\), \(\pi_{1,\frac{1}{2}}\) and \(\pi_{11}\) were defined, with \(v_{1}\) replaced by \(v_{b}\).
* \(b_{k}=\frac{1}{2}\): in this case we define \(2\) paths \(\pi_{b,0}\), \(\pi_{b,1}\) in the same way as \(\pi_{0}\) and \(\pi_{1}\) were defined, with \(v\) replaced by \(v_{b}\).
Then we find a tree \(T\), as a subgraph of \(G\), whose vertex set consists of \(\{v,v_{0},v_{1}\}\cup\bigcup_{k\geq 2}S_{k}\) and whose edge set consists of all the edges along the paths \(\pi_{b}\), where \(b=(b_{1},\ldots,b_{k})\in\left\{0,\frac{1}{2},1\right\}^{k}\) for some \(k\geq 1\) and \(b_{j}=\frac{1}{2}\) implies \(j\geq 2\) and \(b_{j-1}=1\). Then Part (1) of Lemma 2.10 follows. \(\square\)
The proof of Lemma 2.10 under condition (2) is similar; see Section A.
**Proposition 2.11**.: _Let \(G=(V,E)\) be an infinite, connected, planar graph, properly embedded into \(\mathbb{R}^{2}\). Suppose that one of the following 2 conditions holds:_
1. _the minimal vertex degree is at least 7; or_
2. _the minimal vertex degree is at least 5; and the minimal face degree is at least 4;_
_Then the i.i.d. Bernoulli\((\frac{1}{2})\) site percolation on \(G\) a.s. has infinitely many infinite 1-clusters and infinitely many infinite 0-clusters._
Proof.: Let \(T\) be the tree constructed as in Lemma 2.10.
Then
\[p_{c}^{site}(G)\leq p_{c}^{site}(T)\leq\frac{1}{2^{\frac{2}{3}}3^{\frac{1}{3}}} <\frac{1}{2}.\]
See [23] for the computations of the critical percolation probabilities on trees.
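As a side computation (ours, not needed for the bound above), the level sizes of the tree \(T\) obey a simple recursion: every vertex has children labelled \(0\) and \(1\), and a vertex labelled \(1\) has an extra child labelled \(\frac{1}{2}\), so the number of level-\((k-1)\) vertices labelled \(1\) equals \(|S_{k-2}|\) and

\[|S_{k}|=2|S_{k-1}|+|S_{k-2}|,\qquad|S_{1}|=2,\ |S_{2}|=5.\]

The growth rate is therefore \(1+\sqrt{2}\approx 2.414\); since \(T\) has only finitely many cone types, its branching number equals this growth rate, and the tree criterion of [23] then gives \(p_{c}^{site}(T)=\sqrt{2}-1\approx 0.414\), consistent with the displayed bound.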
Now we have an infinite tree embedded in \(G\) as a subgraph. Let \(r\) be an arbitrary finite length binary word. Suppose that the vertices \(v_{r}\), \(v_{r0}\), \(v_{r00}\), \(v_{r1}\), \(v_{r10}\) are closed, and each of \(v_{r00}\), \(v_{r10}\) percolates (in closed vertices) in the subtree below it. This implies that the open clusters intersecting the subgraph below \(v_{r01}\) will be disjoint from those below \(v_{r11}\), which gives non-uniqueness, because each of these subgraphs is sure to contain infinite open clusters; see Figure 2.2. By the Borel–Cantelli lemma, such \(r\) occurs infinitely often with probability 1.
## 3. Characterization of Critical Percolation Probability
In a powerful refinement of a classical argument of Hammersley [15], Duminil-Copin and Tassion [9] showed, for transitive \(G\), that the critical value \(p_{c}(G)\) may be characterized in terms of the mean number of points on the surface of a box that are connected to its root.
Figure 2.2. Infinite open clusters in the tree rooted at \(v_{r01}\) are separated from the infinite open clusters in the tree rooted at \(v_{r11}\) by the infinite closed cluster occupying \(v_{r}\), \(v_{r0},v_{r1}\), \(v_{r00},v_{r10}\).
This work is extended in this section to general locally finite graphs without the transitive or quasi-transitive assumptions.
Let \(G=(V,E)\) be a graph. For each \(p\in(0,1)\), let \(\mathbb{P}_{p}\) be the probability measure of the i.i.d. Bernoulli\((p)\) site percolation on \(G\). For each \(S\subset V\), let \(S^{\circ}\) consist of all the interior vertices of \(S\), i.e., vertices all of whose neighbors are in \(S\) as well. For each \(S\subseteq V\), \(v\in S\), define
\[\varphi_{p}^{v}(S):=\begin{cases}\sum_{y\in S:\partial_{V}y\cap S^{c}\neq\emptyset}\mathbb{P}_{p}(v\stackrel{S^{\circ}}{\longleftrightarrow}\partial_{V}y)&\text{if }v\in S^{\circ}\\ 1&\text{if }v\in S\setminus S^{\circ}\end{cases}\]
where
* \(v\stackrel{S^{\circ}}{\longleftrightarrow}x\) is the event that the vertex \(v\) is joined to the vertex \(x\) by an open path visiting only interior vertices of \(S\);
* let \(A\subseteq V\); \(v\stackrel{{ S^{\circ}}}{{\longleftrightarrow}}A\) if and only if there exists \(x\in A\) such that \(v\stackrel{{ S^{\circ}}}{{\longleftrightarrow}}x\);
* \(\partial_{V}y\) consists of all the vertices adjacent to \(y\).
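As a simple illustration (ours, under the extra assumption that every neighbour of \(v\) has itself a neighbour outside \(S\)): take \(S=\{v\}\cup\partial_{V}v\), so that \(S^{\circ}=\{v\}\). For each \(y\in\partial_{V}v\), the only candidate open path from \(v\) to \(\partial_{V}y\) inside \(S^{\circ}\) is the single vertex \(v\) itself, whence

\[\varphi_{p}^{v}(S)=\sum_{y\in\partial_{V}v}\mathbb{P}_{p}(v\text{ is open})=p\deg(v).\]

In particular, \(\varphi_{p}^{v}(S)\leq 1-\epsilon_{0}\) for all \(v\) once \(p\leq(1-\epsilon_{0})/\sup_{v\in V}\deg(v)\), which recovers the elementary lower bound \(p_{c}^{site}(G)\geq 1/\sup_{v\in V}\deg(v)\) from Lemma 3.1 below.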
**Lemma 3.1**.: _Let \(G=(V,E)\) be an infinite, connected, locally finite graph. The critical site percolation probability on \(G\) is given by_
\[\tilde{p}_{c}=\sup\{p\geq 0:\exists\epsilon_{0}>0,\mathrm{s.t.}\forall v\in V, \exists S_{v}\subseteq V\text{ satisfying }|S_{v}|<\infty\text{ and }v\in S_{v}^{\circ},\varphi_{p}^{v}(S_{v})\leq 1- \epsilon_{0}\}.\]
_Moreover_
1. _If_ \(p>\tilde{p}_{c}\)_, a.s. there exists an infinite 1-cluster; moreover, for any_ \(\epsilon>0\) _there exists a vertex_ \(w\)_, satisfying_ (3.1) \[\mathbb{P}_{p}(w\leftrightarrow\infty)\geq 1-\left(\frac{1-p}{1-\tilde{p}_{c}} \right)^{1-\epsilon}\]
2. _If_ \(p<\tilde{p}_{c}\)_, then for any vertex_ \(v\in V\)__ (3.2) \[\mathbb{P}_{p}(v\leftrightarrow\infty)=0.\]
_In particular, (1) and (2) imply that \(p_{c}^{site}(G)=\tilde{p}_{c}\)._
**Lemma 3.2**.: _Let \(G=(V,E)\) be an infinite, connected, locally finite graph. For all \(p>0\) and \(\Lambda\subset V\) finite,_
\[\frac{d}{dp}\mathbb{P}_{p}(v\leftrightarrow\Lambda^{c})\geq\frac{1}{1-p} \left[\inf_{S:v\in S,|S|<\infty}\varphi_{p}^{v}(S)\right](1-\mathbb{P}_{p}(v \leftrightarrow\Lambda^{c}))\]
Proof.: Define the random subset \(\mathcal{H}\) of \(\Lambda\):
\[\mathcal{H}:=\{x\in\Lambda:x\nleftrightarrow\Lambda^{c}\}\]
Recall that for each \(y\in V\), if \(\omega^{y}\in\{v\leftrightarrow\Lambda^{c}\}\) and \(\omega_{y}\notin\{v\leftrightarrow\Lambda^{c}\}\), then \(y\) is pivotal for the event \(v\leftrightarrow\Lambda^{c}\). Here for each \(\omega\in\{0,1\}^{V}\),
\[\omega^{y}(u):=\begin{cases}\omega(u);&\text{If }u\neq y;\\ 1&\text{If }u=y.\end{cases}\qquad\omega_{y}(u):=\begin{cases}\omega(u);&\text{If }u \neq y;\\ 0&\text{If }u=y.\end{cases}\]
By Russo's formula ([25])
\[\frac{d}{dp}\mathbb{P}_{p}[v\leftrightarrow\Lambda^{c}] =\sum_{y\in V}\mathbb{P}_{p}(y\text{ is pivotal for }v\leftrightarrow\Lambda^{c})\] \[\geq\frac{1}{1-p}\sum_{y\in V}\mathbb{P}_{p}(y\text{ is pivotal for }v \leftrightarrow\Lambda^{c}\text{ and }v\nleftrightarrow\Lambda^{c})\] \[\geq\frac{1}{1-p}\sum_{S:v\in S}\sum_{y\in V}\mathbb{P}_{p}(y \text{ is pivotal for }v\leftrightarrow\Lambda^{c}\text{ and }\mathcal{H}=S) \tag{3.3}\]
Note that if \(v\in S^{\circ}\) the event that \(y\) is pivotal for \(v\leftrightarrow\Lambda^{c}\) and \(\mathcal{H}=S\) is nonempty only if
* \(y\in S\) and \(y\) is adjacent to a vertex \(x\notin S\); and
* \(y\) is closed; and
* \(y\) is adjacent to a vertex \(z\), s.t. \(z\xleftrightarrow{S^{\circ}}v\)
Then for each \(v\in S^{\circ}\), \(y\in S\) and \(\partial_{V}y\cap S^{c}\neq\emptyset\) we have
\[\mathbb{P}_{p}(y\text{ is pivotal for }v\leftrightarrow\Lambda^{c}\text{ and }\mathcal{H}=S)=\mathbb{P}_{p}(\partial_{V}y\xleftrightarrow{S^{\circ}}v)\mathbb{P}_{p}(\mathcal{H}=S); \tag{3.4}\]
since the event \(\partial_{V}y\xleftrightarrow{S^{\circ}}v\) depends only on vertices in \(S\) all of whose neighbors are in \(S\), while the event \(\mathcal{H}=S\) depends only on vertices in \(S\) that have at least one neighbor not in \(S\).
Moreover, if \(v\in S\setminus S^{\circ}\) the event that \(y\) is pivotal for \(v\leftrightarrow\Lambda^{c}\) and \(\mathcal{H}=S\) is nonempty only if \(y=v\). In this case for each \(v\in S\setminus S^{\circ}\), \(y\in S\) and \(\partial_{V}y\cap S^{c}\neq\emptyset\)
\[\mathbb{P}_{p}(y\text{ is pivotal for }v\leftrightarrow\Lambda^{c}\text{ and }\mathcal{H}=S)=\mathbf{1}_{y=v}\mathbb{P}_{p}(\mathcal{H}=S). \tag{3.5}\]
Plugging (3.4) and (3.5) into (3.3), we obtain
\[\frac{d}{dp}\mathbb{P}_{p}[v\leftrightarrow\Lambda^{c}] \geq\frac{1}{1-p}\sum_{S:v\in S^{\circ}}\sum_{y\in S:\partial_{V} y\cap S^{c}\neq\emptyset}\mathbb{P}_{p}(\partial_{V}y\xleftrightarrow{S^{ \circ}}v)\mathbb{P}_{p}(\mathcal{H}=S)\] \[+\frac{1}{1-p}\sum_{S:v\in S\setminus S^{\circ}}\mathbb{P}_{p}( \mathcal{H}=S)\] \[\geq\frac{1}{1-p}\left[\inf_{S:v\in S,|S|<\infty}\varphi_{p}^{v} (S)\right]\sum_{S:v\in S}\mathbb{P}_{p}(\mathcal{H}=S).\]
Then the lemma follows from the fact that
\[\sum_{S:v\in S}\mathbb{P}_{p}(\mathcal{H}=S)=\mathbb{P}_{p}(v\nleftrightarrow \Lambda^{c})=1-\mathbb{P}_{p}(v\leftrightarrow\Lambda^{c})\]
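As an aside (ours, purely illustrative), Russo's formula for an increasing event depending on finitely many sites can be sanity-checked by exact enumeration on a small graph. The sketch below compares a numerical derivative of \(\mathbb{P}_{p}(v\leftrightarrow\Lambda^{c})\) with the sum of pivotal probabilities on a \(3\times 3\) patch of \(\mathbb{Z}^{2}\); all names are our own.

```python
from itertools import product

# Event A = {v0 is joined to the patch boundary by an open path};
# A is increasing and depends on finitely many sites, so Russo's
# formula  d/dp P_p(A) = sum_x P_p(x is pivotal for A)  applies.
verts = [(x, y) for x in range(3) for y in range(3)]
nbrs = {u: [w for w in verts
            if abs(u[0] - w[0]) + abs(u[1] - w[1]) == 1] for u in verts}
v0 = (1, 1)
boundary = [u for u in verts if u[0] in (0, 2) or u[1] in (0, 2)]

def happens(open_set):
    if v0 not in open_set:
        return False
    stack, seen = [v0], {v0}
    while stack:
        u = stack.pop()
        if u in boundary:
            return True
        for w in nbrs[u]:
            if w in open_set and w not in seen:
                seen.add(w)
                stack.append(w)
    return False

def prob(p):  # exact probability of A by enumerating all 2^9 configurations
    return sum(p ** sum(bits) * (1 - p) ** (len(verts) - sum(bits))
               for bits in product([0, 1], repeat=len(verts))
               if happens({u for u, b in zip(verts, bits) if b}))

def pivotal_sum(p):  # right-hand side of Russo's formula
    total = 0.0
    for bits in product([0, 1], repeat=len(verts)):
        open_set = {u for u, b in zip(verts, bits) if b}
        weight = p ** sum(bits) * (1 - p) ** (len(verts) - sum(bits))
        for x in verts:
            if happens(open_set | {x}) and not happens(open_set - {x}):
                total += weight  # x is pivotal in this configuration
    return total

p, h = 0.4, 1e-5
print((prob(p + h) - prob(p - h)) / (2 * h))  # numerical derivative
print(pivotal_sum(p))                          # should agree closely
```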
**Lemma 3.3**.: _Let \(G=(V,E)\) be an infinite, connected, locally finite graph. Let \(p>0\), \(u\in S\subset A\) and \(B\cap S=\emptyset\). Then_
* _If_ \(u\in S^{\circ}\)_,_ \[\mathbb{P}_{p}(u\xleftrightarrow{A}B)\leq\sum_{y\in S:\partial_{V}y\cap S^{c}\neq\emptyset}\mathbb{P}_{p}(u\xleftrightarrow{S^{\circ}}\partial_{V}y)\mathbb{P}_{p}(y\xleftrightarrow{A}B)\]
* _If_ \(u\in S\setminus S^{\circ}\)_,_ \[\mathbb{P}_{p}(u\xleftrightarrow{A}B)\leq\sum_{y\in S:\partial_{V}y\cap S^{c}\neq\emptyset}\mathbf{1}_{y=u}\mathbb{P}_{p}(y\xleftrightarrow{A}B)\]
Proof.: The conclusion is straightforward when \(u\in S\setminus S^{\circ}\). It remains to prove the case when \(u\in S^{\circ}\). Let \(u\in S^{\circ}\) and assume that the event \(u\xleftrightarrow{A}B\) holds. Consider an open path \(\{u_{j}\}_{0\leq j\leq K}\) from \(u\) to \(B\). Since \(B\cap S=\emptyset\), one can find the least \(k\) such that \(u_{k+1}\notin S^{\circ}\). Then the following events occur disjointly:
* \(u\) is connected to \(\partial_{V}u_{k+1}\) by an open path in \(S^{\circ}\);
* \(u_{k+1}\) is connected to \(B\) in \(A\).
Then the lemma follows from the BK inequality.
### Proof of Lemma 3.1(1)
Let \(p>\tilde{p}_{c}\). Define
\[f_{v}(p):=\mathbb{P}_{p}[v\leftrightarrow\Lambda^{c}] \tag{3.6}\]
By Lemma 3.2, for any \(\epsilon>0\), there exists \(w\in V\) such that
\[\frac{f_{w}^{\prime}(p)}{1-f_{w}(p)}\geq\frac{1-\epsilon}{1-p},\ \forall p>\tilde{p}_{c}.\]
Integrating both the left hand side and right hand side from \(p_{1}>\tilde{p}_{c}\) to \(p\), using the fact that \(f_{w}\left(p_{1}\right)\geq 0\), and letting \(p_{1}\to\tilde{p}_{c}\) we obtain that (3.1) holds. Then a.s. there exists an infinite 1-cluster by the Kolmogorov 0-1 law.
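Spelled out (with \(f_{w}\) implicitly depending on \(\Lambda\)): for \(\tilde{p}_{c}<p_{1}<p\),
\[\log\frac{1-f_{w}(p_{1})}{1-f_{w}(p)}=\int_{p_{1}}^{p}\frac{f_{w}^{\prime}(t)}{1-f_{w}(t)}\,dt\geq(1-\epsilon)\int_{p_{1}}^{p}\frac{dt}{1-t}=(1-\epsilon)\log\frac{1-p_{1}}{1-p},\]
so that, using \(1-f_{w}(p_{1})\leq 1\),
\[f_{w}(p)\geq 1-\left(\frac{1-p}{1-p_{1}}\right)^{1-\epsilon}.\]
Letting \(p_{1}\to\tilde{p}_{c}\) and then letting \(\Lambda\) exhaust \(V\) yields (3.1).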
### Proof of Lemma 3.1(2)
We shall use \(p_{c}\) to denote \(p_{c}^{site}(G)\) when there is no confusion. Note that Part (1) implies that \(p_{c}\leq\tilde{p}_{c}\). If \(p_{c}<\tilde{p}_{c}\), it suffices to show that there exists \(p^{\prime}\in(p_{c},\tilde{p}_{c})\) such that
\[\mathbb{P}_{p^{\prime}}(v\leftrightarrow\infty)=0,\ \forall v\in V;\]
which contradicts the definition of \(p_{c}\); then \(p_{c}=\tilde{p}_{c}\) and (3.2) holds.
From the definition of \(\tilde{p}_{c}\), we see that if \(p_{c}<\tilde{p}_{c}\), then there exist \(p^{\prime}\in(p_{c},\tilde{p}_{c})\) and \(\epsilon_{0}>0\) such that for all \(v\in V\) there exists a finite \(S_{v}\subseteq V\) satisfying \(v\in S_{v}^{\circ}\) and
\[\varphi_{p^{\prime}}^{v}(S_{v})\leq 1-\epsilon_{0}.\]
By Lemma 3.3,
\[\mathbb{P}_{p^{\prime}}(v\leftrightarrow\infty)\leq\sum_{y\in S_{v}:\partial_{V}y\cap S_{v}^{c}\neq\emptyset}\mathbb{P}_{p^{\prime}}(v\xleftrightarrow{S_{v}^{\circ}}\partial_{V}y)\mathbb{P}_{p^{\prime}}(y\leftrightarrow\infty)\]
Similarly, there exists a finite \(S_{y}\subseteq V\) satisfying \(y\in S_{y}^{\circ}\) and
\[\varphi_{p^{\prime}}^{y}(S_{y})\leq 1-\epsilon_{0}.\]
Again by Lemma 3.3,
\[\mathbb{P}_{p^{\prime}}(y\leftrightarrow\infty)\leq\sum_{y_{1}\in S_{y}: \partial_{V}y_{1}\cap S_{y}^{c}\neq\emptyset}\mathbb{P}_{p^{\prime}}(y\stackrel{{ S_{y}^{\circ}}}{{\longleftrightarrow}} \partial_{V}y_{1})\mathbb{P}_{p^{\prime}}(y_{1}\leftrightarrow\infty)\]
Since the graph is locally finite, the process can continue for infinitely many steps. Hence we have
\[\mathbb{P}_{p^{\prime}}(v\leftrightarrow\infty)\leq\lim_{n\to \infty}(1-\epsilon_{0})^{n}=0.\]
Then the lemma follows.
**Lemma 3.4**.: _Let \(G=(V,E)\) be an infinite, connected, locally finite graph. For each \(p>p_{c}^{site}(G)\) and \(\epsilon>0\), there exist infinitely many vertices in \(V\) satisfying (3.1)._
Proof.: When \(p>p_{c}^{site}(G)\), \(\tilde{p}:=\frac{p+p_{c}^{site}(G)}{2}>p_{c}^{site}(G)\). By Lemma 3.1, for any \(\epsilon>0\), there exists \(v\in V\) such that for all finite \(S_{v}\subseteq V\) satisfying \(v\in S_{v}^{\circ}\), \(\varphi_{\tilde{p}}^{v}(S_{v})\geq 1-\epsilon\).
For each \(\epsilon>0\), and \(\tilde{p}>p_{c}^{site}\), define
\[V_{\tilde{p},\epsilon}:=\{v\in V:\forall S_{v}\subset V\text{ finite and }v\in S_{v}^{\circ},\varphi_{\tilde{p}}^{v}(S_{v})\geq 1-\epsilon\} \tag{3.7}\]
Assume \(|V_{\tilde{p},\epsilon}|<\infty\); we shall obtain a contradiction. When \(|V_{\tilde{p},\epsilon}|<\infty\), there exist \(v_{0}\in V\) and \(N\in\mathbb{N}\) such that \(V_{\tilde{p},\epsilon}\subseteq B(v_{0},N)\). Let
\[W:=V\setminus B(v_{0},N)\]
Then by Lemma 3.3, for any finite \(S\subset V\) and \(x\in S^{\circ}\) with \(x\notin V_{\tilde{p},\epsilon}\), we have
\[\mathbb{P}_{\tilde{p}}(x\stackrel{{ W}}{{\longleftrightarrow}}\infty)\leq\sum_{y\in S:\partial_{V}y\cap S^{c}\neq\emptyset}\mathbb{P}_{\tilde{p}}(x\stackrel{{ S^{\circ}}}{{\longleftrightarrow}}\partial_{V}y)\mathbb{P}_{\tilde{p}}(y\stackrel{{ W}}{{\longleftrightarrow}}\infty)\]
Since \(x\notin V_{\tilde{p},\epsilon}\), there exists a finite \(S_{x}\subset V\) with \(x\in S_{x}^{\circ}\) and \(\varphi_{\tilde{p}}^{x}(S_{x})<1-\epsilon\); then
\[\mathbb{P}_{\tilde{p}}(x\stackrel{{ W}}{{\longleftrightarrow}}\infty)\leq\sum_{y\in S_{x}:\partial_{V}y\cap S_{x}^{c}\neq\emptyset}\mathbb{P}_{\tilde{p}}(x\stackrel{{ S_{x}^{\circ}}}{{\longleftrightarrow}}\partial_{V}y)\mathbb{P}_{\tilde{p}}(y\stackrel{{ W}}{{\longleftrightarrow}}\infty)\]
When \(y\in W\), there exists a finite \(S_{y}\subset V\) with \(y\in S_{y}^{\circ}\) and \(\varphi_{\tilde{p}}^{y}(S_{y})<1-\epsilon\); then we have
\[\mathbb{P}_{\tilde{p}}(y\stackrel{{ W}}{{\longleftrightarrow}}\infty)\leq\sum_{y_{1}\in S_{y}:\partial_{V}y_{1}\cap S_{y}^{c}\neq\emptyset}\mathbb{P}_{\tilde{p}}(y\stackrel{{ S_{y}^{\circ}}}{{\longleftrightarrow}}\partial_{V}y_{1})\mathbb{P}_{\tilde{p}}(y_{1}\stackrel{{ W}}{{\longleftrightarrow}}\infty)\]
Since each \(S_{x}\), \(S_{y}\) is finite and the graph is locally finite, the process can be continued infinitely many times, and we obtain
\[\mathbb{P}_{\tilde{p}}(x\stackrel{{ W}}{{\longleftrightarrow}}\infty)\leq\lim_{n\to\infty}(1-\epsilon)^{n}=0.\]
Then a.s. there is no infinite 1-cluster in the graph \(G\setminus B(v_{0},N)\). But removing a finite set from \(G\) does not affect the a.s. existence of an infinite 1-cluster since the graph is locally finite. This contradiction implies that \(|V_{\tilde{p},\epsilon}|=\infty\).
The proof of Lemma 3.4 also implies the following corollary:
**Corollary 3.5**.: _Let \(G=(V,E)\) be an infinite, connected, locally finite graph. For each \(p>p_{c}^{site}(G)\) and \(\epsilon>0\), let \(V_{p,\epsilon}\) be defined as in (3.7) with \(\tilde{p}\) replaced by \(p\). Let \(G\setminus V_{p,\epsilon}\) be the graph obtained from \(G\) by removing all the vertices and incident edges of \(V_{p,\epsilon}\). Then_
\[p_{c}^{site}(G\setminus V_{p,\epsilon})\geq p>p_{c}^{site}(G)\]
The following critical points based on the connectivity properties were defined in Section 3 of [27].
**Definition 3.6**.: _Let \(G=(V,E)\) be an infinite, connected, locally finite graph. Define_
\[p_{exp}(G):=\sup\left\{p\in(0,1):\text{for some }C,\gamma\in(0,\infty), \mathbb{P}_{p}(x\leftrightarrow y)\leq Ce^{-\gamma d_{G}(x,y)},\ \forall x,y\in V\right\}.\]
\[p_{conn}(G):=\sup\left\{p\in(0,1):\lim_{n\to\infty}\sup_{x,y\in V:d_{G}(x,y)\geq n }\mathbb{P}_{p}(x\leftrightarrow y)=0\right\}.\]
It is straightforward to check that \(p_{exp}(G)\leq p_{conn}(G)\).
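As a simple illustration, on an infinite tree two vertices \(x,y\) at graph distance \(d\) are joined by a unique path, which contains \(d+1\) vertices; hence in Bernoulli(\(p\)) site percolation
\[\mathbb{P}_{p}(x\leftrightarrow y)=p^{\,d+1}\leq e^{-d\log(1/p)},\]
so \(p_{exp}=p_{conn}=1\) for every infinite tree.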
For each \(v\in V\), let \(C(v)\) be the 1-cluster including \(v\). If \(v\) is closed, then \(C(v)=\emptyset\). For each \(U\subseteq V\), let
\[C(U)=\cup_{v\in U}C(v)\]
Let \(\partial U\) consist of all the vertices not in \(U\) that have at least one neighbor in \(U\).
**Lemma 3.7**.: _Let \(G=(V,E)\) be an infinite, connected, locally finite graph._
* _Let_ \(\mathcal{A}_{\infty}\) _be the event that there are infinitely many infinite 1-clusters. Then either_ \(\mathbb{P}_{p}(\mathcal{A}_{\infty})=0\) _or_ \(\mathbb{P}_{p}(\mathcal{A}_{\infty})=1\)_._
Proof.: Note that \(\mathcal{A}_{\infty}\) occurs if and only if for any finite subset \(\Lambda\subset V\), there exist infinitely many infinite 1-clusters \(C(v)\) satisfying
\[d_{G}(C(v),C(\Lambda\cup\partial\Lambda))\geq 2,\]
which depends only on the states of vertices in \(\Lambda^{c}\). Hence \(\mathcal{A}_{\infty}\) is measurable with respect to the tail \(\sigma\)-field; then \(\mathbb{P}_{p}(\mathcal{A}_{\infty})=0\) or \(\mathbb{P}_{p}(\mathcal{A}_{\infty})=1\) by the Kolmogorov 0-1 law.
**Lemma 3.8**.: _Let \(G=(V,E)\) be an infinite, connected, locally finite graph. Let \(\mathcal{A}_{1}\) be the event that there exists a unique infinite 1-cluster. Then for any \(u,v\in V\)_
\[\mathbb{P}_{p}(u\leftrightarrow v)\geq\mathbb{P}_{p}(u\leftrightarrow\infty) \mathbb{P}_{p}(v\leftrightarrow\infty)\mathbb{P}_{p}(\mathcal{A}_{1})\]
Proof.: \[\mathbb{P}_{p}(u\leftrightarrow v)\geq\mathbb{P}_{p}(u\leftrightarrow\infty,v \leftrightarrow\infty,\mathcal{A}_{1})=\mathbb{P}_{p}(v\leftrightarrow\infty, \mathcal{A}_{1})-\mathbb{P}_{p}(v\leftrightarrow\infty,\mathcal{A}_{1},u \nleftrightarrow\infty)\]
Note that
\[\mathbb{P}_{p}(v\leftrightarrow\infty,\mathcal{A}_{1},u\nleftrightarrow\infty) =\sum_{S:[u\in S,|S|<\infty]\text{ or }S=\emptyset}\mathbb{P}_{p}(v\leftrightarrow\infty,\mathcal{A}_{1}|C(u)=S)\mathbb{P}_{p}(C(u)=S)\] \[\leq\mathbb{P}_{p}(v\leftrightarrow\infty,\mathcal{A}_{1})\sum_{S:[u\in S,|S|<\infty]\text{ or }S=\emptyset}\mathbb{P}_{p}(C(u)=S)\] \[=\mathbb{P}_{p}(v\leftrightarrow\infty,\mathcal{A}_{1})\mathbb{P}_{p}(u\nleftrightarrow\infty)\]
Hence we have
\[\mathbb{P}_{p}(u\leftrightarrow\infty,v\leftrightarrow\infty, \mathcal{A}_{1})\geq\mathbb{P}_{p}(v\leftrightarrow\infty,\mathcal{A}_{1}) \mathbb{P}_{p}(u\leftrightarrow\infty)\]
Similarly we have
\[\mathbb{P}_{p}(v\leftrightarrow\infty,\mathcal{A}_{1})\geq\mathbb{P }_{p}(v\leftrightarrow\infty)\mathbb{P}_{p}(\mathcal{A}_{1})\]
Then the lemma follows.
**Lemma 3.9**.: _Let \(G=(V,E)\) be an infinite, connected, locally finite graph. When \(p_{c}^{site}(G)<p<p_{conn}(G)\), \(\mathbb{P}_{p}\)-a.s. there are infinitely many infinite 1-clusters in the i.i.d. Bernoulli(\(p\)) site percolation on \(G\)._
Proof.: Let \(\mathcal{A}_{f}\) be the event that the number of infinite 1-clusters is finite and nonzero. By Lemma 3.7, \(\mathbb{P}_{p}(\mathcal{A}_{f})\in\{0,1\}\). Let \(p>p_{c}^{site}(G)\). It suffices to show that if \(\mathbb{P}_{p}(\mathcal{A}_{f})=1\), then \(p\geq p_{conn}\).
If \(\mathbb{P}_{p}(\mathcal{A}_{f})=1\), then \(\mathbb{P}_{p}(\mathcal{A}_{1})>0\). Let \(u,v\in V_{p,\epsilon}\) for some \(\epsilon>0\). We have
\[\mathbb{P}_{p}(u\leftrightarrow v)\geq\left[1-\left(\frac{1-p}{1-p_{c}} \right)^{1-\epsilon}\right]^{2}\mathbb{P}_{p}(\mathcal{A}_{1})>0.\]
By Lemma 3.4, there are infinitely many vertices in \(V_{p,\epsilon}\), hence we can make \(d_{G}(u,v)\to\infty\) given that the graph \(G\) is locally finite. Then \(p\geq p_{conn}\), and the lemma follows.
## 4. Embedded Forests
In this section we show that when \(p<1-p_{c}^{site}(T)\), the connectivity probability in the matching graph \(G_{*}\) (hence in \(G\) as well) decays exponentially, by constructing an embedded forest of \(G\), either for graphs with minimal vertex degree at least 7, or for graphs with minimal vertex degree at least 5 and minimal face degree at least 4. The idea is to explicitly construct an embedded forest in the graph such that, as the distance between two vertices goes to infinity, the number of trees in the forest separating them goes to infinity.
**Definition 4.1**.: _Let \(G=(V,E)\) be an infinite, connected graph properly embedded in \(\mathbb{R}^{2}\) with minimal vertex degree at least 7. For each \(v\in V\), a chandelier (resp. anti-chandelier) \(R_{v}\) at \(v\) is a tree rooted at \(v\) such that_
* \(v,v_{1},v_{2}\in V\) _are vertices in_ \(R_{v}\)_, such that_ \(v_{1}\) _is the only adjacent vertex of_ \(v\) _in_ \(R_{v}\)_;_ \(v\) _and_ \(v_{2}\) _are the only two adjacent vertices of_ \(v_{1}\) _in_ \(R_{v}\)_; and_
* _moving along the path_ \(v,v_{1},v_{2}\) _in order,_ \(v_{1}\) _has exactly 3 incident faces on the left (resp. right);_
* \(T_{v_{2}}\) _is a tree rooted at_ \(v_{2}\) _isomorphic to the one constructed in Lemma_ 2.10_(1);_
* \(L_{1}\) _and_ \(L_{2}\) _are two boundaries of_ \(T_{v_{2}}\) _satisfying_ \(L_{1}\cap L_{2}=\{v_{2}\}\)_. Let_ \(Q\) _be the region of_ \(\mathbb{R}^{2}\) _bounded by_ \(L_{1}\) _and_ \(L_{2}\) _in which_ \(T_{v_{2}}\) _is embedded. Then moving along_ \(v_{1},v_{2},L_{1}\) _in order, at_ \(v_{2}\) _there are at least 3 incident faces on the left (resp. right) outside_ \(Q\)_, and moving along_ \(v_{1},v_{2},L_{2}\) _in order, at_ \(v_{2}\) _there are at least 3 incident faces on the right (resp. left) outside_ \(Q\)_._
See Figure 4.1 for chandeliers and anti-chandeliers.
For each \(v\in V\), define
\[DF_{v}:=\max\{|f|:\text{$f$ is a finite face of $G$ incident to $v$}\}.\]
Recall that when \(f\) is a face, \(|f|\) is the degree of the face, or equivalently, the number of edges along the boundary of the face. If all the incident faces of \(v\) are infinite, take \(DF_{v}=-\infty\).
Let \(u,w\in V\) be two distinct vertices. Let \(l_{uw}\) be a shortest path in \(G\) joining \(u\) and \(w\); more precisely, let
\[l_{uw}:=u(=z_{0}),z_{1},z_{2},\ldots,z_{n}(=w)\]
where
* for each \(1\leq i\leq n\), \(z_{i-1}\) and \(z_{i}\) are adjacent vertices in \(G\); and
* \(z_{i}\neq z_{j}\) for any \(0\leq i<j\leq n\).
Figure 4.1. In the left graph, the black lines represent a chandelier and the blue lines represent incident edges on the left of the chandelier. In the right graph, the black lines represent an anti-chandelier and the blue lines represent incident edges on the right of the anti-chandelier.
We call the direction of increasing indices of the \(z_{i}\)'s the positive direction of \(l_{uw}\). When we say on the left of \(l_{uw}\) or on the right of \(l_{uw}\), we always mean on the left or right of \(l_{uw}\) when moving along \(l_{uw}\) in the positive direction.
We shall construct a sequence of chandeliers
\[R_{1},R_{2},\ldots,\]
rooted at vertices of \(l_{uw}\) and a sequence of anti-chandeliers
\[U_{1},U_{2},\ldots,\]
rooted at vertices of \(l_{uw}\) inductively as follows.
Since each vertex has degree at least \(7\), moving along \(l_{uw}\) in the direction of increasing indices of the \(z_{i}\)'s (the positive direction of \(l_{uw}\)), each \(z_{i}\) has at least \(2\) incident faces on at least one side of \(l_{uw}\). Let \(\zeta_{1}\) be the vertex along \(l_{uw}\), other than \(z_{1}\) and \(z_{n}\), with minimal \(z\)-index that is either

1. incident to at least two faces on its left; or
2. incident to an infinite face on its left.
If \(\zeta_{1}\) is incident to at least two faces on the left of \(l_{uw}\), let \(\langle b_{1},\zeta_{1}\rangle\) be an edge along \(l_{uw}\) such that the \(z\)-index of \(b_{1}\) is less than the \(z\)-index of \(\zeta_{1}\). Let \(e_{1}=\langle\zeta_{1},a_{1}\rangle\) be the incident edge of \(\zeta_{1}\) on the left of \(l_{uw}\) along its positive direction such that \(a_{1}\notin l_{uw}\) and there are no other edges of \(G\) in one angle bounded by \(e_{1}\) and \(\langle b_{1},\zeta_{1}\rangle\). Let \(R_{1}\) be the chandelier at \(\zeta_{1}\) starting from the edge \(e_{1}\).
If \(\zeta_{1}\) is incident to an infinite face on the left of \(l_{uw}\), let \(R_{1}=\emptyset\).
Let \(s\geq 1\). Suppose we have found \(\zeta_{s}\) and \(R_{s}\). Let \(\zeta_{s+1}\) be the vertex which, when moving along \(l_{uw}\) in its positive direction, is either
* incident to at least two faces on the left of \(l_{uw}\) or
* incident to an infinite face
with minimal \(z\)-index after \(\zeta_{s}\).
If \(\zeta_{s+1}\) is incident to at least two faces on the left of \(l_{uw}\), let \(\langle b_{s+1},\zeta_{s+1}\rangle\) be an edge along \(l_{uw}\) such that the \(z\)-index of \(b_{s+1}\) is less than the \(z\)-index of \(\zeta_{s+1}\).
Let \(e_{s+1}=\langle\zeta_{s+1},a_{s+1}\rangle\) be the incident edge of \(\zeta_{s+1}\) on the left of \(l_{uw}\) along its positive direction such that \(a_{s+1}\notin l_{uw}\) and there are no other edges of \(G\) in one angle bounded by \(e_{s+1}\) and \(\langle b_{s+1},\zeta_{s+1}\rangle\). The following cases might occur
1. \(a_{s+1}=a_{s}\); in this case let \(R_{s+1}=R_{s}\);
2. \(a_{s+1}\neq a_{s}\). If moving along \(l_{uw}\) in the positive direction, \(e_{s}\) is on its left, then let \(R_{s}\) be the chandelier at \(\zeta_{s}\) starting from the edge \(e_{s}\).
Let \(R_{s+1}\) be the chandelier at \(\zeta_{s+1}\) starting from the edge \(e_{s+1}\).
If \(\zeta_{s+1}\) is incident to an infinite face on the left of \(l_{uw}\), let \(R_{s+1}=\emptyset\).
It is straightforward to check the following lemma:
**Lemma 4.2**.: \(a_{j}\neq a_{j+3}\)
Proof.: From the construction of \(l_{uw}\) we see that \(l_{\zeta_{j}\zeta_{j+3}}\) is the shortest path in \(G\) joining \(\zeta_{j}\) and \(\zeta_{j+3}\) which has length at least 3. If \(a_{j}=a_{j+3}\), then \(\zeta_{j},a_{j}(=a_{j+3}),\zeta_{j+3}\) form a path of length 2 joining \(\zeta_{j}\) and \(\zeta_{j+3}\). The contradiction implies the lemma.
We can similarly construct a sequence of anti-chandeliers on the right of \(l_{uw}\).
**Lemma 4.3**.: _Let \(G=(V,E)\) be an infinite, connected graph properly embedded in \(\mathbb{R}^{2}\) with minimal vertex degree at least 7. Let \(u,w\in V\) be two distinct vertices and let \(l_{uw}\) be a shortest path in \(G\) joining \(u\) and \(w\). Let \(R_{1},\dots,R_{k}\) and \(U_{1},\dots,U_{r}\) be constructed as above. Then_
1. _for any_ \(1\leq i<j\leq k\)_, the chandeliers_ \(R_{i}\) _and_ \(R_{j}\) _are disjoint._
2. _for any_ \(1\leq i<j\leq r\)_, the anti-chandeliers_ \(U_{i}\) _and_ \(U_{j}\) _are disjoint._
Proof.: We only prove part (1) here; part (2) can be proved similarly. It suffices to show that for any \(1\leq i<k\)
\[R_{i}\cap R_{i+1}=\emptyset. \tag{4.1}\]
(4.1) follows immediately if at least one of \(R_{i}\) and \(R_{i+1}\) is empty. It remains to prove (4.1) when
\[R_{i}\neq\emptyset;\text{ and }R_{i+1}\neq\emptyset. \tag{4.2}\]
Let \(x\) (resp. \(y\)) be the root of \(R_{i}\) (resp. \(R_{i+1}\)) along \(l_{uw}\). Let \(\langle x,x_{1}\rangle\) (resp. \(\langle y,y_{1}\rangle\)) be the edge incident to \(x\) (resp. \(y\)) along \(R_{i}\) (resp. \(R_{i+1}\)). From the construction of the sequence \(\{R_{i}\}\) we see that \(x_{1}\neq y_{1}\). Let
\[x(=z_{r}),z_{r+1},\ldots,y(=z_{t})\]
be all the vertices along \(l_{uw}\) between \(x\) and \(y\). Let \(z_{g}\) (\(g\geq r+1\)) be the vertex with maximal index among all the vertices corresponding to some \(\zeta_{j}\) such that \(a_{j}=x_{1}\). Consider the sequence
\[z_{g+1},\ldots,z_{t}\]
Figure 4.2. Embedded forest
Then from the construction of the sequence \(\{R_{i}\}\) we have
* \(1\leq r:=t-g\leq\left\lfloor\frac{DF_{x}}{2}\right\rfloor\);
* \(z_{g+1}\),..., \(z_{t}\) are not incident to edges on the left of \(l_{uw}\);
* In the angle formed by \(\langle z_{t-1},y\rangle\) and \(\langle y,y_{1}\rangle\) on the left of \(l_{uw}\), there are no other edges of \(G\).
Let \(L_{1}\) (resp. \(L_{2}\)) be the self-avoiding path forming part of the boundary of \(R_{i}\) (resp. \(R_{i+1}\)) from \(x_{1}\) (resp. \(y_{1}\)) to \(\infty\) without passing through \(x\) (resp. \(y\)). Assume \(R_{i}\cap R_{i+1}\neq\emptyset\), and let \(a\) be the first intersection point of \(L_{1}\) and \(L_{2}\) when moving along \(L_{1}\) starting from \(x_{1}\). Let \(l_{z_{g},z_{t}}\) be the portion of \(l_{uw}\) between \(z_{g}\) and \(z_{t}\); let \(L_{1,a}\) be the portion of \(L_{1}\) between \(x_{1}\) and \(a\); let \(L_{2,a}\) be the portion of \(L_{2}\) between \(y_{1}\) and \(a\).
Then \(\langle z_{g},x_{1}\rangle\), \(L_{1,a}\), \(L_{2,a}\), \(l_{z_{g},z_{t}}\) and \(\langle z_{t},y_{1}\rangle\) form a cycle \(C\). The following cases might occur
1. \(\langle z_{g},y_{1}\rangle\) is an edge. Let \(\tilde{C}\) be the cycle obtained from \(C\) by removing \(z_{g+1},\ldots,z_{t}\). Then we have \[\sum_{z\in\tilde{C}}\sum_{f\in R_{\tilde{C}}:f\sim z}\frac{|f|-2}{|f|}\pi-(|\tilde{C}|-2)\pi\geq-3\pi+\frac{\pi}{3}+\frac{2\pi}{3}+\frac{\pi}{3}+2\pi>0,\] which contradicts (2.4); hence the cycle \(C\) does not exist.
2. \(\langle z_{g},y_{1}\rangle\) is not an edge. Then \(z_{g},\ldots,z_{t},y_{1}\) are in a face of degree at least \(t-g+3\geq 4\), and we have \[\sum_{z\in C}\sum_{f\in R_{C}:f\sim z}\frac{|f|-2}{|f|}\pi-(|C|-2)\pi\geq-4\pi+(r+2)\frac{r+1}{r+3}\pi+\frac{2\pi}{3}+2\pi>0,\] which contradicts (2.4); hence the cycle \(C\) does not exist, and (4.1) follows.
Recall that \(G_{*}\) is the matching graph of \(G\) as defined in Definition 1.3.
**Lemma 4.4**.: _Let \(G=(V,E)\) be a graph satisfying the assumptions of Definition 4.1. Then for each \(p<1-p_{c}^{\text{site}}(T)\) (Recall that \(1-p_{c}^{\text{site}}(T)>\frac{1}{2}\)), there exists \(c_{p}>0\), such that for any \(u,v\in V\),_
\[\mathbb{P}_{p}(u\leftrightarrow v)\leq e^{-c_{p}d_{G_{*}}(u,v)}. \tag{4.3}\]
_where \(u\leftrightarrow v\) means that \(u\) and \(v\) are in the same 1-cluster, while \(u\xleftrightarrow{*}v\) means that \(u\) and \(v\) are in the same 1-*-cluster (1-cluster in the graph \(G_{*}\))._
_Moreover, if \(G\) has uniformly bounded face degree for finite faces_
\[\mathbb{P}_{p}(u\xleftrightarrow{*}v)\leq e^{-c_{p}d_{G_{*}}(u,v)};\qquad \mathbb{P}_{p}(\partial_{V}^{*}u\xleftrightarrow{*}\partial_{V}^{*}v)\leq e^{ -c_{p}d_{G_{*}}(u,v)} \tag{4.4}\]
Proof.: Let \(l_{uv}\) be a shortest path of \(G\) joining \(u\) and \(v\), with \(u=z_{0}\) and \(v=z_{n}\). Construct a sequence of chandeliers \(R_{1},R_{2},\ldots\) on the left of \(l_{uv}\) and a sequence of anti-chandeliers \(U_{1},U_{2},\ldots\) on the right of \(l_{uv}\).
Then one can find an alternating sequence
\[R_{i_{1}}(=R_{1}),U_{i_{1}},R_{i_{2}},U_{i_{2}},\ldots\]
of chandeliers and anti-chandeliers whose roots have increasing \(z\)-indices, such that
* The \(G_{*}\) distance between \(R_{i_{s}}\) and \(U_{i_{s}}\) is at most \(1\); and
* the \(G_{*}\) distance between \(U_{i_{s}}\) and \(R_{i_{s+1}}\) is at most \(3\);
* the elements of the sequence are pairwise disjoint.
See Figure 4.2.
Let \(Q_{j}:=R_{i_{j}}\cup U_{i_{j}}\). Then we can find at least \(k:=\left\lfloor\frac{d_{G_{*}}(u,v)}{3}\right\rfloor-1\) such \(Q_{j}\)'s along \(l_{uv}\). If \(p<1-p_{c}^{site}(T)\), we have \(1-p>p_{c}^{site}(T)\). Let \(x_{j}\) be the root of \(R_{i_{j}}\) and \(y_{j}\) be the root of \(U_{i_{j}}\). Then there is a probability \(\theta_{p}>0\), uniform in \(j\), that all of the following occur:
1. \(x_{j}\) is in an infinite \(0\)-cluster of \(R_{i_{j}}\); and
2. \(y_{j}\) is in an infinite \(0\)-cluster of \(U_{i_{j}}\); and
3. \(x_{j}\) and \(y_{j}\) are joined by a \(0\)-*-path.
Let \(E_{j}\) be the event that all of (1), (2), (3) occur. If \(u\) and \(v\) are joined by a \(1\)-path, then none of the \(E_{i}\), \(i\in[k]\), can occur. Hence we have
\[\mathbb{P}_{p}(u\leftrightarrow v)\leq(1-\theta_{p})^{\frac{d_{G_{*}}(u,v)}{4}} \tag{4.5}\]
Choose \(c_{p}:=-\frac{\log(1-\theta_{p})}{4}\), then (4.3) follows.
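For the record, our reading of how the exponent in (4.5) arises from \(k\): once \(d_{G_{*}}(u,v)\geq 24\), one has \(k=\left\lfloor\frac{d_{G_{*}}(u,v)}{3}\right\rfloor-1\geq\frac{d_{G_{*}}(u,v)}{4}\), so
\[\mathbb{P}_{p}(u\leftrightarrow v)\leq(1-\theta_{p})^{k}\leq(1-\theta_{p})^{\frac{d_{G_{*}}(u,v)}{4}}=e^{-c_{p}d_{G_{*}}(u,v)},\]
and smaller distances can be absorbed by shrinking \(c_{p}\).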
Now assume that \(G\) has uniformly bounded face degree for finite faces. Let \(F_{j}\) be the event that both (1)(2) and
1. \(x_{j}\) and \(y_{j}\) are joined by a \(0\)-path.
occur. Then the length of the shortest path joining \(x_{j}\) and \(y_{j}\) is uniformly bounded. Hence the probability of \(F_{j}\) is bounded below by some \(\phi_{p}>0\) for all \(j\). If \(u\) and \(v\) are joined by a \(1\)-*-path, then none of the \(F_{i}\), \(i\in[k]\), can occur. Hence we have
\[\mathbb{P}_{p}(u\stackrel{{*}}{{\leftrightarrow}}v)\leq(1-\phi_{ p})^{\frac{d_{G_{*}}(u,v)}{4}-1} \tag{4.6}\]
and
\[\mathbb{P}_{p}(\partial_{V}^{*}u\stackrel{{*}}{{\leftrightarrow}} \partial_{V}^{*}v)\leq(1-\phi_{p})^{\frac{d_{G_{*}}(u,v)}{4}-3}. \tag{4.7}\]
Then (4.4) follows.
**Remark 4.5**.: _The proof of Lemma 4.4 shows that when \(G\) is a graph satisfying the assumptions of Definition 4.1,_
\[\frac{1}{2}<1-p_{c}^{site}(T)\leq p_{conn}(G).\]
_When the graph has uniformly bounded face degree for finite faces_
\[\frac{1}{2}<1-p_{c}^{site}(T)\leq p_{exp}(G)\]
**Lemma 4.6**.: _Let \(G=(V,E)\) be a graph satisfying the assumptions of Definition 4.1. Then for any \(n\in\mathbb{N}\) and \(w\in V\), there exists \(M_{n}\in\mathbb{N}\) (depending on \(w\)), such that for any \(v\in V\) satisfying \(d_{G}(v,w)\geq M_{n}\), there exist three trees \(T_{a}\), \(T_{b}\), \(T_{c}\) rooted at \(w\), each isomorphic to the one constructed in the proof of Lemma 2.10, such that all the following conditions hold:_
* \(T_{a}\) _(resp._ \(T_{b}\)_) has boundaries_ \(l_{w,1}\) _and_ \(l_{w,2}\) _(resp._ \(l_{w,3}\) _and_ \(l_{w,4}\)_), each one of_ \(l_{w,1}\) _and_ \(l_{w,2}\) _(resp._ \(l_{w,3}\) _and_ \(l_{w,4}\)_) is a singly infinite path starting at_ \(w\)_;_
* \(l_{w,2}\) _and_ \(l_{w,3}\) _share an edge_ \((w,x)\)_;_
* _the (open) region_ \(R_{w}\subset\mathbb{R}^{2}\) _including the union_ \(T_{a}\cup T_{b}\) _of the two trees bounded by_ \(l_{w,1}\) _and_ \(l_{w,4}\) _satisfies_ \[B(v,n)\subset R_{w}.\] _where_ \(B(v,n)\) _consists of all the vertices in_ \(V\) _whose graph distance to_ \(v\) _in_ \(G\) _is at most_ \(n\)_; i.e._ \[B(v,n):=\{u\in V:d_{G}(v,u)\leq n\}.\]
* \(T_{c}\cap R_{w}=\emptyset\)_._
Proof.: We first construct \(T_{a}\), \(T_{b}\), \(T_{c}\). Let \(T_{1},T_{2},\ldots,T_{\deg(w)}\) be the \(\deg(w)\) trees rooted at \(w\), each of which is isomorphic to \(T\), in cyclic order. For \(1\leq i\leq\ \deg(w)\), let \(S_{i}\subset\mathbb{R}^{2}\) be the region in \(\mathbb{R}^{2}\) bounded by the boundaries of \(T_{i}\) and containing \(T_{i}\). Note that \(\mathbb{R}^{2}=\cup_{1\leq i\leq\deg(w)}S_{i}\); and \(S_{i}\cap S_{i+2}=\{w\}\). Hence \(v\) is in at most two \(S_{i}\)'s. The following cases might occur
* \(v\in S_{i}\cap S_{i+1}\). In this case let \(T_{a}=T_{i}\), \(T_{b}=T_{i+1}\), \(T_{c}=T_{i+3}\).
* \(v\in S_{i}\); for all \(j\neq i\), \(v\notin S_{j}\). Let \(l_{1}\) and \(l_{2}\) be the two boundaries of \(T_{i}\), each of which is a singly infinite path starting at \(w\) and \(l_{1}\cap l_{2}=\{w\}\). Without loss of generality, assume \[d_{G}(v,l_{1})\leq d_{G}(v,l_{2}).\] The following cases might occur
* the edge \(\langle w,t\rangle\) along \(l_{1}\) is also along the boundary of \(T_{i-1}\). In this case let \(T_{a}=T_{i-1}\), \(T_{b}=T_{i}\), \(T_{c}=T_{i+2}\).
* the edge \(\langle w,t\rangle\) along \(l_{1}\) is also along the boundary of \(T_{i+1}\). In this case let \(T_{a}=T_{i}\), \(T_{b}=T_{i+1}\), \(T_{c}=T_{i+3}\).
Now we show that
\[\lim_{d_{G}(v,w)\to\infty}d_{G}(v,\partial R_{w})=\infty. \tag{4.8}\]
It is straightforward to see that the lemma follows from (4.8).
For each \(k\geq 1\), let \(\partial B(w,k)\) consist of all the vertices in \(G\) whose graph distance to \(w\) is exactly \(k\), i.e.
\[\partial B(w,k):=\{u\in V:d_{G}(w,u)=k\}.\]
Then it is straightforward to see that
\[\lim_{k_{2}-k_{1}\to\infty}d_{G}(\partial B(w,k_{2}),\partial B(w,k_{1}))=\infty.\]
Without loss of generality, assume that
\[R_{w}=S_{i}\cup S_{i+1};\]
the case when \(R_{w}=S_{i-1}\cup S_{i}\) can be proved similarly. Then
\[\lim_{k\to\infty}D_{1}(k):=\lim_{k\to\infty}d_{[S_{i}\cap G]\setminus B(w,k)}(l_{w,1},l_{w,2})=\infty; \tag{4.9}\]
\[\lim_{k\to\infty}D_{2}(k):=\lim_{k\to\infty}d_{[S_{i+1}\cap G]\setminus B(w,k)}(l_{w,3},l_{w,4})=\infty; \tag{4.10}\]
\[\lim_{k\to\infty}D_{3}(k):=\lim_{k\to\infty}d_{[(R_{w}\setminus S_{i})\cap G]\setminus B(w,k)}(l_{w,2},l_{w,4})=\infty; \tag{4.11}\]
\[\lim_{k\to\infty}D_{4}(k):=\lim_{k\to\infty}d_{[(R_{w}\setminus S_{i+1})\cap G]\setminus B(w,k)}(l_{w,1},l_{w,3})=\infty. \tag{4.12}\]
All four identities above can be proved similarly. For example, to see why (4.12) is true, assume (4.12) does not hold. Then there exists \(0<N<\infty\) such that
\[d_{[(R_{w}\setminus S_{i+1})\cap G]\setminus B(w,k)}(l_{w,1},l_{w,3})\leq N\]
for infinitely many \(k\)'s. Then \(p_{c}^{site}([R_{w}\setminus S_{i+1}]\cap G)=1\). But this contradicts the fact that the graph \([R_{w}\setminus S_{i+1}]\cap G\) contains a tree of exponential volume growth.
Then
\[d_{G}(v,\partial R_{w})\geq\frac{1}{2}\max_{k:1\leq k\leq d_{G}(v,w)}\max\left\{d_{G}(v,\partial B(w,k)\cap R_{w}),\min\{D_{1}(k),D_{2}(k),D_{3}(k),D_{4}(k)\}\right\}\to\infty,\]
as \(d_{G}(v,w)\to\infty\).
**Theorem 4.7**.: _Let \(G=(V,E)\) be a graph satisfying the assumptions of Definition 4.1. The following statements hold._
1. _For each_ \(p\in\left(p_{c}^{site}(G),1-p_{c}^{site}(T)\right)\)_, there are infinitely many infinite 1-clusters;_
2. _If_ \(G\) _has finitely many ends and uniformly bounded face degree for finite faces, then for each_ \(p\in\left[1-p_{c}^{site}(T),1-p_{c}^{site}(G_{*})\right)\)_, infinite 1-clusters have infinitely many ends._
3. _If_ \(G\) _has infinitely many ends, then for each_ \(p\in\left[1-p_{c}^{site}(T),1\right]\)_, infinite 1-clusters have infinitely many ends._
Proof.: When \(p\in\left(p_{c}^{site}(G),1-p_{c}^{site}(T)\right)\), the existence of infinitely many infinite 1-clusters follows from Lemmas 3.9 and 4.4.
Now consider the case when \(p\in\left[1-p_{c}^{site}(T),1-p_{c}^{site}(G_{*})\right)\). In this case, when \(G\) satisfies the assumptions of Definition 4.1 and has uniformly bounded face degree for finite faces, we have
\[1-p\in\left(p_{c}^{site}(G_{*}),p_{c}^{site}(T)\right]\subseteq\left(p_{c}^{site}(G_{*}),\frac{1}{2}\right]\subseteq\left(p_{c}^{site}(G_{*}),p_{exp}(G_{*})\right)\]
where the last subset relation is obtained by Lemma 4.4. Then by Lemma 3.9, a.s. there are infinitely many infinite 0-*-clusters.
Assume the graph \(G\) has finitely many ends. In this case, since when \(p\in\left[1-p_{c}^{site}(T),1-p_{c}^{site}(G_{*})\right)\) infinite 0-*-clusters have infinitely many ends, we infer that infinite 1-clusters have infinitely many ends, in order to separate the infinitely many ends of the infinite 0-*-clusters. This completes the proof of Part (2).
Now we consider the case when \(G\) has infinitely many ends. Fix a vertex \(v_{0}\in V\). Recall that \(B(v_{0},n)\) is the ball consisting of all the vertices within graph distance \(n\) of \(v_{0}\) in \(G\). Let \(G\setminus B(v_{0},n)\) be the subgraph of \(G\) obtained from \(G\) by removing all the vertices in \(B(v_{0},n)\) and all edges incident to at least one vertex in \(B(v_{0},n)\). Then the number of infinite components of \(G\setminus B(v_{0},n)\) goes to infinity as \(n\to\infty\). We make the following claim:
**Claim 4.8**.: _Let \(H\) be an arbitrary infinite component of \(G\setminus B(v_{0},n)\). Then we can find a tree \(T\) embedded in \(H\) isomorphic to the tree constructed in the proof of Lemma 2.10._
To see why Claim 4.8 is true, let \(u\) be a vertex in \(H\) adjacent to a vertex in \(B(v_{0},n)\). Then \(d_{G}(u,v_{0})=n+1\). Since \(H\) is infinite and connected, we can find a directed singly infinite path in \(H\) starting at \(u\), denoted by
\[z_{0}(:=u),z_{1},z_{2},\ldots,\]
Since the graph \(G\) is locally finite, we have
\[\lim_{m\to\infty}d_{G}(v_{0},z_{m})=\infty.\]
Then we can find \(k\), such that
\[d_{G}(z_{k},v_{0})\geq M_{n}\]
where \(M_{n}\) is given by Lemma 4.6. By Lemma 4.6, there exists a tree \(T\) isomorphic to the one constructed in the proof of Lemma 2.10, such that \(T\cap B(v_{0},n)=\emptyset\). Then \(T\subseteq H\) since \(T\) is connected.
Then a.s. there is an infinite 1-cluster in \(H\) for all \(p\in[1-p_{c}^{site}(T),1]\) because \(1-p_{c}^{site}(T)>\frac{1}{2}>p_{c}^{site}(T)\). Then Part (3) of the theorem follows.
**Lemma 4.9**.: _Let \(G=(V,E)\) be a graph satisfying the assumptions of Definition 4.1. Assume \(G_{*}\) has uniformly bounded vertex degree and \(p\in\left[1-p_{c}^{site}(T),1-p_{c}^{site}(G_{*})\right)\). Then_
* _If infinitely many infinite 0-*-clusters have at least two ends, then a.s. there are infinitely many infinite 1-clusters._
* _If at least one infinite 0-*-cluster has infinitely many ends, then a.s. there are infinitely many infinite 1-clusters._
Proof.: The lemma follows from planar duality.
## 5. From infinitely many ends to infinitely many infinite clusters
From Theorem 4.7, we know that when \(\frac{1}{2}<p<1-p_{c}^{site}(G_{*})\), \(\mathbb{P}_{p}\)-a.s. infinite 1-clusters have infinitely many ends. In this section, we show that when \(\frac{1}{2}<p<1-p_{c}^{site}(G_{*})\), \(\mathbb{P}_{p}\)-a.s. there are infinitely many infinite 1-clusters.
For \(v\in V\), let \(\mathcal{E}_{v}\) be the event that there is a 1-ended infinite 0-*-cluster at \(v\).
**Definition 5.1**.: _Let \(G=(V,E)\) be an infinite, connected, planar, simple graph properly embedded into \(\mathbb{R}^{2}\) with the minimal vertex degree on \(G\) is at least 7. Order all the vertices of \(G\) starting from a fixed minimal vertex. Let \(\omega\in\{0,1\}^{V}\) be a site percolation instance on \(G\)._
1. _Let_ \(\xi\) _be a 0-*-cluster in_ \(\omega\) _which is not adjacent to an infinite face._
	1. _Assume_ \(\xi\) _is finite. Define_ \(\partial^{out}\xi\) _to be the closed path in_ \(G\) _given by_ \[l:=w_{-s},\ldots,w_{-1},w_{0},w_{1},\ldots,w_{t};\] _which satisfies all the following conditions:_ * \(\omega(w_{i})=1\) _for all_ \(i\in[-s..t]\)_; and_ * \(w_{-s}=w_{t}\)_; and_ * \(w_{i}\) _and_ \(w_{i+1}\) _are adjacent vertices in_ \(G\) _for all_ \(i\in[-s..t-1]\)_; and_ * \(w_{i}\) _is *-adjacent to some_ \(u_{i}\in\xi\) _for all_ \(i\in[-s..t]\)_; and_ * _the plane_ \(\mathbb{R}^{2}\) _is divided by_ \(l\) _into a bounded open component_ \(R_{1}\) _and an unbounded open component_ \(R_{2}\)_, such that_ \(\xi\subset R_{1}\)_; and_ * \(w_{0}\) _is the smallest vertex of_ \(l\) _(according to the fixed ordering of vertices in_ \(V\)_); and_ * _moving along_ \(l\) _in the direction of increasing indices of the_ \(w_{i}\)_'s (the positive direction of_ \(l\)_),_ \(R_{1}\) _is on the left._
	2. _Assume_ \(\xi\) _is infinite. Define_ \(\partial^{out}\xi\) _to be the union of disjoint doubly infinite paths in_ \(G\)_, in which each path_ \[l:=\ldots,w_{-n},\ldots,w_{-1},w_{0},w_{1},\ldots,w_{n},\ldots;\] _satisfies all the following conditions:_ * \(\omega(w_{i})=1\) _for all_ \(i\in\mathbb{Z}\)_; and_ * \(w_{i}\) _and_ \(w_{i+1}\) _are adjacent vertices in_ \(G\) _for all_ \(i\in\mathbb{Z}\)_; and_ * \(w_{i}\) _is *-adjacent to some_ \(u_{i}\in\xi\) _for all_ \(i\in\mathbb{Z}\)_; and_ * _the plane_ \(\mathbb{R}^{2}\) _is divided by_ \(l\) _into two unbounded open components_ \(R_{1}\) _and_ \(R_{2}\)_; and_ * \(w_{0}\) _is the smallest vertex along_ \(l\) _(according to the fixed ordering of vertices in_ \(V\)_); and_ * _moving along_ \(l\) _in the direction of increasing indices of the_ \(w_{i}\)_'s (the positive direction of_ \(l\)_),_ \(R_{1}\) _is on the left._
_Moreover, if_ \(\mathcal{E}_{v_{0}}\) _occurs, i.e.,_ \(\xi\) _is 1-ended, then_ \(\partial^{out}\xi\) _consists of exactly one path; denoted by_ \(l_{\xi}\)_._
2. _Let_ \(\xi\) _be a 0-*-cluster that is adjacent to an infinite face_ \(F\)_. To each edge_ \((u,v)\) _on the infinite face, add an auxiliary vertex_ \(w\) _and join_ \(uw\) _and_ \(vw\) _such that_ \(u,v,w\) _form a triangle. Let_ \((v,u_{1})\) _and_ \((v,u_{2})\) _be two edges incident to the infinite face_ \(F\)_, and let_ \(w_{1}\) _and_ \(w_{2}\) _be the two auxiliary vertices corresponding to these two edges; join_ \(w_{1}\) _and_ \(w_{2}\) _by an edge such that_ \(w_{1},w_{2},v\) _form a triangular face. Make all the auxiliary vertices open. Then define_ \(\partial^{out}\xi\) _as in Part (1) of the definition._
**Lemma 5.2**.: _Let \(G=(V,E)\) be an infinite, connected graph, properly embedded into \(\mathbb{R}^{2}\) with minimal vertex degree at least 7 and uniformly bounded face degree for finite faces. Then for each \(p\in\left(\frac{1}{2},1-p_{c}^{site}(G_{*})\right)\), \(\mathbb{P}_{p}\)-a.s. at least one infinite 0-*-cluster has infinitely many ends._
Proof.: If \(p\in\left(\frac{1}{2},1-p_{c}^{site}(G_{*})\right)\), we have \(1-p\in\left(p_{c}^{site}(G_{*}),\frac{1}{2}\right)\); hence by Lemma 4.4, \(1-p\in(p_{c}^{site}(G_{*}),p_{conn}(G_{*}))\); then by Lemma 3.9 a.s. there are infinitely many infinite 0-*-clusters. Assume that with strictly positive probability every infinite 0-*-cluster has finitely many ends; we shall obtain a contradiction.
Since with strictly positive probability every infinite 0-*-cluster has finitely many ends, there exist \(v_{0}\in V\) and \(c_{0}>0\) such that
\[\mathbb{P}_{p}(\mathcal{E}_{v_{0}})=c_{0}>0. \tag{5.1}\]
Let \(\xi=C_{0*}(v_{0})\); i.e., \(\xi\) is the 0-*-cluster passing through \(v_{0}\).
Assume \(\mathcal{E}_{v_{0}}\) occurs. For each \(n\geq 1\), define
\[\Phi(l_{\xi},n):=\{(u,v)\in V\times V:u,v\in l_{\xi},u=w_{i},i<-n,v=w_{j},j>n\}; \tag{5.2}\]
and
\[D(l_{\xi},n):=\min_{(u,v)\in\Phi(l_{\xi},n)}d_{G}(u,v)\]
Now we do not assume that \(\mathcal{E}_{v_{0}}\) occurs. Let \(l\in\partial^{out}\xi\). For any \(i,j\in\mathbb{Z}\), \(i<j\), let \(l_{[i,j]}\) be the finite portion of \(l\) starting from \(w_{i}\) and ending at \(w_{j}\).
Let
\[\Delta_{1}(l,n): =\{m:m<-n,d_{G}(w_{m},v_{0})\leq d_{G}(w_{0},v_{0})\};\] \[\Delta_{2}(l,n): =\{k:k>n,d_{G}(w_{k},v_{0})\leq d_{G}(w_{0},v_{0})\}.\]
Define
\[S_{1}(l,n): =\begin{cases}\min\Delta_{1}(l,n)&\text{if }\Delta_{1}(l,n)\neq \emptyset\\ -n&\text{if }\Delta_{1}(l,n)=\emptyset\end{cases}\] \[S_{2}(l,n): =\begin{cases}\max\Delta_{2}(l,n)&\text{if }\Delta_{2}(l,n)\neq \emptyset\\ n&\text{if }\Delta_{2}(l,n)=\emptyset\end{cases}\]
Let
\[\Gamma_{1}(l,n):= \{m<S_{1}(l,n):\exists\ i\in[S_{1}(l,n),S_{2}(l,n)],i-m>2;\text{s.t. }d_{G_{*}}(w_{m},w_{i})=2;\text{ and}\] \[\exists\ u\in C_{0,*}(v_{0}),\ \text{s.t.}\ d_{G_{*}}(w_{m},u)=1=d_{G_{*}}(u,w_{i})\}.\] \[\Gamma_{2}(l,n):= \{k>S_{2}(l,n):\exists\ i\in[S_{1}(l,n),S_{2}(l,n)],k-i>2;d_{G_{*} }(w_{k},w_{i})=2;\text{and}\] \[\exists\ u\in C_{0,*}(v_{0}),\ \text{s.t.}\ d_{G_{*}}(w_{k},u)=1=d_{G_{*} }(u,w_{i})\}.\]
Define
\[Q_{1}(l,n): =\begin{cases}\min\Gamma_{1}(l,n)&\text{if }\Gamma_{1}(l,n)\neq \emptyset\\ -n&\text{if }\Gamma_{1}(l,n)=\emptyset\end{cases}\] \[Q_{2}(l,n): =\begin{cases}\max\Gamma_{2}(l,n)&\text{if }\Gamma_{2}(l,n)\neq \emptyset\\ n&\text{if }\Gamma_{2}(l,n)=\emptyset\end{cases}\]
Let \(\mathcal{W}_{m,v_{0}}\) be the collection of all length-\(m\) self-avoiding walks (SAWs) on \(G\) that do not pass through \(v_{0}\).
Then
\[\mathbb{P}_{p}\left(\cap_{(u,v)\in\Phi(l_{\xi},n)}[d_{G}(u,v)>m]\cap\mathcal{E}_{v_{0}}\right)\] \[=\sum_{[(n_{1},n_{2}):n_{1}\leq-n<n\leq n_{2}]}\sum_{[L\in\mathcal{W}_{n_{2}-n_{1},v_{0}}]}\mathbb{P}_{p}\left(L=l_{[n_{1},n_{2}]}\text{ for some }l\in\partial^{out}\xi;\ Q_{1}(l,n)=n_{1},Q_{2}(l,n)=n_{2}\right)\] \[\qquad\cdot\mathbb{P}_{p}\left(\cap_{(u,v)\in\Phi(l_{\xi},n)}[d_{G}(u,v)>m]\cap\mathcal{E}_{v_{0}}|L=l_{[n_{1},n_{2}]}\text{ for some }l\in\partial^{out}\xi;\ Q_{1}(l,n)=n_{1},Q_{2}(l,n)=n_{2}\right)\]
Note that for each fixed \((n_{1},n_{2})\) satisfying \(n_{1}\leq-n<n\leq n_{2}\) and \(L\in\mathcal{W}_{n_{2}-n_{1},v_{0}}\), we have
\[\mathbb{P}_{p}\left(\cap_{(u,v)\in\Phi(l_{\xi},n)}[d_{G}(u,v)>m]\cap\mathcal{E}_{v_{0}}|L=l_{[n_{1},n_{2}]}\text{ for some }l\in\partial^{out}\xi;\ Q_{1}(l,n)=n_{1},Q_{2}(l,n)=n_{2}\right)\] \[\leq\mathbb{P}_{p}(\partial_{V}^{*}w_{n_{1}-1}\overset{0*}{\longleftrightarrow}\partial_{V}^{*}w_{n_{2}+1})\leq\alpha^{m} \tag{5.3}\]
where the last inequality follows from Lemma 4.4, and \(\alpha\in(0,1)\) is an absolute constant.
To see why the first inequality of (5.3) is true, note that conditional on \(L=l_{[n_{1},n_{2}]}\) for some \(l\in\partial^{out}\xi\) and \(Q_{1}(l,n)=n_{1},Q_{2}(l,n)=n_{2}\), if \(\mathcal{E}_{v_{0}}\) occurs, i.e., there is an infinite 1-ended 0-*-cluster passing through \(v_{0}\), then \(\partial_{V}^{*}w_{n_{1}-1}\) and \(\partial_{V}^{*}w_{n_{2}+1}\) must be joined by a 0-*-path without using any vertex on \(l\) or any vertex *-incident to \(l_{[n_{1},n_{2}]}\) on the left of \(l\) (when traveling along \(l\) in the positive direction). Then (5.3) follows from the BK inequality.
Moreover if \(\partial_{V}^{*}w_{n_{1}-1}\overset{0*}{\longleftrightarrow}\partial_{V}^{*} w_{n_{2}+1}\) occurs, let \(q_{n_{1},n_{2}}\) be the shortest 0-*-path joining \(\partial_{V}^{*}w_{n_{1}-1}\), \(\partial_{V}^{*}w_{n_{2}+1}\) without using any vertex on \(l\) and vertices *-incident to \(l_{[n_{1},n_{2}]}\) on the left of \(l\) when traveling along \(l\) in the positive direction (if more than one such paths exist, let \(q_{n_{1},n_{2}}\) be the lexicographically smallest one according to a fixed ordering of vertices); let \(|q_{n_{1},n_{2}}|\) be the length of \(q_{n_{1},n_{2}}\). Let \(W_{n_{1},n_{2}}\) be the collection of vertices each of which is along a *-path joining \(\partial_{V}^{*}w_{n_{1}-1}\), \(\partial_{V}^{*}w_{n_{2}+1}\) with length at most \(|q_{n_{1},n_{2}}|\); and let \(r_{n_{1},n_{2}}\) be
the maximal distance in \(G_{*}\) between a vertex in \(W_{n_{1},n_{2}}\) and \(v_{0}\). Let
\[\Delta_{3}(l,n): =\{m<n_{1}:d_{G_{*}}(w_{m},v_{0})\leq r_{n_{1},n_{2}}+2\}\] \[\Delta_{4}(l,n): =\{k>n_{2}:d_{G_{*}}(w_{k},v_{0})\leq r_{n_{1},n_{2}}+2\}\]
Define
\[S_{3}(l,n): =\begin{cases}\min\Delta_{3}(l,n)&\text{if }\Delta_{3}(l,n)\neq \emptyset\\ n_{1}-1&\text{if }\Delta_{3}(l,n)=\emptyset\end{cases}\] \[S_{4}(l,n): =\begin{cases}\max\Delta_{4}(l,n)&\text{if }\Delta_{4}(l,n)\neq \emptyset\\ n_{2}+1&\text{if }\Delta_{4}(l,n)=\emptyset\end{cases}\]
Let
\[\Gamma_{3}(l,n):= \{m<S_{3}(l,n):\exists\ i\in[S_{3}(l,n),S_{4}(l,n)],i-m>2;d_{G_{*}}(w_{m},w_{i})=2;\text{and}\] \[\exists\ u\in C_{0,*}(v_{0}),\ \text{s.t. }d_{G_{*}}(w_{m},u)=1=d_{G_{*}}(u,w_{i})\}.\] \[\Gamma_{4}(l,n):= \{k>S_{4}(l,n):\exists\ i\in[S_{3}(l,n),S_{4}(l,n)],k-i>2;d_{G_{*}}(w_{k},w_{i})=2;\text{and}\] \[\exists\ u\in C_{0,*}(v_{0}),\ \text{s.t. }d_{G_{*}}(w_{k},u)=1=d_{G_{*}}(u,w_{i})\}.\]
Define
\[Q_{3}(l,n): =\begin{cases}\min\Gamma_{3}(l,n)&\text{if }\Gamma_{3}(l,n)\neq \emptyset\\ n_{1}-1&\text{if }\Gamma_{3}(l,n)=\emptyset\end{cases}\] \[Q_{4}(l,n): =\begin{cases}\max\Gamma_{4}(l,n)&\text{if }\Gamma_{4}(l,n)\neq \emptyset\\ n_{2}+1&\text{if }\Gamma_{4}(l,n)=\emptyset\end{cases}\]
Then
\[\mathbb{P}_{p}\left(\cap_{(u,v)\in\Phi(l_{\xi},n)}[d_{G}(u,v)>m]\cap\mathcal{E}_{v_{0}}\right)\] \[=\sum_{[(n_{1},n_{2},n_{3},n_{4}):n_{3}<n_{1}\leq-n<n\leq n_{2}<n_{4}]}\sum_{[L\in\mathcal{W}_{n_{4}-n_{3},v_{0}}]}\] \[\mathbb{P}_{p}\left(L=l_{[n_{3},n_{4}]}\text{ for some }l\in\partial^{out}\xi;\ Q_{1}(l,n)=n_{1},Q_{2}(l,n)=n_{2},Q_{3}(l,n)=n_{3},Q_{4}(l,n)=n_{4}\right)\] \[\qquad\cdot\mathbb{P}_{p}\left(\cap_{(u,v)\in\Phi(l_{\xi},n)}[d_{G}(u,v)>m]\cap\mathcal{E}_{v_{0}}|L=l_{[n_{3},n_{4}]}\text{ for some }l\in\partial^{out}\xi;\ Q_{i}(l,n)=n_{i},\forall 1\leq i\leq 4\right)\]
Note that
\[\mathbb{P}_{p}\left(\cap_{(u,v)\in\Phi(l_{\xi},n)}[d_{G}(u,v)>m]\cap\mathcal{E}_{v_{0}}|L=l_{[n_{3},n_{4}]}\text{ for some }l\in\partial^{out}\xi;\ Q_{i}(l,n)=n_{i},\forall 1\leq i\leq 4\right)\] \[\leq\mathbb{P}_{p}([\partial_{V}^{*}w_{n_{1}-1}\overset{0*}{\longleftrightarrow}\partial_{V}^{*}w_{n_{2}+1}]\circ[\partial_{V}^{*}w_{n_{3}-1}\overset{0*}{\longleftrightarrow}\partial_{V}^{*}w_{n_{4}+1}])\leq[\alpha^{m}]^{2} \tag{5.4}\]
To see why (5.4) is true, note that conditional on \(L=l_{[n_{3},n_{4}]}\) for some \(l\in\partial^{out}\xi;\ Q_{i}(l,n)=n_{i}\) for all \(1\leq i\leq 4\), if \(\mathcal{E}_{v_{0}}\) occurs, i.e., there is an infinite 1-ended 0-*-cluster passing through \(v_{0}\), then
* \(\partial_{V}^{*}w_{n_{1}-1}\) and \(\partial_{V}^{*}w_{n_{2}+1}\) can be joined by a 0-*-path \(t_{1}\) without using any vertex on \(l\) or any vertex *-incident to \(l_{[n_{1},n_{2}]}\) on the left of \(l\) (when traveling along \(l\) in the positive direction); and
* \(\partial_{V}^{*}w_{n_{3}-1}\) and \(\partial_{V}^{*}w_{n_{4}+1}\) can be joined by a 0-*-path without using any vertex on \(l\) or any vertex *-incident to \(l_{[n_{1},n_{2}]}\) on the left of \(l\) (when traveling along \(l\) in the positive direction), disjoint from the 0-*-path \(t_{1}\) joining \(\partial_{V}^{*}w_{n_{1}-1}\) and \(\partial_{V}^{*}w_{n_{2}+1}\).
Then (5.4) follows from BK inequality.
We may continue this process and obtain
\[\mathbb{P}_{p}\left(\cap_{(u,v)\in\Phi(l_{\xi},n)}[d_{G}(u,v)>m]\cap\mathcal{E}_{v_{0}}\right)\leq(\alpha^{m})^{K}\]
for any \(K\geq 1\). Hence for any fixed \(M\),
\[\mathbb{P}_{p}\left(\cap_{(u,v)\in\Phi(l_{\xi},n)}[d_{G}(u,v)>M]\cap\mathcal{E}_{v_{0}}\right)=0\]
for all sufficiently large \(n\). We infer that a.s. for every 1-ended infinite 0-*-cluster, there exist infinitely many pairs \((u,v)\in\Phi(l_{\xi},n)\) such that \(d_{G}(u,v)<M\). Therefore a.s. for every 1-ended infinite 0-*-cluster \(\xi\), \(p_{c}^{site}(\xi)=1\).
Under the assumption that with strictly positive probability every infinite 0-*-cluster has finitely many ends, there is a strictly positive probability that every infinite 0-*-cluster is 1-ended. Then for every \(p^{\prime}>p\), there is a strictly positive probability that no infinite 0-*-cluster exists. This is impossible since \(1-p>p_{c}^{site}(G_{*})\). Then the lemma follows.
**Theorem 5.3**.: _Let \(G=(V,E)\) be an infinite, connected, planar graph properly embedded into \(\mathbb{R}^{2}\) with minimal vertex degree at least 7 and uniformly bounded face degree for finite faces. Then for each \(p\in\left(\frac{1}{2},1-p_{c}^{site}(G_{*})\right)\), a.s. there are infinitely many infinite 1-clusters._
Proof.: By Lemma 5.2, at least one infinite 0-*-cluster has infinitely many ends.
The following cases might occur
1. \(G\) has finitely many ends. Let \(K\) be a finite subgraph such that each infinite component of \(G\setminus K\) is one-ended; then there exist at least one infinite component \(\mathcal{C}\) of \(G\setminus K\) and an infinite 0-*-cluster \(\xi\) such that 1. \(\mathcal{C}\) is one-ended; and 2. one component of \(\xi\cap\mathcal{C}\) has infinitely many ends in \(\mathcal{C}\). It follows that a.s. there are infinitely many infinite 1-clusters by planar duality.
2. \(G\) has infinitely many ends. The following cases might occur 1. There exists a finite subgraph \(K\) such that there exists one infinite, 1-ended component \(\mathcal{C}\) of \(G\setminus K\), and an infinite 0-*-cluster \(\xi\) such that one component of \(\xi\cap\mathcal{C}\) has infinitely many ends in \(\mathcal{C}\); then a.s. there are infinitely many infinite 1-clusters by planar duality. 2. Assume (2a) does not occur; then there exists a sequence of finite subgraphs \(K_{1}\subset K_{2}\subset\cdots\subset K_{n}\subset\cdots\) such that * the number of infinite components in \(G\setminus K_{i}\) goes to \(\infty\) as \(i\to\infty\); and
* In every infinite component of \(G\setminus K_{i}\), there exists at least one infinite 1-cluster by Claim 4.8; and
* there exists one infinite 0-*-cluster that has infinitely many ends in every \(G\setminus K_{i}\).
then again a.s. there are infinitely many infinite 1-clusters by planar duality.
**Proof of Theorem 1.5**. Theorem 1.5 follows from Theorem 5.3, Theorem 4.7(1) and the fact that \(p_{c}^{site}(T)<\frac{1}{2}\).
## Appendix A Proof of Lemma 2.10 under condition (2).
Let \(G\) be a graph satisfying condition (2) of Lemma 2.10. Again we will find a tree as a subgraph of \(G\) recursively.
Let \(v\in V\). Let \(v_{0}\), \(v_{1}\) be two vertices adjacent to \(v\) in \(G\) such that \(v\), \(v_{0}\), \(v_{1}\) share a face. Starting from \(v,v_{0}\) construct a walk
\[\pi_{0}:=v,v_{0},v_{00},v_{000},\ldots,\]
Starting from \(v,v_{1}\) construct a walk
\[\pi_{1}:=v,v_{1},v_{11},v_{111},\ldots,\]
such that
* moving along \(v_{0},v,v_{1}\) in order, the face shared by \(v_{0},v,v_{1}\) is on the right; and
* moving along the walk \(\pi_{0}\) starting from \(v\), at each vertex \(v_{0^{k}}\) (\(k\geq 1\)), there are exactly 2 incident faces on the right of \(\pi_{0}\); and
* moving along the walk \(\pi_{1}\) starting from \(v\), at each vertex \(v_{1^{k}}\) (\(k\geq 1\)), there are exactly 2 incident faces on the left of \(\pi_{1}\).
By Corollary 2.6, both \(\pi_{0}\) and \(\pi_{1}\) are infinite and self-avoiding.
Let
\[\pi_{1,1}:=\pi_{1}\setminus\{v\}=v_{1},v_{11},v_{111},\ldots\]
There exists \(v_{01}\in V\) such that
* \(v_{01}\) is adjacent to \(v_{0}\); and
* \(v_{0},v_{00},v_{01}\) share a face on the left of the walk \(\pi_{0}\).
Similarly, there exist \(v_{10},v_{1,\frac{1}{2}}\in V\) such that
* both \(v_{10}\) and \(v_{1,\frac{1}{2}}\) are adjacent to \(v_{1}\); and
* \(v_{1},v_{1,\frac{1}{2}},v_{11}\) share a face on the right of the walk \(\pi_{1}\); and
* \(v_{1},v_{10},v_{1,\frac{1}{2}}\) share a face; moving along \(v_{10},v_{1},v_{1,\frac{1}{2}}\) in order, the face is on the right.
Note that \(v_{10}\neq v\), \(v_{01}\neq v\) and \(v_{1,\frac{1}{2}}\neq v\) since each vertex in \(G\) has degree at least 5.
Starting from \(v_{0},v_{01}\), construct a walk
\[\pi_{01}:=v_{0},v_{01},v_{011},v_{0111},\ldots\]
Starting from \(v_{1},v_{10}\), construct a walk
\[\pi_{10}:=v_{1},v_{10},v_{100},v_{1000},\ldots\]
such that
* moving along \(v_{00},v_{0},v_{01}\) in order, the face shared by \(v_{00},v_{0},v_{01}\) is on the right; and
* moving along \(v_{1,\frac{1}{2}},v_{1},v_{11}\) in order, the face shared by these vertices is on the right; and
* moving along the walk \(\pi_{01}\) starting from \(v_{0}\), at each vertex \(v_{01^{k}}\) (\(k\geq 1\)), there are exactly 2 incident faces on the left of \(\pi_{01}\); and
* moving along the walk \(\pi_{10}\) starting from \(v_{1}\), at each vertex \(v_{10^{k}}\) (\(k\geq 1\)), there are exactly 2 incident faces on the right of \(\pi_{10}\).
By Corollary 2.6, both walks are infinite and self-avoiding. Furthermore, let
\[\tilde{\pi}_{01}:=v,\pi_{01};\qquad\tilde{\pi}_{10}:=v,\pi_{10}\]
By Corollary 2.6, \(\tilde{\pi}_{01}\) is self-avoiding.
We claim that \(\tilde{\pi}_{10}\) is also self-avoiding. Assume \(\tilde{\pi}_{10}\) is not self-avoiding; we shall obtain a contradiction. Since \(\pi_{10}\) is self-avoiding, we can then find a polygon \(P_{0}\) consisting of vertices of \(\tilde{\pi}_{10}\) and including \(v\). Assume the polygon \(P_{0}\) has exactly \(m\) vertices on its boundary \(\partial P_{0}\), denoted by \(w_{1},\ldots,w_{m}\).
Under the assumption that each vertex of \(G\) has degree at least 5 and each face has degree at least 4, \(\kappa(z)<0\); and
(A.1) \[\frac{|f|-2}{|f|}\geq\frac{1}{2}\]
Note that each vertex along \(\partial P_{0}\) except \(v\) and \(v_{1}\) is incident to at least 2 faces in \(P_{0}\), while \(v\) and \(v_{1}\) are incident to at least 1 face in \(P_{0}\). Then we have
\[\sum_{z\in\partial P_{0}}\sum_{f\in P_{0}:f\sim z}\frac{|f|-2}{|f|}\pi\geq(m-2)\pi+\sum_{f\in P_{0}:f\sim v\text{ or }f\sim v_{1}}\frac{|f|-2}{|f|}\pi\geq(m-2)\pi+\pi>(m-2)\pi,\]
which contradicts (2.4), and therefore \(\tilde{\pi}_{10}\) is self-avoiding.
We claim that \(\pi_{01}\) and \(\pi_{10}\) never intersect each other. Otherwise, let \(w\in V\cap\pi_{01}\cap\pi_{10}\) be the first intersection vertex of \(\pi_{01}\) and \(\pi_{10}\), where \(\pi_{01}\) and \(\pi_{10}\) start from \(v_{0}\) and \(v_{1}\), respectively. Then the portion of \(\pi_{01}\) between \(v_{0}\) and \(w\), the portion of \(\pi_{10}\) between \(v_{1}\) and \(w\), and the edges \(v_{0}v\), \(v_{1}v\) form a polygon \(P\) in the plane. Assume the polygon \(P\) has exactly \(n\) vertices on its boundary; then (2.2) must hold. Under the assumption that \(v_{0}\) and \(v_{1}\) have degree at least 5, \(v_{0}\) is incident to at least 2 faces in \(P\) and \(v_{1}\) is incident to at least 1 face in \(P\).
Under the assumption that each vertex has degree at least 5 and each face has degree at least 4, \(\kappa(z)<0\); we have (A.1).
Note that each vertex along \(\partial P\) except \(v\), \(w\) and \(v_{1}\) is incident to at least \(2\) faces in \(P\), while \(v\), \(w\) and \(v_{1}\) are incident to at least \(1\) face in \(P\). Then we have
\[\sum_{z\in\partial P}\sum_{f\in P:f\sim z}\frac{|f|-2}{|f|}\pi\geq(n-3)\pi+\sum _{f\in P:f\sim v,\text{or }f\sim w,\text{ or }f\sim v_{1}}\frac{|f|-2}{|f|}\pi\geq(n-3)\pi+\frac{3\pi}{2}>(n-2)\pi.\]
Hence (2.4) never holds, and therefore \(\pi_{01}\) and \(\pi_{10}\) are disjoint. We repeat the same construction with \((v_{0},v,v_{1})\) replaced by \((v_{00},v_{0},v_{01})\).
Starting \(v_{1},v_{1,\frac{1}{2}}\), we construct a walk
\[\pi_{1,\frac{1}{2}}:=v_{1},v_{1,\frac{1}{2}},v_{1,\frac{1}{2},1},v_{1,\frac{1 }{2},1,1},\ldots\]
such that
* moving along the walk \(\pi_{1,\frac{1}{2}}\) starting from \(v_{1}\), at each vertex \(v_{1,\frac{1}{2},1^{k}}\) (\(k\geq 0\)), there are exactly \(2\) incident faces on the left.
Let
\[\tilde{\pi}_{1,\frac{1}{2}}:=v,\pi_{1,\frac{1}{2}};\qquad\pi_{1,\frac{1}{2},1}:=\pi_{1,\frac{1}{2}}\setminus\{v_{1}\}=v_{1,\frac{1}{2}},v_{1,\frac{1}{2},1},v_{1,\frac{1}{2},1,1},\ldots\]
By Corollary 2.6, \(\tilde{\pi}_{1,\frac{1}{2}}\) is infinite and self-avoiding. Moreover, using Lemma 2.5, one can prove that
* The intersection of any two paths in \(\pi_{1},\pi_{10},\pi_{1,\frac{1}{2}}\) is \(\{v_{1}\}\).
* \(\pi_{1,\frac{1}{2}}\cap\pi_{0}=\emptyset\) and \(\pi_{1,\frac{1}{2}}\cap\pi_{01}=\emptyset\)
Let \(v\) be the level-\(0\) vertex, \(v_{0},v_{1}\) be level-\(1\) vertices, and \(v_{00},v_{01},v_{10},v_{1,\frac{1}{2}},v_{11}\) be the level-\(2\) vertices. In general for \(k\geq 2\), define the set \(S_{k}\) of level-\(k\) vertices as in (2.7).
Assume we find all the level-\(k\) vertices. For each \(v_{b}\in S_{k}\), the following cases might occur
* \(b_{k}=0\): in this case we define \(2\) paths \(\pi_{b,0}\), \(\pi_{b,1}\) in the same way that \(\pi_{0}\) and \(\pi_{1}\) were defined, with \(v\) replaced by \(v_{b}\).
* \(b_{k}=1\): in this case we define \(3\) paths \(\pi_{b,0}\), \(\pi_{b,\frac{1}{2}}\), \(\pi_{b,1}\) in the same way that \(\pi_{10}\), \(\pi_{1,\frac{1}{2}}\) and \(\pi_{11}\) were defined, with \(v_{1}\) replaced by \(v_{b}\).
* \(b_{k}=\frac{1}{2}\): in this case we define \(2\) paths \(\pi_{b,0}\), \(\pi_{b,1}\) in the same way that \(\pi_{0}\) and \(\pi_{1}\) were defined, with \(v\) replaced by \(v_{b}\).
Then we find a tree \(T\), as a subgraph of \(G\), whose vertex set is \(\{v,v_{0},v_{1}\}\cup_{k\geq 2}S_{k}\) and whose edge set consists of all the edges along a path \(\pi_{b}\) such that for some \(k\geq 1\), \(b=(b_{1},\ldots,b_{k})\in\left\{0,\frac{1}{2},1\right\}^{k}\) with the property that if \(b_{j}=\frac{1}{2}\), then \(j\geq 2\) and \(b_{j-1}=1\). Then Part (2) follows.
**Acknowledgements.** ZL acknowledges support from National Science Foundation DMS 1608896 and Simons Foundation grant 638143. |
2306.16938 | Restore Translation Using Equivariant Neural Networks | Invariance to spatial transformations such as translations and rotations is a
desirable property and a basic design principle for classification neural
networks. However, the commonly used convolutional neural networks (CNNs) are
actually very sensitive to even small translations. There exist vast works to
achieve exact or approximate transformation invariance by designing
transformation-invariant models or assessing the transformations. These works
usually make changes to the standard CNNs and harm the performance on standard
datasets. In this paper, rather than modifying the classifier, we propose a
pre-classifier restorer to recover translated (or even rotated) inputs to the
original ones which will be fed into any classifier for the same dataset. The
restorer is based on a theoretical result which gives a sufficient and
necessary condition for an affine operator to be translationally equivariant on a
tensor space. | Yihan Wang, Lijia Yu, Xiao-Shan Gao | 2023-06-29T13:34:35Z | http://arxiv.org/abs/2306.16938v1 | # Restore Translation Using Equivariant Neural Networks
###### Abstract
Invariance to spatial transformations such as translations and rotations is a desirable property and a basic design principle for classification neural networks. However, the commonly used convolutional neural networks (CNNs) are actually very sensitive to even small translations. There exists a vast body of work on achieving exact or approximate transformation invariance by designing transformation-invariant models or assessing the transformations. These works usually make changes to the standard CNNs and harm the performance on standard datasets. In this paper, rather than modifying the classifier, we propose a pre-classifier restorer to recover translated (or even rotated) inputs to the original ones which will be fed into any classifier for the same dataset. The restorer is based on a theoretical result which gives a sufficient and necessary condition for an affine operator to be translationally equivariant on a tensor space.
## 1 Introduction
Deep convolutional neural networks (CNNs) have outperformed humans in many computer vision tasks [9, 12]. One of the key ideas in designing CNNs is that the convolution layer is equivariant with respect to translations, which was emphasized both in earlier work [5] and in modern CNNs [12]. However, commonly used components, such as pooling [7] and dropout [19, 20], which help the network extract features and generalize, actually make CNNs not equivariant to even small translations, as pointed out in [1, 3]. As a comprehensive evaluation, Figure 1 shows that two classification CNNs suffer accuracy reductions of more than \(11\%\) and \(59\%\) respectively on CIFAR-10 and MNIST, when the inputs are horizontally and vertically translated by at most 3 pixels.
Invariance to spatial transformations, including translations, rotations and scaling, is a desirable property for classification neural networks and the past few decades have witnessed thriving explorations on this topic. In general, there exist three ways to achieve exact or approximate invariance. The first is to design transformation-invariant neural network structures [2, 6, 8, 10, 15, 16, 18, 21]. The second is to assess and approximate transformations via a learnable module [4, 11] and then use the approximation to reduce the transformed inputs to "standard" ones. The third is data augmentation [1, 3, 17] by adding various transformations of the samples in the original dataset.
Those ad-hoc architectures to achieve invariance often bring extra parameters and harm the network performance on standard datasets. Moreover, the various designs with different purposes are not compatible with each other. Data augmentation is not a scalable method since the invariance gained from a certain augmentation protocol does not generalize to other transformations [1]. All three approaches, including learnable modules such as the Spatial Transformer, require training the classifier from scratch and fail to endow existing trained networks with any invariance. It was indicated in [1] that "the problem of insuring invariance to small image transformations in neural networks while preserving high accuracy remains unsolved."
In this paper, rather than designing any in-classifier component to make the classifier invariant to some transformation, we propose a pre-classifier restorer to restore translated or rotated inputs to the original ones. The invariance is achieved by feeding the restored inputs into any following classifier. Our restorer depends only on the dataset instead of the classifier. Namely, the training processes of the restorer and the classifier are separate, and a restorer is universal to any classifier trained on the same dataset.
We split the whole restoration into two stages, transformation estimation and inverse transformation, see Figure 2. In the first stage, we expect that standard inputs lead to standard outputs and that the outputs of translated inputs reflect the translations. Naturally, what we need is a strictly translation-equivariant neural network. In Section 3, we investigate, at the theoretical level, the sufficient and necessary condition for constructing a strictly equivariant affine operator on a tensor space. The condition results in _the circular filters_, see Definition 3.5, as the fundamental module of a strictly translation-equivariant neural network. We give the canonical architecture of translation-equivariant networks, see Equation (2). In Section 4, details of the restorer are presented. We define a translation estimator, the core component of a restorer, as a strictly translation-equivariant neural network that guarantees the first component of every output on a dataset to be the largest component, see Definition 4.1. For a translated input, due to the strict equivariance, the largest component of the output reflects the translation. Thus we can translate it inversely in the second stage and obtain the original image. Though the restorer is independent of the following classifier, it indeed depends on the dataset. Given a dataset satisfying some reasonable conditions, i.e. _an aperiodic finite dataset_, see Definition 4.2, we prove the existence of a translation estimator, i.e. a restorer, with the canonical architecture for this dataset. Moreover, rotations can be viewed as translations by converting Cartesian coordinates to polar coordinates, and the rotation restorer arises in the same way.
In Section 5, the experiments on MNIST, 3D-MNIST and CIFAR-10 show that our restorers not only visually restore the translated inputs but also largely eliminate the accuracy reduction phenomenon.
## 2 Related works
As a generalization of convolutional neural networks, group-equivariant convolutional neural networks [2, 6] exploited symmetries to endow networks with invariance to some group actions, such as the combination of translations and rotations by certain angles. The warped convolutions [10] converted some other spatial transformations into translations and thus obtained equivariance to these spatial transformations. Scale-invariance [21, 8, 15]
Figure 1: The accuracy reduction after vertical and horizontal translations. The translation scope is [-3, 3] pixels. Left: LeNet-5 on MNIST; Right: VGG-16 on CIFAR-10.
was injected into networks by some ad-hoc components. Random transformations [16] of feature maps were introduced in order to prevent the dependence of network outputs on specific poses of inputs. Similarly, probabilistic max pooling [18] of the hidden units over the set of transformations improved the invariance of networks in unsupervised learning. Moreover, local covariant feature detection methods [14, 22] were proposed to address the problem of extracting viewpoint-invariant features from images.
Another approach to achieving invariance is "shiftable" down-sampling [13], in which any original pixel can be linearly interpolated from the pixels on the sampling grid. Such "shiftable" down-sampling exists if and only if the sampling frequency is at least twice the highest frequency of the sampled signal.
The Spatial Transformer [4, 11], as a learnable module, produces a predictive transformation for each input image and then spatially transforms the input to a canonical pose to simplify the inference in the subsequent layers. Our restorers give input-specific transformations as well and adjust the input to alleviate the poor invariance of the following classifiers. Although Spatial Transformers and our restorers are both learnable modules, the training of the former depends not only on data but also on the subsequent layers, while the latter are independent of the subsequent classifiers.
## 3 Equivariant neural networks
Though objects in nature have continuous properties, once captured and converted to digital signals, their properties are represented by real tensors. In this section, we study the equivariance of operators on a tensor space.
### 3.1 Equivariance in tensor space
Assume that a map \(\tilde{x}:\mathbb{R}^{d}\to\mathbb{D}\) stands for a property of some \(d\)-dimensional object where \(\mathbb{D}\subseteq\mathbb{R}\). Sampling \(\tilde{x}\) over a \((n_{1},n_{2},\cdots,n_{d})\)-grid results in a tensor \(x\) in a tensor space
\[\mathcal{H}\coloneqq\mathbb{D}^{n_{1}}\otimes\mathbb{D}^{n_{2}}\otimes\cdots \otimes\mathbb{D}^{n_{d}}. \tag{1}\]
We denote \([n]=[0,1,\ldots,n-1]\) for \(n\in\mathbb{Z}_{+}\) and assume \(k\operatorname{\text{mod}}n\in[n]\) for \(k\in\mathbb{Z}\). For an index \(I=(i_{1},i_{2},\cdots,i_{d})\in\prod_{i=1}^{d}[n_{i}]\) and \(x\in\mathcal{H}\), denote \(x[I]\) to be the element of \(x\) with subscript \((i_{1},i_{2},\cdots,i_{d})\). For convenience, we extend the index of \(\mathcal{H}\) to \(I=(i_{1},i_{2},\cdots,i_{d})\in\mathbb{Z}^{d}\) by defining
\[x[I]=x[i_{1}\operatorname{\text{mod}}n_{1},\cdots,i_{d}\operatorname{\text{ mod}}n_{d}].\]
**Definition 3.1** (Translation).: _A translation \(T^{M}:\mathcal{H}\to\mathcal{H}\) with \(M\in\mathbb{Z}^{d}\) is an invertible linear operator such that for all \(I\in\mathbb{Z}^{d}\) and \(x\in\mathcal{H}\),_
\[T^{M}(x)[I]=x[I-M].\]
_The inverse of \(T^{M}\) is clearly \(T^{-M}\)._
**Definition 3.2** (Equivariance).: _A map \(w:\mathcal{H}\to\mathcal{H}\) is called equivariant with respect to translations if for all \(x\in\mathcal{H}\) and \(M\in\mathbb{Z}^{d}\),_
\[T^{M}(w(x))=w(T^{M}(x)).\]
**Definition 3.3** (Vectorization).: _A tensor \(x\) can be vectorized to \(X\in\overrightarrow{\mathcal{H}}=\mathbb{D}^{N}\) with \(N=n_{1}n_{2}\cdots n_{d}\) such that_
\[X(\delta(I))\coloneqq x[I],\]
_where \(\delta(I)\coloneqq(i_{1}\operatorname{\text{mod}}n_{1})n_{2}n_{3}\cdots n_{d }+(i_{2}\operatorname{\text{mod}}n_{2})n_{3}n_{4}\cdots n_{d}+\cdots+(i_{d} \operatorname{\text{mod}}n_{d})\), and we denote \(X=\overrightarrow{x}\). Moreover, the translation \(T^{M}\) is vectorized as \(T^{M}(X)\coloneqq\overrightarrow{T^{M}(x)}\)._
### 3.2 Equivariant operators
When \(\mathbb{D}=\mathbb{R}\), the tensor space \(\mathcal{H}\) is a Hilbert space by defining the inner product as \(x\cdot z\coloneqq\overrightarrow{x}\cdot\overrightarrow{z}\) which is the inner product in vector space \(\overrightarrow{\mathcal{H}}\). In the rest of this section, we assume \(\mathbb{D}=\mathbb{R}\).
According to the Riesz representation theorem, there is a bijection between the continuous linear operator space and the tensor space. That is, a continuous linear operator \(v:\mathcal{H}\to\mathbb{R}\) can be viewed as a tensor \(v\in\mathcal{H}\) satisfying \(v(x)=v\cdot x\). Now we can translate \(v\) by \(T^{M}\) and obtain \(T^{M}(v):\mathcal{H}\to\mathbb{R}\) such that \(T^{M}(v)(x)=T^{M}(v)\cdot x\).
We consider a continuous linear operator \(w:\mathcal{H}\to\mathcal{H}\). For \(I\in\mathbb{Z}^{d}\) and \(x\in\mathcal{H}\), denote \(w_{I}(x)=w(x)[I]\). Then \(w_{I}:\mathcal{H}\to\mathbb{R}\) is a continuous linear operator. An _affine operator_\(\alpha:\mathcal{H}\to\mathcal{H}\) differs from a continuous linear operator \(w\) by a _bias tensor_\(c\) such that \(\alpha(x)=w(x)+c\) for all \(x\in\mathcal{H}\).
**Theorem 3.4**.: _Let \(\alpha(x)=w(x)+c:\mathcal{H}\to\mathcal{H}\) be an affine operator. Then, \(\alpha\) is equivariant with respect to translations if and only if for all \(M\in\mathbb{Z}^{d}\),_
\[w_{M}=T^{M}(w_{\mathbf{0}})\text{ and }c\propto\mathbf{1},\]
_where \(\mathbf{0}\) is the zero vector in \(\mathbb{Z}^{d}\) and \(c\propto\mathbf{1}\) means that \(c\) is a constant tensor, that is, all of its entries are the same._
Proof of Theorem 3.4 is given in Appendix A. Recall that \(\overrightarrow{\mathcal{H}}=\mathbb{R}^{N}\) is the vectorization of \(\mathcal{H}\) and that \(T^{M}\) also translates vectors in \(\overrightarrow{\mathcal{H}}\). Each continuous linear operator on \(\mathcal{H}\) corresponds to a matrix in \(\mathbb{R}^{N\times N}\) and each bias operator corresponds to a vector in \(\mathbb{R}^{N}\). Now we consider translation equivariance in the vector space.
**Definition 3.5** (Circular filter).: _Let \(W=(W_{0},W_{1},\cdots,W_{N-1})^{T}\) be a matrix in \(\mathbb{R}^{N\times N}\). \(W\) is called a circular filter if \(W_{\delta(M)}=T^{M}(W_{0})\) for all \(M\in\mathbb{Z}^{d}\)._
As the vector version of Theorem 3.4, we have
**Corollary 3.6**.: _Let \(A:\mathbb{R}^{N}\to\mathbb{R}^{N}\) be an affine transformation such that_
\[A(X)=W\cdot X+C,\]
_in which \(W\in\mathbb{R}^{N\times N}\), \(C\in\mathbb{R}^{N}\). Then, \(A\) is equivariant with respect to translations in the sense that for all \(M\in\mathbb{Z}^{d}\)_
\[A(T^{M}(X))=T^{M}(A(X))\]
_if and only if \(W\) is a circular filter and \(C\propto\mathbf{1}\)._
This affine transformation is very similar to the commonly used convolutional layers [12, 5] in terms of shared parameters and a convolution-like operation. However, strict equivariance calls for equal input and output sizes and for circular convolutions, both of which are usually violated by CNNs.
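To see Corollary 3.6 concretely, the following minimal sketch (our illustration; the 1D setting and variable names are our own choices, not the paper's) builds a circular filter from a base row and checks numerically that the resulting affine map commutes with translations, with `np.roll` playing the role of \(T^{M}\):

```python
import numpy as np

N = 8
rng = np.random.default_rng(0)

# Base row W_0; every other row is a circular shift of it (Definition 3.5).
w0 = rng.normal(size=N)
W = np.stack([np.roll(w0, m) for m in range(N)])  # circulant matrix

C = 0.7 * np.ones(N)          # bias proportional to the all-ones vector
A = lambda X: W @ X + C       # affine operator of Corollary 3.6

X = rng.normal(size=N)
M = 3
lhs = A(np.roll(X, M))        # A(T^M(X)), with np.roll acting as T^M
rhs = np.roll(A(X), M)        # T^M(A(X))
assert np.allclose(lhs, rhs)  # strict translation equivariance
```

Dropping either condition, e.g. replacing `C` with a non-constant vector, makes the assertion fail, matching the "only if" direction of the corollary.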
### 3.3 Equivariant neural networks
To compose a strictly translation-equivariant network, the spatial sizes of the input and output in each layer must be the same and thus down-samplings are not allowed. Though Corollary 3.6 provides the fundamental component of a strictly translation-equivariant network, different compositions of this component lead to various equivariant networks. Here we give the _canonical architecture_. We construct the strictly translation-equivariant network \(F\) with \(L\) layers as
\[F(X)=F_{L}\circ F_{L-1}\circ\cdots\circ F_{1}(X). \tag{2}\]
The \(l\)-th layer \(F_{l}\) has \(n_{l}\) channels, and for an input \(X\in\mathbb{R}^{n_{l-1}\times N}\) we have
\[F_{l}(X)=\sigma(W[l]\cdot X+C[l])\in\mathbb{R}^{n_{l}\times N}, \tag{3}\]
where
\[W[l] =(W^{1}[l],\cdots,W^{n_{l}}[l])\in\mathbb{R}^{n_{l}\times n_{l-1} \times N\times N},\] \[C[l] =(C^{1}[l]\cdot\mathbf{1},\cdots,C^{n_{l}}[l]\cdot\mathbf{1}),\] \[W^{k}[l] =(W^{k,1}[l],\cdots,W^{k,n_{l-1}}[l])\in\mathbb{R}^{n_{l-1}\times N \times N},\] \[C^{k}[l] =(C^{k,1}[l],\cdots,C^{k,n_{l-1}}[l])\in\mathbb{R}^{n_{l-1}},\]
\(\sigma\) is the activation, \(W^{k,r}[l]\in\mathbb{R}^{N\times N}\) are circular filters, \(C^{k,r}[l]\in\mathbb{R}\) are constant biases for \(k=1,\cdots,n_{l}\) and \(r=1,\cdots,n_{l-1}\), the \(\cdot\) denotes the inner product, and \(\mathbf{1}\) is the vector all of whose components are 1.
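When the circular filters have small support, each product \(W^{k,r}[l]\cdot X\) reduces to a circular convolution. A minimal PyTorch sketch of one layer \(F_{l}\) (our illustration; the kernel size and channel counts are arbitrary choices, not the paper's) might look as follows:

```python
import torch
import torch.nn as nn

class EquivariantLayer(nn.Module):
    """One layer F_l of Equation (2): circular convolution + constant bias + ReLU.

    Stride 1, circular padding and no pooling keep the spatial size fixed,
    which strict translation equivariance requires.
    """
    def __init__(self, in_channels: int, out_channels: int, kernel_size: int = 3):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, out_channels, kernel_size,
                              stride=1, padding=kernel_size // 2,
                              padding_mode="circular", bias=True)
        self.act = nn.ReLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(self.conv(x))

# Sanity check: shifting the input shifts the output by the same amount.
layer = EquivariantLayer(1, 4)
x = torch.randn(1, 1, 32, 32)
out_shifted = layer(torch.roll(x, shifts=(5, -3), dims=(2, 3)))
shifted_out = torch.roll(layer(x), shifts=(5, -3), dims=(2, 3))
assert torch.allclose(out_shifted, shifted_out, atol=1e-5)
```

The per-channel bias of `Conv2d` is constant over the spatial grid, matching the condition \(C\propto\mathbf{1}\) of Corollary 3.6.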
## 4 Translation restorer
### 4.1 Method
In Section 3.3, we propose a strictly equivariant neural network architecture (2) such that any translation on the input will be reflected on the output. Generally speaking, once the outputs of an equivariant network on a dataset have some spatial structure, this structure shifts consistently as the input shifts. Thus, the translation parameter of a shifted input can be solved from its output. Finally, we can restore the input via the inverse translation. Figure 2 shows how a restorer works.
The whole restoration process splits into two stages, translation estimation and inverse translation. We first define the translation estimator which outputs a consistent and special structure on a dataset.
**Definition 4.1**.: _Let \(\mathcal{D}\subset\mathbb{D}^{P\times N}\) be a dataset with \(P\) channels. Then a translation-equivariant network_
\[F:\mathbb{R}^{P\times N}\rightarrow\mathbb{R}^{N}\]
_is said to be a translation estimator for \(\mathcal{D}\) if_
\[F(X)[0]=\text{max}_{i=0}^{N-1}F(X)[i],\]
_where \(F(X)[i]\) is the \(i\)-th component of \(F(X)\)._
Figure 2: The pre-classifier translation restorer. For shifted data \(T^{M}(x)\) as the input, the translation estimator obtains the translation \(M\) and restores the original data \(T^{-M}(T^{M}(x))=x\), which is fed into a pre-trained classifier.
Given such a translation estimator for dataset \(\mathcal{D}\) and a shifted input \(X^{\prime}=T^{M}(X)\) for some \(X\in\mathcal{D}\), we propagate \(X^{\prime}\) through \(F\) and get the output \(F(X^{\prime})\in\mathbb{R}^{N}\). Since the first component of \(F(X)\) is the largest, the location of the largest component of \(F(X^{\prime})\) is exactly the translation parameter:
\[\delta(M)=\text{argmax}_{i=0}^{N-1}F(X^{\prime})[i].\]
Then we can restore \(X=T^{-M}(X^{\prime})\) by the inverse translation. The restored inputs can be fed to any classifier trained on the dataset \(\mathcal{D}\).
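A minimal sketch of this two-stage restoration for \(d=2\) (our illustration; `estimator` is assumed to be a trained translation estimator in the sense of Definition 4.1, returning a flat score vector of length \(H\cdot W\)):

```python
import numpy as np

def restore(x_shifted: np.ndarray, estimator) -> np.ndarray:
    """Undo a circular shift of an (H, W) image using a translation estimator."""
    H, W = x_shifted.shape
    scores = estimator(x_shifted)          # shape (H * W,)
    flat_idx = int(np.argmax(scores))      # delta(M)
    m_row, m_col = divmod(flat_idx, W)     # invert delta for d = 2
    # Apply the inverse translation T^{-M} via circular shifts.
    return np.roll(x_shifted, shift=(-m_row, -m_col), axis=(0, 1))
```

The `divmod` call inverts \(\delta(I)=i_{1}n_{2}+i_{2}\) in the two-dimensional case.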
### 4.2 Existence of the restorer
In this section, we show the existence of restorers, that is, of translation estimators. Note that our restorer is independent of the following classifier but dependent on the dataset. If a dataset contains both an image and a translated version of it, the estimator is necessarily confused; we introduce aperiodic datasets to exclude such cases.
**Definition 4.2** (Aperiodic dataset).: _Let \(\mathcal{D}\subset\mathbb{D}^{P\times N}\) be a finite dataset with \(P\) channels. We call \(\mathcal{D}\) an aperiodic dataset if \(\mathbf{0}\notin\mathcal{D}\) and_
\[T^{M}(X)=X^{\prime}\iff M=\mathbf{0}\text{ and }X=X^{\prime},\]
_for \(M\in\mathbb{Z}^{d+1}\) and \(X,X^{\prime}\in\mathcal{D}\). Here \(d\) is the spatial dimension and \(M\) decides the translation in the channel dimension in addition._
Let \(\mathcal{D}\) be an aperiodic dataset. Given that \(\mathbb{D}=[2^{Q+1}]\), which is the case in image classification, we prove the existence of the translation estimator for such an aperiodic dataset. The proof consists of two steps. The data are first mapped to their binary decompositions through a translation-equivariant network of the form of Equation (2), and then the existence of the translation restorer in the form of Equation (2) is proved for binary data.
Let \(\mathbb{D}=[2^{Q+1}]\) and \(\mathbb{B}=\{0,1\}\). We denote \(\eta:\mathbb{D}\rightarrow\mathbb{B}^{Q}\) to be the binary decomposition, for example \(\eta(2)=(0,1,0)\) and \(\eta(3)=(0,1,1)\). We perform the binary decomposition on \(X\in\mathbb{D}^{P\times N}\) element-wise and obtain \(\eta(X)\in\mathbb{B}^{G\times N}\), where \(G=PQ\) is the number of channels in the binary representation. A dataset \(\mathcal{D}\subseteq\mathbb{D}^{P\times N}\) can be decomposed into \(\mathcal{B}\subset\mathbb{B}^{G\times N}\). Note that the dataset \(\mathcal{D}\) is aperiodic if and only if its binary decomposition \(\mathcal{B}\) is aperiodic.
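For concreteness, a minimal element-wise binary decomposition can be sketched as follows (our illustration; the least-significant-bit-first ordering and the range convention are our own choices):

```python
import numpy as np

def binary_decompose(x: np.ndarray, q: int) -> np.ndarray:
    """Element-wise binary decomposition: (P, N) integer channels in [0, 2**q)
    become (P * q, N) binary channels."""
    bits = [(x >> k) & 1 for k in range(q)]  # least significant bit first
    return np.concatenate(bits, axis=0).astype(np.uint8)

x = np.array([[0, 1, 2, 3, 7]])
print(binary_decompose(x, 3))  # three binary channels per input channel
```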
The following Lemma 4.3 demonstrates the existence of a translation-equivariant network which coincides with the binary decomposition \(\eta\) on \([2^{Q+1}]^{P\times N}\). Proof details are placed in Appendix B.
**Lemma 4.3**.: _Let \(\eta:[2^{Q+1}]\rightarrow\mathbb{B}^{Q}\) be the binary decomposition. There exists a \((2Q+2)\)-layer network \(F\) in the form of Equation (2) with ReLU activations and width at most \((Q+1)N\) such that for \(X\in[2^{Q+1}]^{P\times N}\)_
\[F(X)=\eta(X).\]
The following Lemma 4.4 demonstrates the existence of a 2-layer translation restorer for an aperiodic binary dataset. Proof details are placed in Appendix C.
**Lemma 4.4**.: _Let \(\mathcal{B}=\{Z_{s}|s=1,2,\cdots,S\}\subset\mathbb{B}^{G\times N}\) be an aperiodic binary dataset. Then there exists a 2-layer network \(F\) in the form of Equation (2) with ReLU activations and width at most \(SN\) such that for all \(s=1,2,\cdots,S\),_
\[F(Z_{s})[0]=\text{max}_{i=0}^{N-1}F(Z_{s})[i].\]
Given a \((2Q+2)\)-layer network \(F^{\prime}\) obtained from Lemma 4.3 and a 2-layer network \(F^{\prime\prime}\) obtained from Lemma 4.4, we stack them and have \(F=F^{\prime\prime}\circ F^{\prime}\) which is exactly a translation restorer. We thus have proved the following theorem.
**Theorem 4.5**.: _Let \(\mathcal{D}=\{X_{s}|s=1,2,\cdots,S\}\subset[2^{Q+1}]^{P\times N}\) be an aperiodic dataset. Then there exists a network \(F:\mathbb{R}^{P\times N}\rightarrow\mathbb{R}^{N}\) in the form of Equation (2) with ReLU activations such that for \(s=1,2,\cdots,S\),_
\[F(X_{s})[0]=\text{max}_{i=0}^{N-1}F(X_{s})[i],\]
_of which the depth is at most \(2Q+4\) and the width is at most \(\max(SN,(Q+1)N)\). Namely, this network is a translation restorer for \(\mathcal{D}\)._
## 5 Experiments
The core component of the restorer is the translation estimator which outputs the translation parameter of the shifted inputs.
We use the architecture described in Equation (2) with \(L=6\), \(n_{l}=1\) for \(l=1,\cdots,L\) and ReLU activations. The training procedure aims at maximizing the first component of the outputs. Thus the max component of the output indicates the input shift. The experimental settings are given in Appendix D. We report four sets of experiments below.
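As a sketch of this objective (our guess at a concrete loss, not the paper's specification), the estimator can be trained with a cross-entropy loss whose target is always component 0:

```python
import torch
import torch.nn as nn

def train_estimator(net, loader, epochs: int = 10, lr: float = 1e-3):
    """`net` maps a batch (B, P, N) of unshifted data to score vectors (B, N);
    pushing component 0 to the top realizes Definition 4.1 on the dataset."""
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    ce = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x in loader:
            scores = net(x)                                  # (B, N)
            target = torch.zeros(len(x), dtype=torch.long)   # index 0
            loss = ce(scores, target)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return net
```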
Figure 3: The restorers for MNIST and CIFAR-10.
### 5.1 Translation restoration
We first focus on the performance of translation restoration. Experiments are conducted on MNIST, CIFAR-10, and 3D-MNIST.
2D Images.We train translation restorers for MNIST and CIFAR-10. MNIST images are resized to 32x32, and CIFAR-10 images are padded with 4 blank pixels at the edges.
In Figure 3, the left column shows the original images, the middle column the randomly shifted images, and the right column the restored images. On both datasets, images are randomly shifted vertically and horizontally by at most \(\frac{1}{4}\) of their size. The shift is a circular shift, where pixels shifted out of the figure reappear on the other end. We can see that the shifted images are disorganized, while the restored images closely resemble the original images.
To evaluate the restoration performance of pretrained restorers, we train classifiers and test them on randomly shifted images and on restored ones; the results are given in Table 1. When images are not shifted, the restorers lead to only \(0.3\%\) and \(0.03\%\) accuracy reductions on the two datasets. Nevertheless, even when the translation scope is 1, the restorers improve the accuracy. Moreover, no matter how the images are shifted, the restorer repairs them to the same status and results in the same classification accuracy, namely \(98.59\%\) and \(88.18\%\), while the accuracies drop significantly without the restorer; the larger the range of translation, the more obvious the restoration effect.
Different Architectures.Our proposed restorer is an independent module that can be placed before any classifier. It scales to whatever architecture the subsequent classifier uses.
In Table 2, we evaluate the restoration performance on popular architectures including VGG-16, ResNet-18, DenseNet-121, and MobileNet v2. Translated images (w/Trans.) are randomly shifted within scope 4 in both vertical and horizontal directions. The reduction of accuracy on original images is no more than \(0.04\%\) and the restorer improves the accuracy on shifted images by \(1.66\%\sim 6.02\%\).
| | Res. \ Trans. | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| MNIST | w/o | 98.89 | 98.21 | 95.41 | 87.07 | 76.61 | 62.9 | 51.33 | 41.1 | 35.7 |
| | w/ | 98.59 | 98.59 | 98.59 | 98.59 | 98.59 | 98.59 | 98.59 | 98.59 | 98.59 |
| | Effect | -0.3 | +0.38 | +3.18 | +11.52 | +21.98 | +35.69 | +47.26 | +57.49 | +62.89 |
| CIFAR-10 | w/o | 88.21 | 86.58 | 85.9 | 83.65 | 82.16 | 80.46 | 79.37 | 77.71 | 76.01 |
| | w/ | 88.18 | 88.18 | 88.18 | 88.18 | 88.18 | 88.18 | 88.18 | 88.18 | 88.18 |
| | Effect | -0.03 | +1.6 | +2.28 | +4.53 | +6.02 | +7.72 | +8.81 | +10.47 | +12.17 |

Table 1: Restoration performance on MNIST and CIFAR-10. Images are randomly shifted within the translation scope ranging from 0 to 8 in both vertical and horizontal directions. We use LeNet-5 on MNIST and ResNet-18 on CIFAR-10. "Res." and "Trans." stand for restorer and translation, respectively.
| Res. \ Trans. | VGG-16 w/o | VGG-16 w/ | ResNet-18 w/o | ResNet-18 w/ | DenseNet-121 w/o | DenseNet-121 w/ | MobileNet v2 w/o | MobileNet v2 w/ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| w/o | 89.27 | 83.40 | 88.21 | 82.16 | 92.14 | 90.46 | 88.10 | 83.36 |
| w/ | 89.23 | 89.23 | 88.18 | 88.18 | 92.12 | 92.12 | 88.09 | 88.09 |
| Effect | -0.04 | +5.83 | -0.03 | +6.02 | -0.02 | +1.66 | -0.01 | +4.73 |

Table 2: Restoration performance on different architectures and CIFAR-10. "w/o" and "w/" under each architecture denote untranslated and translated inputs, respectively.
Translation Augmentation.Training with translation augmentation is another approach to improving the translational invariance of a model. However, translation augmentation is limited to a certain scope and thus cannot ensure effectiveness for test images shifted beyond that scope.
In Figure 4, we compare the restoration performance on models not trained with translation augmentation (dashed lines) and models trained with translation augmentation (solid lines). The augmentation scope is \(10\%\) of the image size, that is, 3 pixels for MNIST and 4 pixels for CIFAR-10. Translation augmentation indeed improves the translational invariance of the classifier on images shifted within the augmentation scope. However, when the shift is beyond the augmentation scope, the accuracy begins to degrade. In such cases, the pre-classifier restorer is still able to calibrate the shift and improve the accuracy of the classifier trained with augmentation.
3D Voxelization Images.3D-MNIST contains 3D point clouds generated from images of MNIST. Voxelizing the point clouds yields grayscale 3D tensors.
Figure 5 visualizes the restoration on 3D-MNIST. In the middle of each subfigure, the 3-dimensional digit is shifted in a fixed direction. This fixed direction is detected by the translation estimator and the restored digit is shown on the right.
Figure 4: Restoration performance on classifiers trained with or without translation augmentations. The models are LeNet-5 for MNIST and VGG-16 for CIFAR-10. ”res.” and ”aug” stand for restorer and augmentation, respectively.
Figure 5: The restorer on 3D-MNIST. In each sub-figure, the left is the original digit, the middle is the shifted digit, and the right is the restored digit.
### 5.2 Rotation restoration
Rotation can be regarded as a kind of translation. The Euclidean space \(\mathbb{R}^{d+1}\) can be characterized by polar coordinates
\[(\phi_{1},\cdots,\phi_{d-1},\phi_{d},r)\in[0,\pi]\times\cdots\times[0,\pi] \times[0,2\pi)\times\mathbb{R}^{+}.\]
We can sample a property map, defined in Section 3.1, \(\tilde{x}:\mathbb{R}^{d+1}\rightarrow\mathbb{R}\) over a \((n_{1},n_{2},\cdots,n_{d+1})\)-grid along the polar axes and obtain a tensor \(x\) such that for given \(R>0\) and \(0<a<1\)
\[x(I)=\tilde{x}(\frac{\pi i_{1}}{n_{1}-1},\cdots,\frac{\pi i_{d-1}}{n_{d-1}-1}, \frac{2\pi i_{d}}{n_{d}},Ra^{i_{d+1}}),\]
where \(I=(i_{1},i_{2},\cdots,i_{d+1})\in\prod_{i=1}^{d+1}[n_{i}]\). The last index \(Ra^{i_{d+1}}\) can be replaced with \(\frac{i_{d+1}R}{n_{d+1}}\). Note that the vectorization \(X\) is in \(\mathbb{D}^{n_{d+1}\times N}\) with \(N=n_{1}n_{2}\cdots n_{d}.\) Since we only care about the rotation, i.e. the translation \(T^{M}\) on the first \(d\) components of \(I\), the space \(\mathbb{D}^{n_{d+1}\times N}\) is viewed as an \(n_{d+1}\)-channel vector space. Thus, we can treat rotation as translation and leverage the method discussed above to restore rotated inputs.
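As a sketch of this coordinate change (our illustration using OpenCV, a tool choice the paper does not specify), an image can be resampled onto a polar grid so that a rotation about the centre becomes a circular shift along the angular axis, after which the translation restorer of Section 4 applies unchanged:

```python
import cv2
import numpy as np

def to_polar(img: np.ndarray) -> np.ndarray:
    """Resample an image onto a polar grid; output rows index the angle,
    columns the radius, so rotation becomes np.roll along axis 0."""
    h, w = img.shape[:2]
    centre = (w / 2.0, h / 2.0)
    return cv2.warpPolar(img, (w, h), centre, min(centre), cv2.WARP_POLAR_LINEAR)
```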
Visualization.We experiment with the rotation restoration on MNIST. Since the transfer from Cartesian coordinates to polar coordinates requires high resolution, we first resize the images to \(224\times 224\) pixels. Other settings are similar to the aforementioned experiments.
Figure 6 visualizes the rotation restoration. We can tell from it that most rotated images are restored correctly, though some of them are not restored to the original images. The rotated digit 9 in the top row is more like an erect digit 6 than the original one and the restorer just leaves it alone. The reason why rotation restoration is not as perfect as translation restoration is that the dataset is not aperiodic with respect to rotations. On the one hand, some digits seem like the rotated version of other digits, such as 6 and 9. On the other hand, even in a certain digit class, images vary in digit poses. For example, a rotated digit 0 is similar to an erect one.
Note that the group-equivariant CNNs in [2, 6] can only deal with rotations by certain angles such as \(90^{\circ}\), \(180^{\circ}\), and \(270^{\circ}\). On the other hand, our approach can deal with rotations from more angles.
Figure 6: The rotation restoration on the first 24 images in the test set of MNIST. The left column of each subfigure is the original images and the right is the restored images. In the middle column of each subfigure, the images are rotated \(40^{\circ},90^{\circ}\) and \(150^{\circ}\) respectively.
## 6 Conclusion
This paper contributes to equivariant neural networks in two aspects. Theoretically, we give the sufficient and necessary conditions for an affine operator \(Wx+b\) to be translation-equivariant, that is, \(Wx+b\) is translation-equivariant on a tensor space if and only if \(W\) has the high-dimensional convolution structure and \(b\) is a constant tensor. It is well known that if \(W\) has the convolution structure, then \(Wx\) is equivariant to translations [5, 9], and this is one of the basic principles behind the design of CNNs. Our work gives new insight into the convolutional structure used in CNNs in that the convolution structure is also the necessary condition, and hence the most general structure, for translational equivariance. Practically, we propose the translation restorer to recover the original images from translated or rotated ones. The restorer can be combined with any classifier to alleviate the performance reduction problem for translated or rotated images. As a limitation, training a restorer on a large dataset such as ImageNet is still computationally difficult.
|
2309.00707 | The Power of Patents: Leveraging Text Mining and Social Network Analysis
to Forecast IoT Trends | Technology has become an indispensable competitive tool as science and
technology have progressed throughout history. Organizations can compete on an
equal footing by implementing technology appropriately. Technology trends or
technology lifecycles begin during the initiation phase. Finally, it reaches
saturation after entering the maturity phase. As technology reaches saturation,
it will be removed or replaced by another. This makes investing in technologies
during this phase unjustifiable. Technology forecasting is a critical tool for
research and development to determine the future direction of technology. Based
on registered patents, this study examined the trends of IOT technologies. A
total of 3697 patents related to the Internet of Things from the last six years
of patenting have been gathered using lens.org for this purpose. The main
people and companies were identified through the creation of the IOT patent
registration cooperation network, and the main groups active in patent
registration were identified by the community detection technique. The patents
were then divided into six technology categories: Safety and Security,
Information Services, Public Safety and Environment Monitoring, Collaborative
Aware Systems, Smart Homes/Buildings, and Smart Grid. And their technical
maturity was identified and examined using the Sigma Plot program. Based on the
findings, information services technologies are in the saturation stage, while
both smart homes/buildings, and smart grid technologies are in the saturation
stage. Three technologies, Safety and Security, Public Safety and Environment
Monitoring, and Collaborative Aware Systems are in the maturity stage. | Mehrdad Maghsoudi, Reza Nourbakhsh, Mehrdad Agha Mohammadali Kermani, Rahim Khanizad | 2023-09-01T19:27:21Z | http://arxiv.org/abs/2309.00707v1 | ## The Power of Patents: Leveraging Text Mining and Social Network Analysis to Forecast IoT Trends
## Abstract
Technology has become an indispensable competitive tool as science and technology have progressed throughout history. Organizations can compete on an equal footing by implementing technology appropriately. Technology trends, or technology life cycles, begin with the initiation phase and finally reach saturation after entering the maturity phase. As a technology reaches saturation, it will be removed or replaced by another, which makes investing in technologies during this phase unjustifiable. Technology forecasting is a critical tool for research and development to determine the future direction of technology. Based on registered patents, this study examined the trends of IoT technologies. A total of 3697 patents related to the Internet of Things from the last six years of patenting were gathered using lens.org for this purpose. The main people and companies were identified through the creation of the IoT patent registration cooperation network, and the main groups active in patent registration were identified by the community detection technique. The patents were then divided into six technology categories: Safety and Security, Information Services, Public Safety and Environment Monitoring, Collaborative Aware Systems, Smart Homes/Buildings, and Smart Grid. Their technical maturity was then identified and examined using the Sigma Plot program. Based on the findings, information services technologies are in the saturation stage, while smart homes/buildings and smart grid technologies are likewise in the saturation stage. Three technologies, Safety and Security, Public Safety and Environment Monitoring, and Collaborative Aware Systems, are in the maturity stage.
_Keywords:_ **Internet of Things, Patent Mining, IoT patent, community detection, Technology life cycle.**
## 1 Introduction
A network of physical devices that collect, analyze, and share data in real time is known as the Internet of Things (IoT), and it aims to improve our quality of life [1]. In 1999, AUTO-ID Labs proposed the "Internet of Things" concept [2]. By 2025, there are expected to be over 25 billion connected devices, including houses, phones, automobiles, and factories; RFID, geolocation, and sensor networks are all emerging technologies that will continue to grow [3]. By improving the efficiency and quality of processes, IoT offers great potential for reducing costs. Industrial and consumer markets will benefit from the new sensors, embedded processors, and connectivity methods that will be developed [4]. The majority of applications, including healthcare, smart homes, smart agriculture, Industry 4.0 and factory automation, intelligent transportation systems, smart cities, infrastructure monitoring, the retail sector, environmental monitoring, smart water, and power grids, among others, are impacted by IoT [1]. Figure 1 shows the diversity of IoT applications. IoT network management has grown to be a very challenging task and, owing to the enormous increase in connected devices, the difficulty will only increase in networks beyond 5G, i.e., 6G. Researchers have proposed innovative IoT management solutions in response to these problems, addressing load balancing, energy management, security, scalability, and fault tolerance [5]. When connectivity and intelligent components take precedence over the physical component of the "object," the Internet of Things promotes the service industry. The IoT will increase the economic flexibility, personalization, and effectiveness of the physical environment [4]. High technology now plays a crucial role in enterprises as a competitive advantage due to the complexity and variety of digital transitions. Hence, accurately predicting new technologies and their future patterns at the right moment is crucial for business continuity, even though the future of technology is fraught with uncertainty and ambiguity because of extremely fast change [6].
Figure 1: Diversity of IoT applications [1]
Technology forecasting results in a preliminary appraisal of a technology's potential as well as a determination of its future sectors of use and prospects. Technology forecasting is a useful technique to support the management and planning of future research activities, since it not only helps scientists and technology strategists make scientific judgments but also aids researchers in understanding the path of technology development [7, 8]. As a result, when investing in a new technology, it is important to consider where it stands in terms of the technology life cycle [6]. Technology prediction is a current hot issue in technology management that benefits all sectors. Studies in this discipline offer a thorough, comprehensive look into potential advancements in a variety of human activities, frequently leading to significant disruptions in employment, personal life, corporate structures, and governmental policy [9]. A technological trend is seen as a consistently expanding field of technology with a particular pattern; to count as a trend, the pattern should have existed for a specific amount of time. Many techniques have been created to locate and predict such patterns. The conventional approach to identifying and projecting technological trends often relies on expert judgment and entails a time-consuming, expensive process influenced by subjective factors [10]. The direction and pace of technological progress are predicted through technology trend analysis. Many technologies, including fuel cells, food safety, optical storage, RFID, 3D TV, programming languages, operating systems, supercomputers, etc., have been analyzed using various technological forecasting techniques in the literature. The pattern of technological development aids in the analysis of technological innovation and market features [11]. One of these techniques is patent analysis, which many researchers have adopted [6]. For decision-makers, patents contain highly specific and important information. Details found in patent filings include the number of citations, title, abstract, and inventors; these data are used to perform various patent analyses [12]. Patents are significant sources of knowledge about emerging technologies and breakthroughs that frequently boost societal progress and economic performance, protect a country's or an organization's secret knowledge, and provide a long-term competitive edge [13]. Patent analysis may be employed to create corporate technology strategies and to assess a company's technological capacity as the foundation for corporate merger and acquisition plans [14]. Text analysis and clustering algorithms have been used to study patents due to their abundance and the fact that they contain both structured and unstructured information regarding technology [6]. The purpose of this research is therefore to examine IoT technology patents to identify the trends of this emerging technology by applying text mining, SNA, and community detection.
The structure of this paper is as follows: Section 2 provides a brief description of the necessary background, including IoT, patent analysis, SNA, community detection and text mining, and the technology life cycle. Section 3 presents the research method. The results of the IoT patent analysis and technology life cycle forecasting are discussed in Section 4. Finally, Section 5 concludes the paper with some discussion.
## 2 Literature review
In the last decade, a huge number of bibliographic works and patents have been published on IoT. Many efforts have been made to explain its layers and domains in light of its variety of applications. Innovation assessment and frontier technology detection have also attracted interest; such studies accelerate technology diffusion. Through this literature review, we aim to gain a sufficient understanding of the structure of IoT technology and of patent analysis methods such as SNA, a method for analyzing patent networks for innovation assessment.
**2.1. Internet of things**
The Internet of Things (IoT) is a technological platform that connects the real and virtual worlds via the Internet. IoT combines a variety of intelligent systems and smart gadgets to improve our lives [15]. Four elements make up a basic Internet of Things architecture: wired or wireless sensors and actuators, data gathering systems and gateways, edge infrastructure, and cloud platforms [16]. The IoT concept expands the current Internet network and denotes ubiquitous network-connected objects that serve as the connection between the physical and digital worlds [17]. The IoT is a collection of systems and techniques for exchanging or gathering data, and security is one of the many open issues it raises [18]. IoT utilizes devices with sensors that can communicate data to the cloud to perform data analytics and provide control choices for cyber-physical systems [19]. It began in 1998, and Kevin Ashton first used the term IoT in 1999 [20]. IoT applications are typically summed up using phrases with the prefix "smart," such as "smart supply chain," "smart cities," "smart health care," and so on [21]. Forecasts predict growth of almost 300%, from 8.7 billion connected devices in 2020 to more than 25 billion IoT devices in 2030. With more than 3 billion active devices in 2020, China was at the forefront of IoT application development. IoT devices are prevalent in every industrial sector and retail market; for instance, in 2020 the retail sector accounted for 60% of all IoT devices, and it is anticipated that this share will not change during the following 10 years. IoT services are being deployed in many sectors of society, such as application monitoring, energy management, medical systems, building automation, and transportation. Data management is crucial for IoT services, since the majority of them depend on a server to deliver seamless service to customers [22].
In the field of information and communication technology (ICT), the combination of the terms "internet" and "things" enables a remarkable degree of creativity. The first term directs attention to an "Internet-oriented vision" of the IoT, while the second directs attention to a "Things-oriented vision." Both are used to provide a framework that allows a vast array of diverse objects to be interconnected [23].
Figure 2 depicts the IoT concept from the three perspectives discussed above: internet-oriented, things-oriented (smart objects and sensors), and semantic-oriented (knowledge). Concerning these three visions, the essential ideas, methods, and standards are stated and grouped. This illustration makes it obvious that the three basic concepts come together to create the unique IoT paradigm [23]. Figure 3 shows the evolution of IoT technology [23].
Figure 2: The IoT vision from the aforementioned perspectives[23]
Figure 3: Evolution of IoT [23]

### 2.2. Patent analysis

Patents are a global source of technological knowledge and a quantifiable result of R&D activity. Patents integrate technical, legal, and economic information in a way that academic publications do not, making them richer sources of data than academic papers [24]. Three steps make up a patent analysis: information preparation, analysis, and knowledge discovery [25]. Class level and patent level are the two components of patent analysis [26]. Both structured and unstructured data are present in patent information. Examples of structured data include the patent number, International Patent Classification (IPC) numbers, filing date, inventor, etc. Unstructured data, which contains information with various forms and styles (such as headlines and abstracts in patent applications), can disclose fresh insights, including the popular technologies indicated by high-frequency terms that have not yet been fully utilized [27]. To analyze activities, trends, and technological gaps, the traditional patent analysis literature has generally employed bibliometric analysis to examine structured bibliographical data and patent citations [28]. Strategic planning, technology management, competition analysis, and R&D unit management have all been made possible by the study of information gleaned from patents. Hence, in the field of technological trend research and prediction, patent analysis has evolved into a strategic instrument. Patents, which provide up-to-date information on many technological fields, are among the finest resources for technology forecasting [6]. The patent life cycle outlines some of the tasks that can be automated, at least in part; the phrase "patent analysis" is frequently used to condense these tasks. From the literature, we identified the most popular tasks for automated patent analysis:

1. Supporting activities, such as pre-processing, collecting data for additional analysis, or translating patents into other languages;
2. Patent categorization, in which patent applications are organized hierarchically according to the invention's subject matter;
3. Patent retrieval, which includes prior art search, automated patent landscaping, infringement search, freedom-to-operate search, and passage retrieval;
4. Patent valuation and market value prediction, a cutting-edge line of study that examines the text and bibliographic information of patents to assess the validity of patent applications and is also used to solve regression problems and estimate market value;
5. Technology forecasting, which uses patents to analyze the technological environment and aids academics in identifying emerging or popular technologies;
6. Patent text generation, which automates the production of patent claims by using the format and design elements found in published patent filings;
7. Litigation analysis, which concerns legal disputes between businesses triggered by potentially conflicting patents;
8. Computer vision tasks, which use images and drawings from patent documents rather than text [29].

The stages of patent mining are shown in Figure 4:
## 3 Social Network Analysis (SNA)
SNA is a quantitative research technique that identifies how participants in a network interact with one another. A network allows for the simplification and visualization of many complicated systems. [30, 31], A study technique called social network analysis (SNA) examines the connections between various groupings of entities, including people and organizations, communities, businesses, nations, and other large collective groups. The main emphasis of network analysis is the phenomena or data that their connection models represent. [32, 33], A method and a collection of tools called social network analysis are used to analyze the relational features of networks. The responsibilities and positions of the network's members must be determined to apply the notions surrounding the behavior of networks. [34, 35] Freeman's "The History of Social Network Analysis" provided one of the most thorough summaries of the history of the development of SNA (Freeman 2004). Freeman offered "the history of social network
Figure 4: The stages of potent mining
analysis written from a social network viewpoint" by tracing the relationships between those engaged in the field's growth and highlighting key historical moments using an approach borrowed from the sociology of science. [36, 37], It is crucial to recognize the functions and places of the network's members in order to apply notions underlying the behavior of networks.[34], Among the methods used in social network analysis 1) Centrality: The centrality analysis is the process of determining how central a component is to a network. [38, 39], Measures of centrality seek to quantify the relative significance of the nodes. Given the descriptions provided above, it is important to investigate the relationship between the crucial and central nodes. As a result, a lot of academics look at the effectiveness of employing central nodes to discover key nodes. Degree centrality, closeness centrality, and betweenness centrality are the most used metrics in this area. [33, 40, 41]
* Closeness: The concept of closeness centrality (CC) is based on the proximity of nodes. A node is central when it lies a short distance from every other node in the network. The closeness centrality of a node is defined as follows [40, 42], shown in Equation 1: \[C_{C}(i)=\frac{n-1}{\sum_{j\in V,\,j\neq i}dis(i,j)}\tag{1}\]
* Betweenness: One of the most often used centrality metrics in the network literature is betweenness centrality (BC). It is based on a network's shortest paths: the most central nodes lie on the largest number of shortest paths. The betweenness centrality of a node is defined as follows [40], shown in Equation 2: \[C_{B}(i)=\sum_{j,k\in V,\,j\neq k\neq i}\frac{\sigma_{jk}(i)}{\sigma_{jk}}\tag{2}\]
* Eigenvector centrality: A node's eigenvector centrality is based on the idea that a node's centrality is proportional to the sum of its neighbors' centrality values. When a node is connected to several nodes with high eigenvector centrality, the node's eigenvector centrality increases [43]. Figure 5 shows a schematic view of betweenness, closeness, and eigenvector centrality [43, 44]; a code sketch for computing these measures follows.
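As an illustration of these measures (our sketch, not the study's code), all three centralities can be computed with the Python library networkx; the built-in karate-club graph below merely stands in for the IoT patent collaboration network analyzed in this work:

```python
import networkx as nx

# Stand-in graph; in this study, nodes would be inventors or assignees,
# with an edge between parties that co-appear on an IoT patent.
G = nx.karate_club_graph()

closeness = nx.closeness_centrality(G)      # Equation 1
betweenness = nx.betweenness_centrality(G)  # Equation 2
eigenvector = nx.eigenvector_centrality(G)

# Rank the five most central actors by betweenness.
print(sorted(betweenness, key=betweenness.get, reverse=True)[:5])
```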
#### 2.3.1 Community detection
Each community consists of a set of nodes that are highly connected among themselves but have few connections to other nodes in the network as a whole [45, 46]. Nodes in networks combine to form closely knit units known as network communities, clusters, or modules. The goal of network community detection is to identify such groups of functionally linked nodes in an unlabeled network [47, 48]. Community detection can reveal potential community structures in a network, which is useful in many different research fields, so it is widely used across a variety of networks. For instance, community detection in social networks may help platform service providers identify groups with similar interests, which helps recommend products to specific users. Community detection in citation networks helps locate new research groups and forecast trends in the field. Community detection in biological networks can help uncover the community structure and evolution of certain biomolecules [49-51]. The framework of the CDBNE model is shown in Figure 6:
### 2.4 Text mining
Figure 5: Betweenness, closeness, eigenvector[43]
Figure 6: The framework of the CDBNE model[49]
TM is described as a collection of approaches for investigating unstructured data sources and discovering potentially useful schemas, models, patterns, or laws in textual data. TM techniques have been effectively used in a variety of industries, including banking, education, and medicine [52]. Industry sectors that have adopted text mining include healthcare, government, education, and manufacturing. Text mining techniques include pattern matching, topic tracking, summarization, categorization, grouping relationships, and information visualization [53]. It may be applied in mathematics, knowledge acquisition, computer technology, linguistics, and natural language processing. Text mining is the process of examining and transforming a sizable volume of unstructured textual material into a format that can be used for analysis and the discovery of new, significant insights [52]. In this section, we outline the phases of a text-mining project, using a broad definition of the term. Although not all projects will adhere to this approach exactly, we think that the majority of text-mining projects will go through the stages shown in Figure 7 [54].
An essential area of natural language processing is text clustering. Given a text collection, it is defined as discovering groups of comparable texts. It has various practical uses. For instance, grouping a large volume of daily news into several topics lessens the burden of news organization and summarization. Finding user-interest patterns based on the grouping of the texts users are interested in is another popular application; this is an important step toward intelligent information filtering and recommendation. As a result, this task has received considerable attention [55]. Figure 8 illustrates contrastive learning for text representations [55].
Figure 7: Stages of text mining
Clustering text documents is a key strategy in the text mining field as well as in several machine learning and pattern recognition applications. It is a procedure that groups a collection of texts into hierarchies of semantic clusters. Information retrieval, document organization, and automated topic extraction are common uses of text document clustering [56]. The process of text document clustering includes the following steps [57, 58]; a code sketch follows the list:
* Data Collection: Collecting the text data you wish to cluster is the first step. This might take the shape of a single document or a substantial body of text.
* Data Preparation: The next step is preparing the data. This entails editing the gathered texts or documents to eliminate any extraneous characters, punctuation, and stop words. The text can also be normalized by lemmatizing, stemming, or converting all of the text to lowercase.
* Feature Extraction: The next stage is extracting features from the text. This entails transforming the text into a numerical representation that can be clustered. The Bag of Words model, in which each text is represented as a vector of word counts, is the most frequently used feature extraction technique. Word embeddings and Term Frequency-Inverse Document Frequency (TF-IDF) are other techniques.
* Clustering: The documents can be grouped into clusters based on the closeness of their characteristics once the feature space has been created using clustering techniques like K-means, hierarchical clustering, or density-based clustering.
* Cluster Evaluation: To make sure the clustering findings are accurate and valuable, their quality should be assessed. Evaluation measures include the Elbow method, the Davies-Bouldin index, and the silhouette score; the number of clusters can be chosen in this phase.
* Interpretation and Visualization: To understand the data, the clustering findings should then be visualized and interpreted. The subjects or themes present in each cluster can be visualized using methods like word clouds or topic modeling.
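As promised above, here is a minimal end-to-end sketch of these steps, assuming scikit-learn and toy placeholder documents in place of the actual patent corpus:

```python
# A minimal sketch of the text document clustering pipeline above,
# using scikit-learn. The sample documents are placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
from sklearn.metrics import davies_bouldin_score

docs = [
    "iot sensor network for smart home security",
    "blockchain based authentication for iot devices",
    "smart grid energy monitoring with wireless sensors",
    "machine learning for intrusion detection in iot",
]

# Feature extraction: TF-IDF with built-in stop-word removal and lowercasing
vectorizer = TfidfVectorizer(stop_words="english", lowercase=True)
X = vectorizer.fit_transform(docs)

# Clustering with k-means
k = 2
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)

# Cluster evaluation with the Davies-Bouldin index (lower is better)
print("labels:", km.labels_)
print("DB index:", davies_bouldin_score(X.toarray(), km.labels_))

# Interpretation: top TF-IDF terms per cluster centroid
terms = vectorizer.get_feature_names_out()
for c in range(k):
    top = km.cluster_centers_[c].argsort()[::-1][:5]
    print(f"cluster {c}:", [terms[i] for i in top])
```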
### Technology life cycle
Figure 8: Illustration of performing contrastive learning for text representations[55]
Every technological life cycle passes through the stages of introduction, growth, and saturation, as presented in Figure 9 [6].
The development of technology may be seen as a series of trajectories, each of which represents a stage in the technology life cycle (TLC) and differs in technological content and traits. In this regard, examining a TLC is a crucial starting point for understanding the many paths of technological advancement. The TLC concept was first suggested by Little (1981). Since then, most TLC analysis studies have been built on the premise that a technology, or a collection of technologies, has a life cycle comprising germination, development, maturity, death, and possibly resurrection [59]. One of the best-known techniques for technology forecasting is depicting the technology life cycle with an "S-shaped curve". Several models can produce an S-shaped curve, but the logistic model is the most suitable option [6]. Equation 3 shows the logistic model applied in this research.
\[y_{t}=\frac{k}{1+e^{-\left(\frac{t-a}{b}\right)}} \tag{3}\]
where \(t\) denotes time and \(y_{t}\) is the cumulative number of patents at time \(t\); \(a\) is the point of inflection, \(b\) is a model parameter that governs the shape of the curve, and \(k\) is the upper limit of \(y_{t}\), i.e. the ceiling for patent numbers [6].
These parameters can be obtained by least-squares fitting with statistical software such as Sigma Plot. Once the parameters are known, the technology life cycle stage is determined as follows [60]:

\[\begin{array}{ll}y_{t}/k<10\% & \text{technological demonstration (emerging) stage,}\\ 10\%\leq y_{t}/k<50\% & \text{growth stage,}\\ 50\%\leq y_{t}/k<90\% & \text{maturity stage,}\\ y_{t}/k\geq 90\% & \text{saturation stage.}\end{array}\]

Figure 9: Technology lifecycle[6]
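As a minimal sketch of this fitting-and-staging procedure in Python, using scipy's `curve_fit` in place of Sigma Plot and synthetic cumulative counts as placeholder data:

```python
# A minimal sketch of the least-squares logistic fit of Equation 3 and
# the y_t/k stage rule; the yearly counts below are synthetic placeholders.
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, k, a, b):
    """Cumulative patents y_t = k / (1 + exp(-(t - a) / b))."""
    return k / (1.0 + np.exp(-(t - a) / b))

years = np.arange(2008, 2024)
y = np.array([5, 9, 16, 30, 55, 95, 160, 250, 370, 500,
              640, 760, 850, 910, 950, 970], dtype=float)

# Initial guesses: k ~ max(y), a ~ midpoint year, b ~ 1
(k, a, b), _ = curve_fit(logistic, years, y, p0=[y.max(), years.mean(), 1.0])

# Stage classification from the y_t / k ratio
ratio = logistic(years[-1], k, a, b) / k
if ratio < 0.10:
    stage = "emerging"
elif ratio < 0.50:
    stage = "growth"
elif ratio < 0.90:
    stage = "maturity"
else:
    stage = "saturation"
print(f"k={k:.0f}, a={a:.1f}, b={b:.2f}, y/k={ratio:.2f} -> {stage}")
```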
### Related Works
In the literature, many surveys forecast future technology trends and challenges by investigating patents filed over time. The authors of [61], in "Analysis and Synthesis of Industry 4.0 research landscape using latent semantic analysis approach", discuss research dynamics and propose a taxonomy of the Industry 4.0 research landscape along with future research directions. A data-driven text mining approach, Latent Semantic Analysis (LSA), was used to review and extract knowledge from a large corpus of 503 abstracts of academic papers published in various journals and conference proceedings. The technique extracts several latent factors that characterize the emerging pattern of research, and a cross-loading analysis of high-loading papers is performed to identify semantic links between research areas and themes. The LSA results uncover 13 principal research areas and 100 research themes, with "smart factory" and "new business model" identified as the dominant research areas. A taxonomy is developed that contains five topical areas of the Industry 4.0 field.
Similar work was done in "Capturing the salient aspects of IoT research: A Social Network Analysis" [62]. The authors captured the intellectual structure of the IoT field and its research trends through quantitative and statistical analysis of research publications. Using Social Network Analysis, they surveyed 7,767 papers from 2015-2018 registered in the Web of Science (WoS) database; 798 papers from 2011 through 2014 served as a baseline for comparison. They conclude that the increasing number of research papers reflects research activity across the various subdomains of the field, that research activity in social science is essential for understanding the complexity of the human-machine interfaces IoT will bring, and that the study uncovers a structure of IoT research that cannot be visualized by mere inspection of individual papers. It also shows that research in this field must span different subdomains to create overall competency in IoT.
Another similar work, "Exploring technology opportunities and evolution of IoT-related logistics services with text mining" [63], surveyed the evolution of IoT technologies and services. The authors used data analysis and text mining techniques, technology opportunity analysis (TOA), and technology-service evolution analysis (TSEA) to review the data. Data were gathered from three types of sources: journal articles from Science Direct, IEEE Xplore, Taylor & Francis, Wiley, and Emerald Insight; patents from WIPO; and market reports and news from the IoT Solutions World Congress website. In total, 144 academic journal articles, 223 patents, and 36 market reports and news items were collected: 403 documents comprising 134,524 words. The results provide methodological guidelines for a comprehensive understanding of IoT-related logistics services. The authors of [6], in "Blockchain technology forecasting by patent analytics and text mining", surveyed blockchain patents. They used text mining and clustering methods to explore approximately 14,000 patents worldwide registered in the World Intellectual Property Organization (WIPO) database, with the aim of exploring blockchain trends by investigating blockchain technology and its classification. They concluded that the patents in the USA patent database are in the growth phase and most patents are in the financial sector, but that blockchain technology itself is in the emergence phase, still being evaluated by researchers and inventors.
Another similar work is [64]. In this research, titled "Identifying technological trajectories in the mining sector using patent citation networks", a patent citation network was used to survey technological change in the mining industry between 1970 and 2015, combining the WIPO and EPO-PATSTAT databases. The author considered two aspects of technical change that had been largely disregarded in extant research: the geographical patterns of inventions, and the role of key applicants in those patterns. The paper shows that the main mining patents and pioneer inventors in this field almost all originate from the USA, so the trajectories are geographically limited, and it illustrates that developing countries lag behind the technological frontier in mining. Furthermore, just a few applicant firms are responsible for most inventive activity, reflecting a highly concentrated oligopolistic structure and characterizing the trajectories as applicant-bounded.
"Investigating the Structure of the Internet of Things" is the title of [30]. The authors analyzed the network of co-occurring IPCs of IoT patents, applying social network analytics to a data set of 32,557 patents filed between April 1996 and June 2020, extracted from the Lens website, an open-source database chosen as the data source. Their research reveals the structure of the IoT innovation path, which can help create more applications and combinations of IoT with other fields, and provides a guideline for how innovation efforts should begin.
Finally, the research in [65], titled "Role and challenge of technology toward a smart sustainable city: Topic modeling, classification, and time series analysis using information and communication technology patent data", investigated the definition and classification of the essential technology groups that comprise a smart sustainable city (SSC) and explored patterns of technological growth by analyzing patents. The authors used topic modeling, network analysis (community analysis), machine learning classification, and multivariate time series analysis to survey 32,020 patents from the Patent Cooperation Treaty (PCT), the US Patent and Trademark Office (USPTO), and the patent offices of Europe, Korea, Japan, and China. The results reveal the role and direction of technology for SSCs and present a comprehensive methodology for analyzing technology using patents.
The specifications of the related works are summarized in Table 1.
## 3 Methodology
Figure 10 illustrates the three main stages of this research. Data gathering is the initial phase; keyword searches over the title, abstract, and claims of patent records are the most efficient way to find patents in a given subject. This stage uses the Lens.org patent index website, one of the largest and best-known sources of patent information worldwide for searching and exporting patent data [66].
The second phase of this study focuses on data pre-processing, whose aim is to prepare the data for network analysis and text mining. To create a patent collaboration network, a cooperation table like Table 2 must be generated. This table describes pairs of people or companies engaged in patent registration, with the source and target columns reflecting their respective roles. The weight column records the number of patents the two parties have registered jointly; for instance, the value three is entered if two parties have jointly registered three patents. (A construction sketch follows Table 2.)
\begin{table}
\begin{tabular}{l l c} \hline \hline
**Source** & **Target** & **Weight** \\ \hline
Zhang Xu & Mirfakhraei Khashayar & 4 \\
Zhang Zheng & Smith Ned M & 4 \\
Zimmerman Scott & Myles Phillip & 4 \\
Aboul-Magd Osama & Suh Jung Hoon & 3 \\
Abraham Robin & Mittal Vijay & 3 \\
Gupta Ojas & Tidemann Jeremy & 8 \\
Ho Jostine Fei & Badawy Mohamed M & 8 \\
Johnson Shikik & Barzegar Farhad & 8 \\ \hline \hline
\end{tabular}
\end{table}
Table 2: COOPERATION IN PATENT REGISTRATION
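To make the construction of Table 2 concrete, here is a minimal Python sketch of deriving the weighted edge list from raw patent records; the inventor lists and the output filename `iot_patent_edges.csv` are illustrative assumptions, not part of the original pipeline (which exports from Lens.org and imports into Gephi):

```python
# A minimal sketch of building the Table 2 cooperation edge list from raw
# patent records; the inventor lists below are illustrative placeholders.
from itertools import combinations
from collections import Counter

import pandas as pd

patents = [
    {"inventors": ["Zhang Xu", "Mirfakhraei Khashayar"]},
    {"inventors": ["Zhang Xu", "Mirfakhraei Khashayar"]},
    {"inventors": ["Smith Ned M", "Zhang Zheng", "Wang Jun"]},
]

# Each unordered pair of co-inventors on a patent contributes one unit of
# weight to the corresponding edge.
edges = Counter()
for p in patents:
    for a, b in combinations(sorted(p["inventors"]), 2):
        edges[(a, b)] += 1

table = pd.DataFrame(
    [(a, b, w) for (a, b), w in edges.items()],
    columns=["Source", "Target", "Weight"],
)
# Gephi can import this CSV directly as an undirected weighted edge list.
table.to_csv("iot_patent_edges.csv", index=False)
print(table)
```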
Figure 10: Research methodology
Table 2 serves as the input for creating and analyzing the network in this study, which is done with Gephi, a powerful tool for network formation and analysis [41].
Text mining of the patents requires pre-processing of the patent titles and abstracts. Stop words are removed from these fields to generate the text input used to create the vector matrix. The text vectors are then generated, and the patents are clustered with the k-means method, as shown in Figure 10.
The final step of the research is the presentation of the analysis findings. The constructed network is examined using community detection tools, and the patent clusters are examined using technology life cycle diagrams.
## 4 Results
### Patent Data Collecting
In this study, we extracted information on the Internet of Things from the Lens.org database, the largest global patent database, which provides free access to patent data [66]. To collect the data, we used the search terms "Internet of things" and "IoT", which returned 3,697 active patents. Figure 11 displays the distribution of these patents by year of publication and reveals a consistent increase in IoT-related patenting activity in recent years.
The results of the patent document clustering are applied to predicting the future trends of the technologies in each cluster. The results were obtained by fitting the logistic model in Sigma Plot, and the growth curve of IoT technology patents in the six clusters is shown in Figure 15.
### Technology Clustering
#### 4.2.1 Preprocessing Patent Titles and Abstracts
Word embedding is a technique used in natural language processing (NLP) to represent words as numeric vectors in a vector space for text analysis [67]. The vectors are structured so that their real-valued components capture the meaning of each word: words that are close to each other in the vector space are also similar in meaning. In this research, the title and abstract of each patent are mapped to an n-dimensional vector of real numbers using word embeddings [68]. This is achieved by a combination of language modeling and feature learning techniques, in which a lexicon of words or phrases is mapped to vectors of real numbers.
The specific text embedding used in this research is the "all-MiniLM-L6-v2" model, which is built on the BERT language model. This model was chosen for its large training dataset, which includes more than 73 million articles. Figure 12 illustrates the text embedding process [69].
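A minimal sketch of this embedding step, assuming the sentence-transformers package and placeholder strings in place of the real patent titles and abstracts:

```python
# A minimal sketch of embedding patent titles and abstracts with the
# all-MiniLM-L6-v2 sentence-transformers model named in the text.
from sentence_transformers import SentenceTransformer

texts = [
    "Method and system for secure IoT device provisioning. Abstract: ...",
    "Smart grid load balancing using distributed sensors. Abstract: ...",
]

model = SentenceTransformer("all-MiniLM-L6-v2")
# Each document is mapped to a 384-dimensional real-valued vector
embeddings = model.encode(texts, normalize_embeddings=True)
print(embeddings.shape)  # (2, 384)
```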
#### 4.2.2 Patent Clustering
The Davies-Bouldin (DB) index is a measure of cluster separation and compactness used to determine the optimal number of clusters in a dataset. A lower DB index indicates better clustering, meaning the clusters are more separated and more compact [70]. In this research, the DB index was computed for K values ranging from 2 to 11. The resulting values are plotted in Figure 13, and the optimal number of clusters was determined to be 6, the minimizer of the DB index.
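A minimal sketch of this model-selection step, assuming scikit-learn and random placeholder vectors in place of the real embedding matrix:

```python
# A minimal sketch of choosing the number of clusters with the
# Davies-Bouldin index over k = 2..11, as in Figure 13.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import davies_bouldin_score

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(300, 384))  # placeholder for real embeddings

scores = {}
for k in range(2, 12):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(embeddings)
    scores[k] = davies_bouldin_score(embeddings, labels)

best = min(scores, key=scores.get)  # lower DB index = better separation
print("Davies-Bouldin scores:", scores, "-> best k:", best)
```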
Figure 12: Process of text embeddings
The names of the clusters and the keywords of each cluster are shown in Table 3.
The name of each cluster was chosen by considering the most important words and their frequency in the word cloud:
* **Safety and Security** are two essential conditions that must be met to ensure the operation and availability of IoT-based applications. IoT security and safety are difficult to ensure, since the variety of IoT devices, communication interfaces, and applications results in a wide range of safety and security needs and raises the cost of establishing the relevant security measures [74]. Table 5 shows that this cluster's patent activity emerged in 2012 and entered the growth stage in 2018. The first cluster contains roughly 15.3% of the patents currently accessible
Figure 13: Davies Bouldin index values for 2 to 11 clusters
in the database. Its representative words (together with the keywords extracted from text mining and manual patent investigation) show that this cluster represents the security of IoT technologies. According to Table 5, this cluster entered maturity in 2021, is currently in its maturity stage, and will enter saturation in 2024.
* **Information services** assemble the sensor data that must be reported and delivered via the communication network to the IoT application for processing [72]. The second cluster consists of words related to the field of information services. According to Table 5, this cluster matured in 2020 and entered the saturation stage in 2022. The saturation stage is the last stage of a technology's life, in which the technology is replaced by another; it is characterized by a collapse of the technology's innovation potential, resulting in obsolescence after maturity [75]. This cluster contains 17.7% of the patents accessible in the database.
* **Public safety and Environmental monitoring** concern the protection of people's lives and property, an issue all nations should pay attention to. In terms of life, health, significant public and private property, and social production, public security refers to the safety of an indeterminate majority; natural catastrophes, law-and-order incidents, and crimes are its three basic dimensions [76]. Patents in the third cluster have been registered since 2014, and their growth began around 2018. According to Table 5, this cluster matured in 2021 and enters saturation in 2024. It contains 7.3% of the patents.
* **Collaborative Aware Systems** use the information gathered by the information aggregation services to decide what to do and how to do it. These technologies may be used in smart buildings, smart homes, smart transportation systems, and industrial automation [72]. Accounting for 28% of the patents, the fourth cluster pertains to collaborative aware systems technologies. According to Table 5, this cluster is in the maturity stage.
* A **Smart home/building** is a setting comparable to any other home with heating, lighting, and other technological devices, with the notable distinction that these may be operated remotely via a computer or a smartphone [73]. This cluster contains 13.2% of the patents accessible in the database.
* A **Smart grid** is an electricity distribution system that monitors and responds to local variations in usage using digital communications technology. It permits two-way communication and enables users to submit their electrical needs based on observations made with sensors [73]. The corresponding patents entered the growth stage in 2016 and reached the maturity stage in 2019; in 2021 they were expected to reach their upper limit and enter the saturation stage. This cluster contains 17.7% of the patents.
To retrieve results for the last six years, we identified the IoT-related patent data accordingly (since 2024 has not yet ended, patents from that year cannot be entered into the patent database).
Figure 15 shows the current position of IoT technology on the S-curve for each cluster; the current trend for each cluster is listed in Table 4. The previous positions and the predicted future trends for each cluster are shown in Table 5.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline
**Cluster name** & **Emerging** & **Growth** & **Maturity** & **Saturation** \\ \hline
Safety and Security & 2012 & 2018 & 2021 & 2024 \\
Information services & 2008 & 2018 & 2020 & 2022 \\
Public safety and Environment monitoring & 2014 & 2019 & 2021 & 2024 \\
Collaborative Aware Systems & 2016 & 2019 & 2021 & 2024 \\
Smart homes/buildings & 2015 & 2017 & 2019 & 2021 \\
Smart grid & 2014 & 2016 & 2019 & 2021 \\ \hline \hline
\end{tabular}
\end{table}
Table 5: RESULTS OF TECHNOLOGY STAGE DIVISION (the current stage is in gray cells)
Figure 14: IoT Patent Distribution
Figure 15: Growth curve of IoT technology patents in six clusters
According to Figure 15, each cluster represents a different set of IoT technologies, and each has its own position on the S-curve: Safety and Security (cluster 0), Public Safety and Environment Monitoring (cluster 2), and Collaborative Aware Systems (cluster 3) are in the maturity stage. These clusters were chosen for analysis because they represent important and emerging technologies in various fields that have the potential to transform industries, improve efficiency, and provide new opportunities for businesses. By analyzing the S-curve for each cluster, one can gain insight into the current state of the technology, its growth trajectory, and its potential for future development.
It is difficult to predict with certainty, but based on the S-curve analysis we can make some educated guesses. For example, the clusters in the maturity stage may have reached their peak growth potential and may experience slower growth in the future. Clusters in the saturation stage, on the other hand, may continue to grow, but at a slower pace due to market saturation and competition. Each cluster presents different opportunities and challenges. Companies may want to invest in clusters in the saturation stage if they see opportunities for differentiation or innovation that can help them stand out in a crowded market; for clusters in the maturity stage, investing in incremental improvements or finding new applications or use cases may be the best approach. Ultimately, the decision to invest in a particular cluster will depend on a company's strategic goals, risk tolerance, and assessment of the market opportunity.
### Network analysis

The network has 7,545 nodes and 17,247 edges. A patent may be registered by several people; each person or company that registers a patent creates a node, and the joint presence of two people, two companies, or a person and a company on a patent registration creates an edge. The collection of these connections across all patent registrations forms the network. Various measures of this graph are presented in Table 6.
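As a rough illustration of how the degree statistics in Table 6 can be computed, the following sketch loads a toy edge list into networkx; the sample rows reuse Table 2, and the authors compute these measures in Gephi on the full network:

```python
# A sketch of the degree statistics of Table 6 using networkx; the edge
# list reuses the Table 2 sample rows as placeholders.
import networkx as nx
import pandas as pd

edges = pd.DataFrame(
    [("Zhang Xu", "Mirfakhraei Khashayar", 4),
     ("Zhang Zheng", "Smith Ned M", 4),
     ("Gupta Ojas", "Tidemann Jeremy", 8)],
    columns=["Source", "Target", "Weight"],
)
G = nx.from_pandas_edgelist(edges, "Source", "Target", edge_attr="Weight")

n = G.number_of_nodes()
avg_degree = sum(d for _, d in G.degree()) / n
avg_weighted_degree = sum(d for _, d in G.degree(weight="Weight")) / n
print(f"{n} nodes, {G.number_of_edges()} edges")
print(f"average degree: {avg_degree:.3f}, "
      f"average weighted degree: {avg_weighted_degree:.3f}")
```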
\begin{table}
\begin{tabular}{l c p{9cm}} \hline \hline
**Measure** & **Value** & **Description** \\ \hline
Average degree & 4.572 & On average, each person cooperated with more than 4 people to register patents, indicating a high level of cooperation between individuals and companies in joint patent registration. \\
Average weighted degree & 6.83 & Counting repeated collaborations, each person collaborated with more than 6 people in registering patents. \\ \hline \hline
\end{tabular}
\end{table}
Table 6: MEASURES FOR ANALYSING THE NETWORK
Figure 16: Relationship mapping of patent cooperation
Table 7 lists some of the most important nodes according to their type of centrality; a computation sketch follows the table.
### Community detection
Another analysis we can apply to this network is community detection. A community is defined as a subset of nodes within the graph such that connections between those nodes are denser than their connections with the rest of the network. We applied this method to our graph; the results are shown in Table 8.
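The authors perform community detection in Gephi; as a hedged alternative, the sketch below uses networkx's greedy modularity maximization, a modularity-based method in the same family as Gephi's Louvain algorithm, on a toy stand-in for the cooperation network:

```python
# A sketch of the community detection step with networkx's greedy
# modularity algorithm; the toy graph stands in for the full network.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

G = nx.Graph()
G.add_weighted_edges_from([
    ("Zhu Ming", "Chen Jie", 4), ("Chen Jie", "Zhao Jian", 2),
    ("Zhu Ming", "Zhao Jian", 3), ("Smith Ned M", "Nolan Keith", 5),
    ("Smith Ned M", "Kelly Mark", 2), ("Nolan Keith", "Kelly Mark", 1),
    ("Zhao Jian", "Smith Ned M", 1),  # weak bridge between the groups
])

communities = greedy_modularity_communities(G, weight="weight")
for i, members in enumerate(communities):
    sub = G.subgraph(members)
    share = 100 * sub.number_of_nodes() / G.number_of_nodes()
    print(f"community {i}: {sorted(members)} "
          f"({share:.1f}% of nodes, density {nx.density(sub):.2f})")
```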
\begin{table}
\begin{tabular}{l l c p{8cm}} \hline \hline
**Node** & **Centrality type** & **Value** & **Description** \\ \hline
Smith Ned M & Degree centrality & 56 & The node with the highest degree has the most connections in the network: Smith Ned M cooperated with a total of 56 people in patenting the Internet of Things. \\
Wang Jun & Closeness centrality & 1.0 & Closeness centrality reflects the speed of diffusion in the network. Wang Jun has the highest closeness centrality, so an innovation or new IoT application originating there would spread fastest through the network. \\
Wang Jun & Betweenness centrality & 0.0011 & Betweenness centrality reflects how quickly a node can create access to other members of the network. Wang Jun has the highest betweenness centrality and thus the greatest capacity to broker cooperation with other members in registering IoT patents. \\
Zhu Ming & Eigenvector centrality & 0.0364 & A higher eigenvector value indicates that a node is connected to more important nodes. Here, Zhu Ming is connected to more important nodes than any other member. \\ \hline \hline
\end{tabular}
\end{table}
Table 7: MOST IMPORTANT NODES CONSIDERING THEIR CENTRALITY.
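For reference, the four centrality measures of Table 7 can be computed with networkx as sketched below; a built-in placeholder graph stands in for the full cooperation network, so this mirrors, rather than reproduces, the Gephi computations:

```python
# A sketch of the four centrality measures reported in Table 7, computed
# with networkx on a placeholder graph with community structure.
import networkx as nx

G = nx.karate_club_graph()  # placeholder for the cooperation network

measures = {
    "degree": dict(G.degree()),                     # number of co-inventors
    "closeness": nx.closeness_centrality(G),        # speed of diffusion
    "betweenness": nx.betweenness_centrality(G),    # brokerage role
    "eigenvector": nx.eigenvector_centrality(G, max_iter=1000),  # influence
}
for name, scores in measures.items():
    top = max(scores, key=scores.get)
    print(f"highest {name} centrality: node {top} ({scores[top]:.4f})")
```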
The names of the detected communities were selected based on their main members, and the following analyses are presented using those names:
* **USA companies**: This cluster includes several individuals who work at the law firm Fox Rothschild LLP, suggesting that they specialize in legal matters related to IoT patents; since all of these companies are American, the cluster is labeled USA companies. It has 155 nodes, comprising 2.05% of the whole network, and 404 edges with a density of about 0.034, indicating relatively weak internal connectivity.
* **Chinese companies**: This cluster includes several individuals with Chinese names, suggesting that they work for Chinese technology companies active in IoT patent registration. It contains 131 nodes, making up 1.74% of the network, and 740 edges with a density of about 0.087, indicating strong connectivity between its members.
* **Global companies**: This cluster includes individuals with a range of different last names, suggesting that they come from different companies and may have different areas of expertise related to IoT patents. It has 110 nodes, comprising 1.46% of the whole network, and 562 edges with a density of about 0.094, the strongest internal connectivity among the communities.
* **Spanish-speaking companies**: This cluster includes several individuals with Spanish-sounding last names, suggesting that they work for companies based in Spanish-speaking countries and specialize in IoT patents. It has 109 nodes, comprising 1.44% of the whole network, and 338 edges with a density of about 0.057, indicating moderate internal connectivity.

\begin{table}
\begin{tabular}{l l l p{7cm}} \hline \hline
**Color** & **Community name** & **Demographic information** & **Most important components** \\ \hline
Brown & USA companies & Nodes: 155 (2.05\%); Edges: 404 (2.34\%); Density: 0.034 & Trim Craig M, Fox Jeremy R, Bender Michael, Baughman Aaron K, Kwatra Shikhar, Rakshit Sarbajit K \\
Blue & Chinese companies & Nodes: 131 (1.74\%); Edges: 740 (4.29\%); Density: 0.087 & Zhu Ming, Chen Jie, Zhao Jian, Ao Weilin, Luo Jizhong, Qiu Ruicheng, Sun Zhongqiu, Wu Zhuokun, Xiao Chunhong, Xu Yifei \\
Green & Global companies & Nodes: 110 (1.46\%); Edges: 562; Density: 0.094 & Smith Ned M, Poornachandran Rajesh, Nolan Keith, Kelly Mark, Brady John, Nolan Michael, Burns Gregory \\
Red & Spanish-speaking companies & Nodes: 109 (1.44\%); Edges: 338; Density: 0.057 & Xu Hao, Rico Alvarino Alberto, Jiang Jing, Li Junyi, Luo Tao, Montojo Juan \\
— & Chinese tech companies & Nodes: 109 (1.44\%); Edges: 493; Density: 0.084 & Wang Wei, Miao Weiwei, Li Wei, Yao Jiming, Guo Yunfei \\
— & European companies & Nodes: 107 (1.42\%); Edges: 296; Density: 0.052 & — \\ \hline \hline
\end{tabular}
\end{table}
Table 8: COMMUNITY DETECTION ANALYSIS
* **Chinese tech companies**: This cluster also includes several individuals with Chinese names, suggesting that they work for Chinese technology companies active in IoT patent registration. It has 109 nodes, comprising 1.44% of the whole network, and 493 edges that are strongly related, with a density of about 0.084.
* **European companies**: This cluster includes individuals with English- and French-sounding last names, suggesting that they work for technology companies based in Europe that are active in IoT patent registration. It has 107 nodes, comprising 1.42% of the whole network, and 296 edges with a density of about 0.052, indicating neither weak nor particularly strong internal connectivity.
## 5 Conclusion
In conclusion, this research used patent analytics, text mining, and social network analysis to identify trends in Internet of Things (IoT) technology. The study collected 3,697 patents related to IoT technology over the past six years and categorized them into six technology clusters. The main people and companies were identified through the IoT patent registration cooperation network, and the main groups active in patent registration were identified using community detection. The patents were also examined for technical maturity using the Sigma Plot program.
Based on the findings, information services, smart homes/buildings, and smart grid technologies are in the saturation stage, while safety and security, public safety and environment monitoring, and collaborative aware systems are in the maturity stage. The research also identified high levels of cooperation between individuals and companies in joint patent registration, with an average degree of 4.572 and an average weighted degree of 6.83.
Overall, this research demonstrates the importance of technology forecasting in determining the future direction of technology, and the effectiveness of patent analytics, text mining, and social network analysis in identifying technology trends. The findings can be useful for organizations and individuals involved in the research and development of IoT technologies, as they provide insight into the current state of the technology and its potential future direction. Further research can build on this study by examining additional patent data and applying different analytical techniques to gain a deeper understanding of IoT technology trends.
2306.10391 | The Dirichlet problem for the minimal surface equation on unbounded
helicoidal domains of $\mathbb{R}^{m}$ | We consider a helicoidal group $G$ in $\mathbb{R}^{n+1}$ and unbounded
$G$-invariant $C^{2,\alpha}$-domains $\Omega\subset\mathbb{R}^{n+1}$ whose
helicoidal projections are exterior domains in $\mathbb{R}^{n}$, $n\geq2$. We
show that for all $s\in\mathbb{R}$, there exists a $G$-invariant solution
$u_{s}\in C^{2,\alpha}\left( \overline{\Omega}\right) $ of the Dirichlet
problem for the minimal surface equation with zero boundary data which
satisfies $\sup_{\partial\Omega}\left\vert \operatorname{grad}u_{s}\right\vert
=\left\vert s\right\vert $. Additionally, we provide further information on the
behavior of these solutions at infinity. | Ari Aiolfi, Caroline Assmann, Jaime Ripoll | 2023-06-17T17:03:26Z | http://arxiv.org/abs/2306.10391v1 | The Dirichlet problem for the minimal surface equation on unbounded helicoidal domains of \(\mathbb{R}^{m}\)
###### Abstract
We consider a helicoidal group \(G\) in \(\mathbb{R}^{n+1}\) and unbounded \(G\)-invariant \(C^{2,\alpha}\)-domains \(\Omega\subset\mathbb{R}^{n+1}\) whose helicoidal projections are exterior domains in \(\mathbb{R}^{n}\), \(n\geq 2\). We show that for all \(s\in\mathbb{R}\), there exists a \(G\)-invariant solution \(u_{s}\in C^{2,\alpha}\left(\overline{\Omega}\right)\) of the Dirichlet problem for the minimal surface equation with zero boundary data which satisfies \(\sup_{\partial\Omega}|\mathrm{grad}\,u_{s}|=|s|\). Additionally, we provide further information on the behavior of these solutions at infinity.
## 1 Introduction
The Dirichlet problem for the minimal surface equation (mse) in \(\mathbb{R}^{m}\), \(m\geq 2\), namely,
\[\left\{\begin{array}{c}\mathcal{M}\left(u\right):=\mathrm{div}\left(\frac{ \mathrm{grad}\,u}{\sqrt{1+|\mathrm{grad}\,u|^{2}}}\right)=0\text{ in }\Omega,\ u\in C^{2}\left(\Omega\right)\cap C^{0}\left( \overline{\Omega}\right)\\ u|_{\partial\Omega}=\varphi\end{array}\right. \tag{1}\]
where \(\Omega\subset\mathbb{R}^{m}\) is a \(C^{2}\)-domain and \(\varphi\in C^{0}\left(\partial\Omega\right)\) is given a priori, has been intensively explored in the last decades. One of the most general answers to the Dirichlet problem (1) for bounded domains was given by H. Jenkins and J. Serrin in [10]. They showed that (1) is solvable for arbitrary \(\varphi\in C^{0}\left(\partial\Omega\right)\) if and only if \(\Omega\) is mean convex. Moreover, they noted that if \(\varphi\in C^{2}\left(\partial\Omega\right)\), a bound on the oscillation of \(\varphi\) in terms of the second order norm of \(\varphi\) should be enough to ensure the solvability of (1) on arbitrary bounded domains (Theorem 2 of [10]).
The study of the Dirichlet problem for the mse on unbounded domains began with J. C. C. Nitsche on the so-called _exterior domains_, that is, when \(\mathbb{R}^{m}-\Omega\) is relatively compact (Section 4 of [16]). Since then, several authors have continued the investigation of the Dirichlet problem for the mse in exterior domains ([17], [14], [15], [3], [13], [18], [20], [1]).
The Dirichlet problem (1) for more general unbounded domains reduces, to the authors' knowledge, to a few works: when \(m=2\), Rosenberg and Sa Earp ([7]) proved that (1) has a solution for any continuous boundary data \(\varphi\) if \(\Omega\subset\mathbb{R}^{2}\) is a convex subset distinct from a half plane. In the case that \(\varphi\) is bounded, Collin and Krust ([3]) proved that if \(\Omega\) is a convex set distinct from a half plane then the solution is unique and, if \(\Omega\) is a half plane, then there is a unique solution with linear growth. For arbitrary dimension \(m\), Z. Jin ([11]) proved that (1) has a solution if \(\Omega\) is a mean convex domain contained in certain special parabola-shaped regions or in the complement of a cone in \(\mathbb{R}^{m}\). More recently, N. Edelen and Z. Wang proved that if \(\Omega\varsubsetneq\mathbb{R}^{n}\) is an open convex domain (e.g. a half-space) and \(\varphi\in C^{0}\left(\partial\Omega\right)\) is a linear function, then any solution of (1) must also be linear.
In our work we obtain an extension of the exterior Dirichlet problem for the minimal surface equation in \(\mathbb{R}^{m}\), \(m\geq 3\), in the following sense: we say that a domain is \(k\)-bounded, \(0\leq k\leq m\), if it is bounded in \(k\) directions of the space \(\mathbb{R}^{m}\) (by a direction we mean an equivalence class of parallel lines of \(\mathbb{R}^{m}\)). Thus, a domain of \(\mathbb{R}^{m}\) is relatively compact if and only if it is \(m\)-bounded. In our main results we study the Dirichlet problem for the mse, with zero boundary data, on certain domains \(\Omega\) of \(\mathbb{R}^{m}\), \(m\geq 3\), whose complement \(\mathbb{R}^{m}-\Omega\) is \((m-1)\)-bounded. We recall that Theorem 3.5 of [3] proves the existence of solutions of the Dirichlet problem for the mse on a strip, a special \(1\)-bounded domain of \(\mathbb{R}^{2}\), for arbitrary continuous bounded boundary data.
To state our theorems precisely, we need to recall a result of the third author and F. Tomi (Theorem 2 of [20]) which asserts the following: if \(\Omega\) is a \(G\)-invariant \(C^{2,\alpha}\)-domain, \(m\geq 3\), where \(G\) is a subgroup of \(ISO\left(\mathbb{R}^{m}\right)\) that acts freely and properly on \(\mathbb{R}^{m}\), and \(P\left(\Omega\right)\) is a bounded and mean convex domain, then (1) has a unique \(G\)-invariant solution for any \(G\)-invariant boundary data \(\varphi\in C^{2,\alpha}\left(\partial\Omega\right)\), where \(P:\mathbb{R}^{m}\longrightarrow\mathbb{R}^{m}/G\) is the projection through the orbits of \(G\) and \(\mathbb{R}^{m}/G\) is endowed with a metric such that \(P\) becomes a Riemannian submersion.
Related to the above result, we would like to mention that the use of a Lie group of symmetries to study minimal surfaces was first considered by Wu-Yi Hsiang and Blaine Lawson in [8]. Although proving distinct facts, we can say that Proposition 3 of [20] has the same spirit as Theorem 2 of [8]. Also related to these results is the idea of using Lie groups of symmetries to study minimal graphs (and constant mean curvature graphs and more general PDEs as well) as Killing graphs in warped products. This technique was first considered by Marcos Dajczer and the third author of this work in [4]; since then, many works have extended and generalized the results of [4], such as [5], [6] and [9].
Let \(\lambda\in\mathbb{R}\), \(a\in\mathbb{R}-\left\{0\right\}\) and \(i,j,k\in\left\{1,...,n+1\right\}\) be given, with \(i,j,k\) pairwise distinct. Consider the helicoidal-like group \(G\equiv G_{\lambda,a}^{i,j,k}\) in \(\mathbb{R}^{n+1}\), \(n\geq 2\), determined by the one-parameter subgroup of isometries \(G=\left\{\varphi_{t}\right\}_{t\in\mathbb{R}}\), where \(\varphi_{t}:\mathbb{R}^{n+1}\longrightarrow\mathbb{R}^{n+1}\) is given by
\[\varphi_{t}\left(p\right)=\alpha\left(p\right)e_{i}+\beta\left(p\right)e_{j}+ \gamma\left(p\right)e_{k}+\underset{l\neq i,j,k}{\sum}x_{l}e_{l}, \tag{2}\]
where \(p=\left(x_{1},...,x_{n+1}\right)\), \(\left\{e_{i}\right\}_{i=1}^{n+1}\) is the usual orthonormal basis of \(\mathbb{R}^{n+1}\),
\[\alpha\left(p\right)=x_{i}\cos\lambda t+x_{j}\sin\lambda t,\,\beta\left(p \right)=x_{j}\cos\lambda t-x_{i}\sin\lambda t\]
and \(\gamma\left(p\right)=x_{k}+at\).
Let \(\pi:\mathbb{R}^{n+1}\longrightarrow\left\{x_{k}=0\right\}\equiv\mathbb{R}^{n}\) be the _helicoidal_ projection determined by \(G\), that is,
\[\pi\left(p\right)=\widehat{\alpha}\left(p\right)e_{i}+\widehat{\beta}\left(p \right)e_{j}+\underset{l\neq i,j,k}{\sum}x_{l}e_{l}, \tag{3}\]
where
\[\widehat{\alpha}\left(p\right)=x_{i}\cos\frac{\lambda x_{k}}{a}-x_{j}\sin \frac{\lambda x_{k}}{a},\,\widehat{\beta}\left(p\right)=x_{j}\cos\frac{ \lambda x_{k}}{a}+x_{i}\sin\frac{\lambda x_{k}}{a}.\]
Set
\[M:=(\mathbb{R}^{n},\left\langle,\right\rangle_{G}), \tag{4}\]
where \(\left\langle\;,\;\right\rangle_{G}\) is the metric such that \(\pi\) becomes a Riemannian submersion (clearly \(G\) acts freely and properly on \(\mathbb{R}^{n+1}\), and \(\left\{x_{k}=0\right\}\equiv\mathbb{R}^{n}\) is a slice relative to the orbits \(Gp=\left\{\varphi_{t}\left(p\right),\,t\in\mathbb{R}\right\}\)). One may see that the map \(\psi:\mathbb{R}^{n+1}/G\rightarrow\mathbb{R}^{n}\) given by \(\psi=\pi\circ P^{-1}\) is well defined and is an isometry with respect to the metrics mentioned above. From the theory of isometric submersions, it follows that \(M\) is complete.
Let \(\Omega\subset\mathbb{R}^{n+1}\) be a \(G\)-invariant domain of class \(C^{2,\alpha}\) and set \(\Lambda=\pi\left(\Omega\right)\). Let \(d_{E}\left(p\right)=d_{E}\left(p,\partial\Omega\right)\), \(p\in\Omega\), be the (Euclidean) distance in \(\mathbb{R}^{n+1}\) to \(\partial\Omega\) and let \(d\left(q\right)=d_{M}\left(q,\partial\Lambda\right)\), \(q\in\Lambda\), be the distance in \(M\) to \(\partial\Lambda\). Given \(u\in C^{2}\left(\Omega\right)\cap C^{0}\left(\overline{\Omega}\right)\) and \(\varphi\in C^{0}\left(\partial\Omega\right),\,G\)-invariant functions, that is, \(u=v\circ\pi\) for some \(v\in C^{2}\left(\Lambda\right)\cap C^{0}\left(\overline{\Lambda}\right)\) and \(\varphi=\psi\circ\pi\) for some \(\psi\in C^{0}\left(\partial\Lambda\right)\), it follows from Proposition 3 of [20] that \(u\) is solution of (1) (relatively to \(m=n+1\)) if, and only if,
\[\mathfrak{M}\left(v\right):=\mathrm{div}_{M}\left(\frac{\nabla v}{\sqrt{1+\left|\nabla v\right|^{2}}}\right)-\frac{\left\langle\nabla v,J\right\rangle_{M}}{\sqrt{1+\left|\nabla v\right|^{2}}}=0\text{ in }\Lambda,\quad v|_{\partial\Lambda}=\psi, \tag{5}\]
where \(\nabla\) and \(\mathrm{div}_{M}\) are the gradient and divergence in \(M\), respectively, and
\[J\left(\pi\left(p\right)\right)=d\pi_{p}\left(\overrightarrow{H}_{Gp}\left(p \right)\right),\ p\in\mathbb{R}^{n+1}, \tag{6}\]
where \(\overrightarrow{H}_{Gp}\) is the mean curvature vector of the 1-dimensional submanifold \(Gp\) of \(\mathbb{R}^{n+1}\). Moreover, \(\left|\overline{\nabla}u\right|=\left|\nabla v\right|\circ\pi\), where \(\overline{\nabla}\) denotes the gradient in \(\mathbb{R}^{n+1}\).
When \(\Lambda\) is a bounded and mean convex domain, it is proved in [20], as mentioned above, that there is a unique \(G\)-invariant solution \(u\in C^{2,\alpha}\left(\overline{\Omega}\right)\) of (1) for all \(G\)-invariant \(\varphi\in C^{2,\alpha}\left(\partial\Omega\right)\). In this note we work with the case where \(\Lambda=\pi\left(\Omega\right)\) is an exterior domain in \(\mathbb{R}^{n}\) and the boundary data is zero. We observe that \(\Lambda\) is an exterior domain in \(M\) if and only if \(\Omega\) is \(n\)-bounded.
Figure 1: A \(G\)-invariant domain in \(\mathbb{R}^{3}\)
Regarding the condition of zero boundary data, we recall an old but, for our purposes, quite suggestive result of Osserman, which shows that in \(\mathbb{R}^{2}\) with the Euclidean metric there is a boundary data on a disk \(D\) for which the exterior Dirichlet problem in \(\mathbb{R}^{2}-D\) has no solution. Since \(K_{M}>0\), this strongly suggests that the zero boundary data condition cannot be dropped in our case either, although we do not have a counterexample.
In order to state our main results, we recall that, relative to an exterior domain \(\mathbb{R}^{n}-\overline{\mathfrak{B}}_{\rho}\left(p_{0}\right)\) in \(\mathbb{R}^{n}\), \(n\geq 2\), where \(\mathfrak{B}_{\rho}\left(p_{0}\right)\) is the open ball of \(\mathbb{R}^{n}\) centered at \(p_{0}\in\mathbb{R}^{n}\) and of radius \(\rho>0\), the function
\[v_{\rho}\left(p\right):=\rho\int_{1}^{\frac{\tau}{\rho}}\frac{dt}{\sqrt{t^{2 \left(n-1\right)}-1}},\,\tau=\left|p-p_{0}\right|,\,p\in\mathbb{R}^{n}- \mathfrak{B}_{\rho}\left(p_{0}\right), \tag{7}\]
is the solution of the Dirichlet problem (1) which satisfies
\[\lim_{p\rightarrow\partial\mathfrak{B}_{\rho}\left(p_{0}\right)}\left| \overline{\nabla}v_{\rho}\left(p\right)\right|=\infty,\,\lim_{\left|p\right| \rightarrow\infty}\left|\overline{\nabla}v_{\rho}\left(p\right)\right|=0\]
(a half-catenoid). If \(n\geq 3\), \(v_{\rho}\) is bounded and its height at infinity, which we denote by \(h\left(n,\rho\right)\), is
\[h\left(n,\rho\right)=\rho h\left(n,1\right)=\rho\int_{1}^{\infty}\frac{dt}{ \sqrt{t^{2\left(n-1\right)}-1}}. \tag{8}\]
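As a quick numerical illustration of (7)-(8) (a sketch of ours, in Python with scipy, and not part of the original text), the height at infinity \(h\left(n,1\right)\) can be evaluated by quadrature; by (8), \(h\left(n,\rho\right)=\rho\,h\left(n,1\right)\), so only the case \(\rho=1\) needs to be computed:

```python
# Numerical evaluation of the half-catenoid height at infinity h(n, rho),
# equation (8): h(n, rho) = rho * int_1^infty dt / sqrt(t^(2(n-1)) - 1).
import numpy as np
from scipy.integrate import quad

def h(n, rho=1.0):
    # For n >= 3 the integrand has an integrable singularity at t = 1
    # and decays like t^{-(n-1)} at infinity, so the integral converges.
    integrand = lambda t: 1.0 / np.sqrt(t**(2 * (n - 1)) - 1.0)
    value, _ = quad(integrand, 1.0, np.inf)
    return rho * value

for n in (3, 4, 5, 6):
    print(n, h(n))  # h(n, 1) decreases as the dimension n grows
```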
In all of the results from now on, \(G\equiv G_{\lambda,a}^{i,j,k}\) is as defined in (2), with \(\lambda,a,i,j,k\) fixed.
We prove:
**Theorem 1**: _Let \(\Omega\subset\mathbb{R}^{n+1}\), \(n\geq 2\), be a \(G\)-invariant \(C^{2,\alpha}\)-domain such that \(\pi\left(\Omega\right)\) is an exterior domain of \(\mathbb{R}^{n}=\left\{x_{k}=0\right\}\), where \(G\) and \(\pi\) are as defined in (2) and (3), respectively. Let \(\varrho>0\) be the radius of the smallest geodesic ball of \(\mathbb{R}^{n}\) which contains \(\partial\pi\left(\Omega\right)\), centered at the origin of \(\mathbb{R}^{n}\) if \(\lambda\neq 0\). Then, given \(s\in\mathbb{R}\), there is a \(G\)-invariant solution \(u_{s}\in C^{2,\alpha}\left(\overline{\Omega}\right)\) of the Dirichlet problem (1) with \(u_{s}|_{\partial\Omega}=0\), such that: i) \(\sup_{\overline{\Omega}}\left|\overline{\nabla}u_{s}\right|=\sup_{\partial\Omega}\left|\overline{\nabla}u_{s}\right|=\left|s\right|\); ii) \(u_{s}\) is unbounded if \(n=2\) and \(s\neq 0\) and, if \(n\geq 3\), either_
\[\sup\left|u_{s}\right|\leq h\left(n,\varrho\right)\]
_or there is a complete, non-compact, properly embedded \(n\)-dimensional submanifold \(N\subset\Omega\), such that_
\[\lim_{d_{E}\left(p\right)\rightarrow+\infty}u_{s}|_{N}\left(p\right)=h\left(n,\varrho\right),\]
_where \(h\left(n,\varrho\right)\) is given by (8); iii) \(u_{s}\) satisfies_
\[\lim_{d_{E}(p)\rightarrow\infty}\left|\overline{\nabla}u_{s}\left(p\right) \right|=0\]
_if \(\lambda=0\) or \(s=0\) or, if \(\lambda\neq 0\), \(3\leq n\leq 6\) and \(u_{s}\) is bounded._
Additional information on the set of \(G\)-invariant solutions of the Dirichlet problem (1) is also obtained under the assumption that \(M-\pi\left(\Omega\right)\) satisfies the interior sphere condition of radius \(r>0\), that is, for each \(q\in\partial\pi\left(\Omega\right)\) there is a geodesic sphere \(S_{q}\) of \(M\) of radius \(r\) contained in \(M-\pi\left(\Omega\right)\) such that \(S_{q}\) is tangent to \(\partial\pi\left(\Omega\right)\) at \(q\), and \(r\) is maximal with this property.
Given \(a\in\mathbb{R}-\left\{0\right\}\), \(\lambda\in\mathbb{R}\), \(n\geq 3\) and \(r>0\) set
\[C=C\left(r,n,\lambda,a\right):=\frac{2\left|a\right|\left(n-1\right)+\left| \lambda\right|r}{2\left|a\right|r}. \tag{9}\]
Let \(\varsigma>C\) be the solution of the equation
\[\cosh\left(\frac{\mu}{\sqrt{\mu^{2}-C^{2}}}\right)=\frac{\mu}{C},\,\mu>C, \tag{10}\]
and set
\[\mathcal{L}=\mathcal{L}\left(r,n,\lambda,a\right):=\left\{\begin{array}{l} \frac{1}{\sqrt{\varsigma^{2}-C^{2}}}\text{ if }\lambda\neq 0\\ h\left(n,r\right)\text{ if }\lambda=0\end{array}\right., \tag{11}\]
where \(h\left(n,r\right)\) is given by (8).
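Since \(\varsigma\) is defined only implicitly by (10), the constant \(\mathcal{L}\) must in general be evaluated numerically when \(\lambda\neq 0\). Note that the substitution \(x=\mu/C\) turns (10) into \(\cosh\left(x/\sqrt{x^{2}-1}\right)=x\), so the rescaled root is independent of \(C\). The following is a minimal sketch of this computation (Python with scipy, our own tooling choice, not part of the original text):

```python
# Solving equation (10) numerically and evaluating the bound L of (11).
# With x = mu / C, equation (10) becomes cosh(x / sqrt(x^2 - 1)) = x,
# so the rescaled root x* is universal and sigma = C * x*.
import numpy as np
from scipy.optimize import brentq

def script_L(r, n, lam, a):
    if lam == 0:
        raise NotImplementedError("equation (11) prescribes h(n, r) here")
    C = (2 * abs(a) * (n - 1) + abs(lam) * r) / (2 * abs(a) * r)  # eq. (9)
    f = lambda x: np.cosh(x / np.sqrt(x * x - 1.0)) - x
    x_star = brentq(f, 1.01, 10.0)  # f > 0 near x = 1, f < 0 at x = 10
    sigma = C * x_star              # the root varsigma of (10)
    return 1.0 / np.sqrt(sigma**2 - C**2)

print(script_L(r=1.0, n=3, lam=1.0, a=1.0))
```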
**Theorem 2**: _Let \(\Omega\subset\mathbb{R}^{n+1}\), \(n\geq 3\), be a \(G\)-invariant and \(C^{2,\alpha}\)-domain such that \(\pi\left(\Omega\right)\) is an exterior domain of \(\mathbb{R}^{n}=\left\{x_{k}=0\right\}\), where \(G\) and \(\pi\) are as defined in (2) and (3), respectively. Assume that \(M-\pi\left(\Omega\right)\) satisfies the interior sphere condition of radius \(r>0\), where \(M\) is the \(n\)-dimensional Riemannian manifold given by (4). Let \(\mathcal{L}=\mathcal{L}\left(r,n,\lambda,a\right)\) be as defined in (11), where \(\lambda\) and \(a\) are given in the definition of \(G\). Then, given \(c\in\left[0,\mathcal{L}\right]\), there is a \(G\)-invariant solution \(u_{c}\in C^{2}\left(\Omega\right)\cap C^{0}\left(\overline{\Omega}\right)\) of (1) with \(u_{c}|_{\partial\Omega}=0\), such that_
\[\underset{d_{E}(p)\rightarrow\infty}{\lim}u_{c}\left(p\right)=c.\]
_In particular, if \(c\in\left[0,\mathcal{L}\right)\) then \(u_{c}\in C^{2,\alpha}\left(\overline{\Omega}\right)\)._
We note that our approach is not applicable to general boundary data. Moreover, we were not able to prove that the solutions \(u_{s}\) obtained in Theorem 1 have a limit at infinity; this could be the subject of future research.
## Preliminaries
We first observe that the maximum and comparison principles apply to the PDE given in (5) (see Section 3 of [20]).
In this section we give further information on \(M\) and provide the basic results needed to construct barriers for the Dirichlet problem (5) when the boundary data is zero. The indices \(i\) and \(j\) retain the meaning given in (2).
**Lemma 3**: _Let \(G\) be as defined in (2). Given \(p=\left(x_{1},...,x_{n+1}\right)\in\mathbb{R}^{n+1}\), the orbit \(Gp\) has constant curvature_
\[H_{Gp}=\frac{\lambda^{2}\sqrt{x_{i}^{2}+x_{j}^{2}}}{\lambda^{2}(x_{i}^{2}+x_{j }^{2})+a^{2}}. \tag{12}\]
_In particular,_
\[\underset{p\in\mathbb{R}^{n+1}}{\sup}H_{Gp}=\frac{\left|\lambda\right|}{2 \left|a\right|}, \tag{13}\]
_and the supremum is attained, if \(\lambda\neq 0\), at those orbits through the points \(p\) in \(\mathbb{R}^{n+1}\) such that \(\left|\left(x_{i},x_{j}\right)\right|=\left|a\right|/\left|\lambda\right|\)._
**Proof.** An arc length parametrization of \(Gp\) is given by

\[\gamma_{p}(s)=F\left(p\right)e_{i}+G\left(p\right)e_{j}+H\left(p\right)e_{k}+\sum_{l\neq i,j,k}x_{l}e_{l},\]
where
\[F\left(p\right)=x_{i}\cos(A\left(p\right)s)+x_{j}\sin(A\left(p\right)s),\,G \left(p\right)=x_{j}\cos(A\left(p\right)s)-x_{i}\sin(A\left(p\right)s)\]
and
\[H\left(p\right)=x_{k}+\frac{aA\left(p\right)s}{\lambda},\]
with
\[A(p)=\frac{\lambda}{\sqrt{\lambda^{2}(x_{i}^{2}+x_{j}^{2})+a^{2}}}. \tag{14}\]
The mean curvature vector of \(Gp\) is then
\[\overrightarrow{H}_{Gp}(\gamma_{p}(s))=\gamma_{p}^{\prime\prime}(s)=-A^{2} \left(p\right)\left[F\left(p\right)e_{i}+G\left(p\right)e_{j}\right], \tag{15}\]
and, consequently, the mean curvature of \(Gp\) in \(\mathbb{R}^{n+1}\) is given by (12). Since the mean curvature of the orbit \(Gp\) only depends on the Euclidean distance of \(x_{i}e_{i}+x_{j}e_{j}\) to the origin, setting \(\sigma\left(p\right)=\left|\left(x_{i},x_{j}\right)\right|\), we have that \(\xi:\left[0,\infty\right)\longrightarrow\mathbb{R}\) given by
\[\xi\left(\sigma\right)=\frac{\lambda^{2}\sigma}{\lambda^{2}\sigma^{2}+a^{2}}, \tag{16}\]
has an absolute maximum point at \(\sigma_{0}=\left|a\right|/\left|\lambda\right|\) if \(\lambda\neq 0\), and the result follows.
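Since only the coordinates \(x_{i}\), \(x_{j}\), \(x_{k}\) move along the orbit, the computation above can be checked symbolically in these three coordinates; the following sketch (Python with sympy, a verification aid of ours and not part of the original proof) confirms that \(\gamma_{p}\) has unit speed and that \(\left|\gamma_{p}^{\prime\prime}\right|\) agrees with (12):

```python
# Symbolic check of Lemma 3: gamma_p has unit speed and |gamma_p''|
# equals the curvature given in equation (12).
import sympy as sp

s, lam, a, xi, xj, xk = sp.symbols('s lambda a x_i x_j x_k', real=True)
A = lam / sp.sqrt(lam**2 * (xi**2 + xj**2) + a**2)   # equation (14)

F = xi * sp.cos(A * s) + xj * sp.sin(A * s)
G = xj * sp.cos(A * s) - xi * sp.sin(A * s)
H = xk + a * A * s / lam           # the factor lam cancels symbolically
gamma = sp.Matrix([F, G, H])

speed2 = sp.simplify(gamma.diff(s).dot(gamma.diff(s)))
curv = sp.simplify(sp.sqrt(gamma.diff(s, 2).dot(gamma.diff(s, 2))))

print(speed2)  # expected: 1
print(curv)    # expected (up to algebraic form):
               # lam**2*sqrt(x_i**2 + x_j**2)/(lam**2*(x_i**2 + x_j**2) + a**2)
```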
Since \(\pi:\mathbb{R}^{n+1}\longrightarrow M\) is a Riemannian submersion, given two orthogonal vector fields \(X,Y\in\chi\left(M\right)\) and their respective horizontal lifts \(\overline{X},\overline{Y}\), we know that
\[K_{M}\left(X,Y\right)=K_{\mathbb{R}^{n+1}}\left(\overline{X},\overline{Y} \right)+\frac{3}{4}\left|\left[\overline{X},\overline{Y}\right]^{v}\right|_{ \mathbb{R}^{n+1}}^{2}\]
where \(K\) and \(\left[\overline{X},\overline{Y}\right]^{v}\) denote, respectively, the sectional curvature and the vertical component of \(\left[\overline{X},\overline{Y}\right]\), that is, the component tangent to the orbits \(Gp\), \(p\in\mathbb{R}^{n+1}\). As \(K_{\mathbb{R}^{n+1}}\left(\overline{X},\overline{Y}\right)=0\), it follows that \(K_{M}\left(X,Y\right)\geq 0\) and, therefore, \(Ric_{M}\geq 0\) (straightforward but rather lengthy calculations show that, in fact, \(K_{M}>0\) with \(K_{M}\to 0\) at infinity).
**Lemma 4**: _Let \(\Lambda\) be an exterior domain in \(M\). Denote by \(\nu\) the horizontal lift of \(\nabla d\), where \(d=d_{M}\left(.,\partial\Lambda\right)\). Then_
\[\left\langle\nabla d,J\right\rangle_{M}\circ\pi=\left\langle\nu,\overrightarrow {H}_{G}\right\rangle \tag{17}\]
_where \(\left\langle,\right\rangle\) is the Euclidean metric and \(J\) is given by (6). In particular_
\[-H_{Gp}\left(p\right)\leq\left\langle\nabla d,J\right\rangle_{M}\left(\pi \left(p\right)\right),\]
_for all \(p\in\pi^{-1}\left(\Lambda\right)\), where \(H_{Gp}\) is given by (12)._
**Proof.** As \(\overrightarrow{H}_{G}\) and \(\nu\) are the horizontal lifts of \(J\) and \(\nabla d\), respectively, we have \(J\left(\pi\left(p\right)\right)=d\pi_{p}\left(\overrightarrow{H}_{Gp}\left(p\right)\right)\) and \(\nabla d\left(\pi\left(p\right)\right)=d\pi_{p}\left(\nu\left(p\right)\right)\). Since \(\pi\) is a Riemannian submersion,
\[d\pi_{p}\big{|}_{\left[T_{p}\mathbb{R}^{n+1}\right]^{h}}:\left[T_{p}\mathbb{R }^{n+1}\right]^{h}\longrightarrow T_{\pi(p)}M\]
is an isometry, where \(\left[T_{p}\mathbb{R}^{n+1}\right]^{h}\) denotes the horizontal subspace relative to \(Gp\) at \(p\), and from this we have (17). In particular, \(\left\langle\nu,\overrightarrow{H}_{Gp}\right\rangle\) is constant along \(Gp\). Note that \(\left|\nu\right|=1\) since \(\left|\nabla d\right|=1\). Thus
\[-H_{Gp}\left(p\right)=-\left|\overrightarrow{H}_{Gp}\left(p\right)\right|\leq \left\langle\nu,\overrightarrow{H}_{Gp}\right\rangle\left(p\right)\]
and the result follows.
**Proposition 5**: _Let \(G\), \(\pi\) and \(M=\left(\mathbb{R}^{n},\left\langle,\right\rangle_{G}\right)\) be as defined in (2), (3) and (4), respectively, and assume \(n\geq 3\) and \(\lambda\neq 0\). Let \(o\) be an arbitrary but fixed point of \(M\) and let \(\Lambda=M-\overline{B_{r}(o)}\), where \(B_{r}(o)\) is the open geodesic ball of \(M\) of radius \(r\) centered at \(o\). Let \(b\in\mathbb{R}\) satisfy \(b>C\), where \(C\) is given by (9). Consider \(\psi:\left[0,\infty\right)\longrightarrow\mathbb{R}\) given by_
\[\psi\left(t\right)=\frac{1}{b}\cosh^{-1}\left(1+bt\right) \tag{18}\]
_and \(\Lambda_{0}:=\left\{q\in\Lambda;d\left(q\right)\leq t_{0}\right\}\), where_
\[t_{0}=\frac{b-C}{bC}. \tag{19}\]
_Then \(w=\psi\circ d:\Lambda_{0}\rightarrow\mathbb{R}\) is such that \(w\in C^{2}\left(\Lambda_{0}\right)\cap C^{0}\left(\overline{\Lambda}_{0}\right)\), \(w|_{\partial\Lambda}=0\), \(w>0\) on \(\Lambda_{0}\),_
\[\lim_{d\left(q\right)\to 0}\left|\nabla w\left(q\right)\right|=+\infty\]
_and \(\mathfrak{M}\left(w\right)\leq 0\) on \(\Lambda_{0}\), where \(\mathfrak{M}\) is the operator defined in (5)._
**Proof.** Let \(\varphi\in C^{2}(\left(0,\infty\right))\cap C^{0}\left(\left[0,\infty\right)\right)\) be a function to be determined _a posteriori_ and consider the function \(w:\Lambda\subset M\longrightarrow\mathbb{R}\) given by \(w\left(q\right)=\left(\varphi\circ d\right)\left(q\right)\). Straightforward calculations give us that, in \(\Lambda\),
\[div_{M}\left(\frac{\nabla w}{\sqrt{1+\left|\nabla w\right|^{2}}}\right)=g \left(d\right)\Delta d+g^{\prime}\left(d\right)\]
and
\[\frac{1}{\sqrt{1+\left|\nabla w\right|^{2}}}\left\langle\nabla w,J\right\rangle _{M}=g\left(d\right)\left\langle\nabla d,J\right\rangle_{M},\]
where
\[g\left(d\right):=\frac{\varphi^{\prime}\left(d\right)}{\sqrt{1+\left[\varphi ^{\prime}\left(d\right)\right]^{2}}}, \tag{20}\]
\(\Delta\) is the Laplacian in \(M\) and "\({}^{\prime}\)" means \(\frac{\partial}{\partial d}\). Thus, \(\mathfrak{M}\left(w\right)\leq 0\) in \(\Lambda\) if and only if
\[g\left(d\right)\Delta d+g^{\prime}\left(d\right)-g\left(d\right)\left\langle \nabla d,J\right\rangle_{M}\leq 0\text{ in }\Lambda. \tag{21}\]
From the Laplacian Comparison Theorem, since \(Ric_{M}\geq 0\) and \(\dim M=n\), we have
\[\Delta d\left(q\right)\leq\frac{n-1}{d\left(q\right)+r}\leq\frac{n-1}{r},\,q \in\Lambda. \tag{22}\]
Now, we assume that our \(\varphi\) satisfies \(\varphi\left(0\right)=0\) and \(\varphi^{\prime}\left(d\right)>0\) for \(d>0\) (consequently, \(\varphi\left(d\right)>0\) for \(d>0\)). From (20), it follows that \(g\left(d\right)>0\) for \(d>0\) and, from (22), we conclude that if
\[\frac{\left(n-1\right)g\left(d\right)}{r}+g^{\prime}\left(d\right)-g\left(d \right)\left\langle\nabla d,J\right\rangle_{M}\leq 0\text{ in }\Lambda \tag{23}\]
then we have (21). From Lemma 4 and (13), we see that
\[-\frac{\left|\lambda\right|}{2\left|a\right|}\leq-H_{Gp}\left(p\right)\leq \left\langle\nabla d,J\right\rangle_{M}\circ\pi\left(p\right)\]
and, then, if
\[\frac{g^{\prime}\left(d\right)}{g\left(d\right)}\leq-C\text{ in }\Lambda, \tag{24}\]
where \(C\) is given by (9), then we have (23). From (20), we see that (24) is equivalent to
\[\frac{\varphi^{\prime\prime}\left(d\right)}{\varphi^{\prime}\left(d\right)}\leq-C\left(1+\left[\varphi^{\prime}\left(d\right)\right]^{2}\right)\text{ in }\Lambda. \tag{25}\]
We will assume from now on that our \(\varphi\) satisfies
\[\underset{d\to 0}{\lim}\varphi^{\prime}\left(d\right)=+\infty.\]
A function \(\varphi\) satisfying all the requirements imposed so far is given by
\[\varphi\left(t\right)=\alpha\cosh^{-1}\left(1+bt\right)\]
\(t\geq 0\), with \(\alpha\), \(b\) positive constants to be determined, where, here, \(t=d\left(q\right)\), \(q\in\Lambda\). For such a \(\varphi\), we see that (25) is equivalent to
\[\frac{-b\left(1+bt\right)}{\left(\left(1+bt\right)^{2}-1+\alpha^{2}b^{2} \right)}\leq-C. \tag{26}\]
We assume \(\alpha,b\) such that \(\alpha b=1\). Thus, we have (26) if
\[t\leq\frac{b-C}{bC},\,C<b.\]
Thus, assuming \(b>C\), setting
\[t_{0}:=\frac{b-C}{bC},\]
considering the neighborhood of \(\partial\Lambda\) in \(\Lambda\) given by
\[\Lambda_{0}:=\left\{q\in\Lambda;d\left(q\right)\leq t_{0}\right\},\]
and the function
\[\psi\left(t\right)=\frac{1}{b}\cosh^{-1}\left(1+bt\right),\,t\geq 0,\]
we have that
\[w\left(q\right)=\psi\circ d\left(q\right),\,q\in\Lambda_{0}, \tag{27}\]
satisfies
\[\mathfrak{M}\left(w\right)\leq 0\]
in \(\Lambda_{0}\) (note that \(\exp_{\partial\Lambda}:\partial\Lambda\times\left[0,t_{0}\right]\longrightarrow\Lambda_{0}\), \(\exp_{\partial\Lambda}\left(p,t\right)=\exp_{p}t\eta\left(p\right)\), where \(\eta\) is the unit vector field normal to \(\partial\Lambda\) pointing into \(\Lambda\), is a diffeomorphism, since \(\Lambda\) is the exterior of a geodesic ball in \(M\)). The other conclusions follow directly from the definition of \(\psi\).
**Corollary 6**: _Assume the same hypotheses of Proposition 5. Let \(\varsigma>C\) be the solution of equation (10), where \(C=C\left(r,n,\lambda,a\right)\) is given by (9). The function \(w\) given in Proposition 5 with the largest height at \(\partial\Lambda_{0}-\partial\Lambda\) is obtained by taking \(b=\varsigma\). In particular, for such \(w\),_
\[\underset{\overline{\Lambda}_{0}}{\sup}w=\underset{\partial\Lambda_{0}- \partial\Lambda}{\sup}w=\mathcal{L},\]
_where \(\mathcal{L}\) is given by (11) and, defining \(W:\overline{\Lambda}\subset M\longrightarrow\mathbb{R}\) by_
\[W\left(q\right)=\left\{\begin{array}{l}w\left(q\right)\text{, if }q\in \Lambda_{0}\\ \mathcal{L}\text{ if }q\in\Lambda-\Lambda_{0}\end{array}\right., \tag{28}\]
_we have that \(W\in C^{0}\left(\overline{\Lambda}\right)\) is radial with respect to the point \(o\), \(\mathfrak{M}\left(W\right)\leq 0\) on \(\Lambda-\partial\Lambda_{0}\), \(W|_{\partial\Lambda}=0\), with_
\[\underset{d\left(q\right)\to 0}{\lim}\left|\nabla W\left(q\right)\right|=+\infty.\]
**Proof.** Given \(\mu>C\), take \(b=\mu\) in Proposition 5. The corresponding \(t_{0}\) is
\[t_{0}=\frac{\mu-C}{\mu C}\]
and, at \(t_{0}\), from (27), we have \(\psi\left(t_{0}\right)=\mu^{-1}\cosh^{-1}\left(\mu C^{-1}\right)\). The function
\[f\left(\mu\right)=\frac{1}{\mu}\cosh^{-1}\left(\frac{\mu}{C}\right),\,\mu>C,\]
clearly satisfies \(f\left(\mu\right)\to 0\) when \(\mu\to C\) and \(f\left(\mu\right)\to 0\) when \(\mu\rightarrow+\infty\), and the absolute maximum of \(f\) in \(\left(C,\infty\right)\) is reached at the point \(\varsigma\) which solves (10), with \(f\left(\varsigma\right)=\left(\varsigma^{2}-C^{2}\right)^{-1/2}\). The other conclusions follow from the definition of \(W\) and from the fact that \(\Lambda\) is the exterior of a geodesic ball of \(M\) centered at \(o\).
**Remark 7**: _We observe that if \(\lambda=0\), then \(G\) is the group of translations in the \(e_{k}\)-direction. In this case we have \(M\equiv\mathbb{R}^{n}\), and a domain \(\Lambda\) as in the hypothesis of Proposition 5 is the exterior of a geodesic ball of \(\mathbb{R}^{n}\) of radius \(r\). Then \(v_{r}\) given by (7) is a solution of (5) with zero boundary data. In particular, if \(n\geq 3\), its height at infinity is \(h\left(n,r\right)\), where \(h\left(n,r\right)\) is given by (8)._
**Proposition 8**: _Let \(G\), \(\pi\) and \(M=\left(\mathbb{R}^{n},\left\langle,\right\rangle_{G}\right)\), \(n\geq 2\), be as defined in (2), (3) and (4), respectively. Let \(\Lambda_{\rho}:=M-\overline{\mathfrak{B}_{\rho}\left(0\right)}\), where \(\mathfrak{B}_{\rho}\left(0\right)\) is the open geodesic ball of \(\mathbb{R}^{n}\) of radius \(\rho\) centered at the origin of \(\mathbb{R}^{n}\). Then \(v_{\rho}\in C^{2}\left(\Lambda_{\rho}\right)\cap C^{0}\left(\overline{\Lambda}_{\rho}\right)\) given by (7) is a non-negative solution of the Dirichlet problem (5) relative to \(\Lambda_{\rho}\) with \(v_{\rho}|_{\partial\Lambda_{\rho}}=0\), which is unbounded if \(n=2\) and satisfies \(|\nabla v_{\rho}|\circ\pi=\left|\overline{\nabla}v_{\rho}\right|\),_
\[\lim_{d\left(q\right)\to 0}|\nabla v_{\rho}\left(q\right)|=\infty\text{, }\lim_{d \left(q\right)\rightarrow\infty}|\nabla v_{\rho}\left(q\right)|=0 \tag{29}\]
_and_
\[\lim_{d\left(q\right)\rightarrow\infty}v_{\rho}\left(q\right)=h\left(n,\rho \right)\text{ if }n\geq 3 \tag{30}\]
_where \(d=d_{M}\left(.,\partial\Lambda_{\rho}\right)\), \(h\left(n,\rho\right)\) is given by (8)._
**Proof.** Let \(v_{\rho}:\mathbb{R}^{n}-\mathfrak{B}_{\rho}\left(0\right)\rightarrow\mathbb{R}\) be the function given by (7), \(\mathbb{R}^{n}\equiv\left\{x_{k}=0\right\}\). As the graph of \(v_{\rho}\) is a minimal graph in \(\mathbb{R}^{n+1}\) it follows that the graph of
\[u_{\rho}\left(x_{1},...,x_{n+1}\right):=v_{\rho}\left(x\right),\text{ }x=\underset{i\neq k}{\sum}x_{i}e_{i}\in\mathbb{R}^{n}-\mathfrak{B}_{\rho} \left(0\right),\]
is a minimal graph in \(\mathbb{R}^{n+2}\). In particular, \(\left|\overline{\nabla}u_{\rho}\left(x_{1},...,x_{n+1}\right)\right|=\left|\overline{\nabla}v_{\rho}\left(x\right)\right|\). Note that, denoting by \(\Omega_{\rho}\subset\mathbb{R}^{n+1}\) the domain of \(u_{\rho}\), we have that \(\Omega_{\rho}\) is a \(G\)-invariant domain whose helicoidal projection \(\pi\left(\Omega_{\rho}\right)\) on \(\mathbb{R}^{n}\) coincides, in this case, with the image of its orthogonal projection on \(\mathbb{R}^{n}\). From (3) we see that \(|x|=|\pi\left(x_{1},...,x_{n+1}\right)|\). As \(v_{\rho}\) is radial, it follows that
\[u_{\rho}\left(x_{1},...,x_{n+1}\right)=\left(v_{\rho}\circ\pi\right)\left(x_{1 },...,x_{n+1}\right)\]
and, then, \(u_{\rho}\) is a \(G\)-invariant function with \(u_{\rho}|_{\partial\Omega_{\rho}}=0\). It follows from Proposition 3 of [20] that \(v_{\rho}\in C^{2}\left(\Lambda_{\rho}\right)\cap C^{0}\left(\overline{\Lambda}_{\rho}\right)\) is a solution of the Dirichlet problem (5) relative to \(\Lambda_{\rho}\) with \(v_{\rho}|_{\partial\Lambda_{\rho}}=0\) and, moreover,
\[|\nabla v_{\rho}|\circ\pi=\left|\overline{\nabla}u_{\rho}\right|.\]
The other conclusions follow immediately from the definition of \(u_{\rho}\) and from the properties satisfied by \(v_{\rho}\), taking into account that \(d\left(q\right)\to 0\) (\(d\left(q\right)\rightarrow\infty\)) if and only if \(d_{E}\left(q,\partial\Lambda_{\rho}\right)\to 0\) (\(d_{E}\left(q,\partial\Lambda_{\rho}\right)\rightarrow\infty\)).
## Proof of the main results
The proofs of Theorems 1 and 2 follow directly from the results of this section, by using Proposition 3 of [20].
**Proposition 9**: _Let \(G\), \(\pi\) and \(M=\left(\mathbb{R}^{n},\left\langle,\right\rangle_{G}\right)\) be as defined in (2), (3) and (4), respectively. Let \(U\) be an exterior \(C^{2,\alpha}\)-domain in \(M\), \(U=\pi\left(\Omega\right)\), where \(\Omega\subset\mathbb{R}^{n+1}\) is a \(G\)-invariant \(C^{2,\alpha}\)-domain, and let \(d=d_{M}\left(.,\partial U\right)\). Let \(\varrho>0\) be the radius of the smallest geodesic ball of \(\mathbb{R}^{n}\) which contains \(M-U\), centered at the origin of \(\mathbb{R}^{n}\) if \(\lambda\neq 0\). Given \(s\geq 0\), there is a non-negative solution \(\vartheta_{s}\in C^{2,\alpha}\left(\overline{U}\right)\) of the Dirichlet problem (5) with \(\vartheta_{s}|_{\partial U}=0\), such that_
\[\sup_{\overline{U}}|\nabla\vartheta_{s}|=\sup_{\partial U}|\nabla\vartheta_{s }|=s,\]
\(\vartheta_{s}\) _is unbounded if \(n=2\) and \(s\neq 0\) and, if \(n\geq 3\), either_
\[\sup_{\overline{U}}\vartheta_{s}\leq h\left(n,\varrho\right) \tag{31}\]
_or there is a complete, non-compact, properly embedded \(\left(n-1\right)\)-dimensional submanifold \(\Sigma\) of \(M\), \(\Sigma\subset U\), such that_
\[\lim_{d(q)\rightarrow+\infty}\vartheta_{s}|_{\Sigma}\left(q\right)=h\left(n,\varrho\right), \tag{32}\]
_where \(h\left(n,\varrho\right)\) is given by (8). Moreover,_
\[\lim_{d(q)\rightarrow\infty}|\nabla\vartheta_{s}\left(q\right)|=0\]
_if \(\lambda=0\) or \(s=0\) or, if \(\lambda\neq 0\), \(3\leq n\leq 6\) and \(\vartheta_{s}\) is bounded._
**Proof.** As mentioned in Remark 7, if \(\lambda=0\) then \(M\equiv\mathbb{R}^{n}\) and the result is already covered by Theorem 1 of [1] for \(n\geq 3\) and by Theorem 1 of [19] if \(n=2\). The case \(s=0\) is trivial. Assume then \(s>0\) and \(\lambda\neq 0\).
Let \(\rho>0\) be such that \(\partial U\subset\mathfrak{B}_{\rho}\left(0\right)\), where \(\mathfrak{B}_{\rho}:=\mathfrak{B}_{\rho}\left(0\right)\) is the open geodesic ball of \(\mathbb{R}^{n}\) centered at the origin of \(\mathbb{R}^{n}\) and of radius \(\rho\). Let \(\Lambda_{\rho}=M-\overline{\mathfrak{B}_{\rho}\left(0\right)}\). From Proposition 8, there is \(v_{\rho}\in C^{\infty}\left(\Lambda_{\rho}\right)\), a solution of (5) relative to \(\Lambda_{\rho}\) with \(v_{\rho}|_{\partial\Lambda_{\rho}}=0\), satisfying (29). Since
\[\lim_{d_{M}\left(q,\partial\Lambda_{\rho}\right)\rightarrow\infty}|\nabla v_{ \rho}\left(q\right)|=0, \tag{33}\]
we can choose \(k>\rho\) such that
\[|\nabla v_{\rho}|_{\partial\mathfrak{B}_{k}\left(0\right)}\leq\frac{s}{2}, \tag{34}\]
where \(\mathfrak{B}_{k}\left(0\right)\) is the open geodesic ball of \(\mathbb{R}^{n}\) centered at origin and of radius \(k\). Let \(U_{k}=\mathfrak{B}_{k}\left(0\right)\cap U\) and define
\[T_{k}:=\left\{t\geq 0\;;\;\exists\,w_{t}\in C^{2,\alpha}\left(\overline{U}_{k}\right),\ \mathfrak{M}\left(w_{t}\right)=0,\ \sup_{\overline{U}_{k}}|\nabla w_{t}|\leq s,\ w_{t}|_{\partial U}=0,\ w_{t}|_{\partial\mathfrak{B}_{k}\left(0\right)}=t\right\}. \tag{35}\]
Note that the constant function \(w_{0}\equiv 0\) on \(\overline{U}_{k}\) satisfies all the conditions in (35), hence \(T_{k}\neq\emptyset\). Moreover, \(\sup T_{k}<\infty\) since
\[\sup_{\overline{U}_{k}}|\nabla w_{t}|\leq s\]
for all \(t\in T_{k}\). Now, since the maximum and comparison principles are applicable to the operator \(\mathfrak{M}\), we can use the same approach as in the proof of Theorem 1 of [1] to show that \(t_{k}:=\sup T_{k}\in T_{k}\),
\[\sup_{\overline{U}_{k}}|\nabla w_{t_{k}}|=s,\ \sup_{\partial\mathfrak{B}_{k}}|\nabla w_{t_{k}}|\leq s/2\]
and, since \(\partial U_{k}=\partial U\cup\partial\mathfrak{B}_{k}\left(0\right)\), to conclude that
\[\sup_{\overline{U}_{k}}|\nabla w_{t_{k}}|=\sup_{\partial U}|\nabla w_{t_{k}}| =s. \tag{36}\]
The proof of these facts follows essentially the same steps as the aforementioned theorem (see pp. 3067 and 3068 of [1]), so we do not repeat it here. Now, letting \(k\rightarrow\infty\) and using a diagonal argument, we obtain a subsequence of \(\left(w_{t_{k}}\right)\) which converges uniformly in the \(C^{2}\) norm on compact subsets of \(\overline{U}\) to a function \(\vartheta_{s}\in C^{2,\alpha}\left(\overline{U}\right)\) satisfying \(\mathfrak{M}\left(\vartheta_{s}\right)=0\) in \(U\), \(\vartheta_{s}|_{\partial U}=0\), which is non-negative and such that \(\sup_{\overline{U}}|\nabla\vartheta_{s}|=\sup_{\partial U}|\nabla\vartheta_{s}|=s\). In particular, from elliptic PDE regularity theory ([12]), we have \(\vartheta_{s}\in C^{\infty}\left(U\right)\).
We now show that if \(n=2\) then \(\vartheta_{s}\) is unbounded and that, if \(n\geq 3\), we have either (31) or (32).
Let \(\varrho>0\) be the radius of the smallest open geodesic ball of \(\mathbb{R}^{n}\) which contains \(M-U\), centered at the origin of \(\mathbb{R}^{n}\), and denote this ball by \(\mathfrak{B}_{\varrho}\left(0\right)\). We have \(\partial U\subset\overline{\mathfrak{B}_{\varrho}\left(0\right)}\) and we can conclude that \(\partial U\cap\partial\mathfrak{B}_{\varrho}\neq\emptyset\). Let \(\Lambda_{\varrho}=M-\overline{\mathfrak{B}_{\varrho}\left(0\right)}\). From Proposition 8, there is \(v_{\varrho}\in C^{\infty}\left(\Lambda_{\varrho}\right)\), \(v_{\varrho}|_{\partial\Lambda_{\varrho}}=0\), a solution of (5) satisfying (29) and, if \(n\geq 3\), (30) relative to \(\Lambda_{\varrho}\). Otherwise, if \(n=2\), \(v_{\varrho}\) is unbounded and satisfies the equalities in (29).
Let \(q_{0}\in\partial U\cap\partial\mathfrak{B}_{\varrho}\). Since
\[\lim_{d_{M}\left(q,\partial\Lambda_{\varrho}\right)\to 0}|\nabla v_{ \varrho}\left(q\right)|=+\infty,\,\sup_{\overline{U}}|\nabla\vartheta_{s}|= \sup_{\partial U}|\nabla\vartheta_{s}|=s<+\infty\]
and \(v_{\varrho}\left(q_{0}\right)=\vartheta_{s}\left(q_{0}\right)=0\), it follows that there is an open set \(V_{q_{0}}\) in \(U\cap\Lambda_{\varrho}\), with \(q_{0}\in\partial V_{q_{0}}\), such that \(\vartheta_{s}<v_{\varrho}\) in \(V_{q_{0}}\). We claim that \(V_{q_{0}}\) is unbounded.
Suppose that \(V_{q_{0}}\) is bounded. Since \(\vartheta_{s}|_{\partial V_{q_{0}}}=v_{\varrho}|_{\partial V_{q_{0}}}\), it follows that \(\vartheta_{s}|_{\overline{V}_{q_{0}}}\) and \(v_{\varrho}|_{\overline{V}_{q_{0}}}\) are distinct solutions to the Dirichlet problem
\[\left\{\begin{array}{c}\mathfrak{M}(f)=0\text{ in }V_{q_{0}},\,f\in C^{2}\left( V_{q_{0}}\right)\cap C^{0}(\overline{V}_{q_{0}})\\ f|_{\partial V_{q_{0}}}=\vartheta_{s}|_{\partial V_{q_{0}}}\end{array}\right., \tag{37}\]
a contradiction, since a solution of (37) is unique if the domain is bounded. It follows that \(V_{q_{0}}\) is unbounded. Note that there are two possibilities for \(\partial V_{q_{0}}\): either \(\partial V_{q_{0}}\) is bounded (in this case \(V_{q_{0}}=\Lambda_{\varrho}\)) or \(\partial V_{q_{0}}\) is unbounded (in this case, setting \(\Sigma=\partial V_{q_{0}}\), we have that \(\Sigma\subset\overline{U}\cap\overline{\Lambda}_{\varrho}\) is a complete \(\left(n-1\right)\)-dimensional submanifold of \(M\)).
Assume first that \(n\geq 3\). In this case \(v_{\varrho}\) is bounded. If \(\partial V_{q_{0}}\) is bounded then, as in this case \(V_{q_{0}}=\Lambda_{\varrho}\), we have \(\vartheta_{s}<v_{\varrho}\) on \(\Lambda_{\varrho}\) and so we obtain (31). If \(\partial V_{q_{0}}\) is unbounded, as \(\vartheta_{s}|_{\partial V_{q_{0}}}=v_{\varrho}|_{\partial V_{q_{0}}}\), we conclude that
\[\lim_{d(q)\rightarrow+\infty}\vartheta_{s}|_{\partial V_{q_{0}}}\left(q\right) =h\left(n,\varrho\right).\]
Assume now that \(n=2\). Note first that on \(\Lambda_{\varrho}\subset\mathbb{R}^{2}\) it is well known that, for all \(s>0\), there is a half-catenoid \(v_{\varrho,s}\in C^{2}\left(\overline{\Lambda}_{\varrho}\right)\) which is unbounded (logarithmic growth), satisfies \(v_{\varrho,s}|_{\partial\Lambda_{\varrho}}=0\) and \(\left|\overline{\nabla}v_{\varrho,s}\right|=s\) on \(\partial\Lambda_{\varrho}\) (\(=\partial\mathfrak{B}_{\varrho}\)). The same arguments used in Proposition 8 give us that \(v_{\varrho,s}\) is a solution of the Dirichlet problem (5) with zero boundary data relative to \(\Lambda_{\varrho}\) and satisfies \(\left|\nabla v_{\varrho,s}\right|=s\) on \(\partial\Lambda_{\varrho}\), since \(\mathfrak{B}_{\varrho}\) is centered at the origin of \(\mathbb{R}^{2}\). As \(\sup_{\partial U}\left|\nabla\vartheta_{s}\right|=s\) and \(\partial U\cap\partial\mathfrak{B}_{\varrho}\neq\emptyset\), there is \(0<s^{\prime}<s\) such that \(v_{\varrho,s^{\prime}}\in C^{2}\left(\overline{\Lambda}_{\varrho}\right)\) as described above satisfies \(v_{\varrho,s^{\prime}}<\vartheta_{s}\) in some open set \(V\subset U\cap\Lambda_{\varrho}\). The same arguments used before to prove that \(V_{q_{0}}\) is unbounded show that \(V\) is unbounded, and we see that if \(\partial V\) is bounded then \(V=\Lambda_{\varrho}\). Thus, if \(\partial V\) is bounded, we have \(v_{\varrho,s^{\prime}}<\vartheta_{s}\) on \(\Lambda_{\varrho}\) and, then, \(\vartheta_{s}\) is unbounded. If \(\partial V\) is unbounded, as \(v_{\varrho,s^{\prime}}|_{\partial V}=\vartheta_{s}|_{\partial V}\), we have
\[\lim_{d(q)\rightarrow+\infty}\vartheta_{s}|_{\partial V}\left(q\right)=+\infty\]
since
\[\lim_{d(q)\rightarrow+\infty}v_{\varrho,s^{\prime}}|_{\partial V}\left(q\right)=+\infty\]
and, therefore, \(\vartheta_{s}\) is unbounded.
We now prove the last assertion of the proposition.
As \(\vartheta_{s}\in C^{2,\alpha}\left(\overline{U}\right)\) is a solution of the Dirichlet problem (5) with \(\vartheta_{s}|_{\partial U}=0\), it follows from Proposition 3 of [20] that \(u=\vartheta_{s}\circ\pi\in C^{2,\alpha}\left(\overline{\Omega}\right)\) satisfies \(\mathcal{M}\left(u\right)=0\) in \(\overline{\Omega}\) with \(u|_{\partial\Omega}=0\) and \(\left|\overline{\nabla}u\right|=\left|\nabla\vartheta_{s}\right|\circ\pi\), where \(\mathcal{M}\) is the operator defined in (1). Note that \(\Omega\) is necessarily unbounded, with \(d_{E}\left(p\right)\rightarrow\infty\) if and only if \(d\left(\pi\left(p\right)\right)\rightarrow\infty\), where \(d_{E}\) is the Euclidean distance in \(\mathbb{R}^{n+1}\) to \(\partial\Omega\). Suppose that
\[\lim_{d_{E}\left(p\right)\rightarrow\infty}\left|\overline{\nabla}u\left(p \right)\right|\neq 0.\]
Then there is \(\varepsilon>0\) and a sequence \(\left(p_{n}\right)\) in \(\Omega\), with \(d_{E}\left(p_{n}\right)\rightarrow\infty\) as \(n\rightarrow\infty\), such that \(\left|\overline{\nabla}u\left(p_{n}\right)\right|\geq\varepsilon\) for all \(n\) large enough, say \(n\geq n_{0}\). For each \(n\in\mathbb{N}\), define
\[\Omega_{n}=\left\{p\in\mathbb{R}^{n+1};p+p_{n}\in\Omega\right\}\]
and consider the sequence of functions \(u_{n}:\Omega_{n}\subset\mathbb{R}^{n+1}\longrightarrow\mathbb{R}\) given by \(u_{n}\left(p\right)=u\left(p+p_{n}\right)\). Note that \(0\in\Omega_{n}\) for all \(n\) since \(\left(p_{n}\right)\subset\Omega\). Also
\[\mathbb{R}^{n+1}=\bigcup_{n\in\mathbb{N}}\Omega_{n}.\]
Indeed, given \(w\in\mathbb{R}^{n+1}\), if the sequence \((w+p_{n})_{n}\) were contained in \(\mathbb{R}^{n+1}-\Omega\), then, since \(\pi\left(\mathbb{R}^{n+1}-\Omega\right)\) is compact, we would have \(d_{E}\left(w+p_{n}\right)\leq R\) for all \(n\), for some \(R>0\), a contradiction, since \(d_{E}\left(p_{n}\right)\rightarrow\infty\). It follows that, as \(u_{n}\left(0\right)=u\left(p_{n}\right)\), we have \(\left|\overline{\nabla}u_{n}\left(0\right)\right|\geq\varepsilon\) for all \(n\geq n_{0}\). Note that \((u_{n})\) is uniformly bounded since, by hypothesis, \(n\geq 3\) and \(\vartheta_{s}\) is bounded. Then \((u_{n})\) has a subsequence \((u_{n_{k}})\) which converges uniformly on compact subsets of \(\mathbb{R}^{n+1}\) to a bounded function \(\widetilde{u}\) defined on the whole of \(\mathbb{R}^{n+1}\) which satisfies \(\mathcal{M}\left(\widetilde{u}\right)=0\). Assume that the dimension of \(M\) satisfies \(3\leq n\leq 6\). Then \(\Omega\subset\mathbb{R}^{m}\), \(4\leq m\leq 7\). From the Bernstein Theorem, extended to \(\mathbb{R}^{m}\), \(2\leq m\leq 7\), by Simons ([21]), and false for \(m\geq 8\) ([2]), it follows that \(\widetilde{u}\) has to be constant. Therefore, we cannot have \(\left|\overline{\nabla}u_{n_{k}}\left(0\right)\right|\geq\varepsilon\) for all \(n_{k}\geq n_{0}\), a contradiction. Hence
\[\lim_{d_{E}\left(p\right)\rightarrow\infty}\left|\overline{\nabla}u\left(p \right)\right|=0.\]
From Proposition 3 of [20] we have \(\left|\overline{\nabla}u\right|=\left|\nabla\vartheta_{s}\right|\circ\pi\) and, therefore
\[\lim_{d(q)\rightarrow\infty}\left|\nabla\vartheta_{s}\left(q\right)\right|=0.\]
**Remark 10**: _If \(\lambda=0\) and \(n=2\), it was proved in [19] that, setting_
\[\vartheta_{\infty}\left(p\right):=\lim_{s\rightarrow\infty}\vartheta_{s} \left(p\right),p\in U\text{,} \tag{38}\]
\(\vartheta_{\infty}\in C^{2}\left(U\right)\cap C^{0}\left(\overline{U}\right)\) _is an unbounded solution of the Dirichlet problem (5) with \(\vartheta_{\infty}|_{\partial U}=0\), which satisfies_

\[\lim_{d\left(q\right)\rightarrow\infty}\left|\nabla\vartheta_{\infty}\left(q\right)\right|=0.\]
_If \(\lambda=0\) and \(n\geq 3\), it was proved in [1] that \(\vartheta_{\infty}\), as defined in (38), is in \(C^{2}\left(U\right)\), is a bounded solution of the Dirichlet problem (5), and its graph is contained in a \(C^{1,1}\)-manifold \(\Upsilon\subset\overline{U}\times\mathbb{R}\) such that \(\partial\Upsilon=\partial U\)._
**Proposition 11**: _Let \(G\), \(\pi\) and \(M=\left(\mathbb{R}^{n},\left\langle,\right\rangle_{G}\right)\), \(n\geq 3\), be as defined in (2), (3) and (4), respectively. Let \(U\) be an exterior \(C^{2,\alpha}\)-domain in \(M\), \(U=\pi\left(\Omega\right)\), where \(\Omega\subset\mathbb{R}^{n+1}\) is a \(G\)-invariant \(C^{2,\alpha}\)-domain, and let \(d=d_{M}\left(.,\partial U\right)\). Assume that \(M-U\) satisfies the interior sphere condition of radius \(r>0\). Let \(\mathcal{L}=\mathcal{L}\left(r,n,\lambda,a\right)\) be as given in (11), where \(\lambda\) and \(a\) are given in the definition of \(G\). Then, given \(c\in\left[0,\mathcal{L}\right]\), there is \(w_{c}\in C^{2}\left(U\right)\cap C^{0}\left(\overline{U}\right)\), a solution of the Dirichlet problem (5) relative to \(U\) with \(w_{c}|_{\partial U}=0\), which satisfies_
\[\underset{d(q)\rightarrow\infty}{\lim}w_{c}\left(q\right)=c.\]
_In particular, if \(c\in\left[0,\mathcal{L}\right)\), then \(w_{c}\in C^{2,\alpha}\left(\overline{U}\right)\)._
**Proof.** If \(\lambda=0\) the result is already covered by Theorem 1 of [1]. Assume \(\lambda\neq 0\). Given \(c\in\left[0,\mathcal{L}\right]\), define
\[\digamma=\left\{f\in C^{0}\left(\overline{U}\right)\;;\;f\text{ is a subsolution relative to }\mathfrak{M},\ f|_{\partial U}=0\text{ and }\limsup_{d(q)\rightarrow\infty}f\left(q\right)\leq c\right\}. \tag{39}\]
Note that \(\digamma\neq\emptyset\) since \(f_{0}\equiv 0\in\digamma\). From the comparison principle we have \(f\leq c\) for all \(f\in\digamma\). From the Perron method applied to the operator \(\mathfrak{M}\) ([12], Section 2.8; [20], Section 3), we conclude that
\[w_{c}\left(q\right):=\sup\left\{f\left(q\right);\text{ $f\in\digamma$}\right\}, \text{ $q\in\overline{U}$},\]
is in \(C^{\infty}\left(U\right)\) and satisfies \(\mathfrak{M}\left(w_{c}\right)=0\) in \(U\). We will show now that
\[\underset{d(q)\rightarrow\infty}{\lim}\,w_{c}\left(q\right)=c. \tag{40}\]
Consider \(\alpha>0\) such that \(\mathfrak{B}_{\alpha}\left(0\right)\), the geodesic ball of \(\mathbb{R}^{n}\) centered at the origin of \(\mathbb{R}^{n}\) and of radius \(\alpha\), contains \(M-U\) and such that \(v_{\alpha}\left(\infty\right)>c\), where \(v_{\alpha}\) is as defined in (7) (note that \(v_{\alpha}\left(\infty\right)=\alpha h\left(n,1\right)\) and \(h\left(n,1\right)>0\)). Define now \(f\in C^{0}\left(\overline{U}\right)\) by
\[f\left(q\right)=\left\{\begin{array}[c]{c}0\text{ if }q\in\overline{U}\cap \mathfrak{B}_{\alpha}\left(0\right)\\ \max\{0,v_{\alpha}\left(q\right)-\left(v_{\alpha}\left(\infty\right)-c\right) \}\text{, if }q\in M-\mathfrak{B}_{\alpha}\left(0\right).\end{array}\right.\]
From Proposition 8, \(f\) is a non-negative (generalized) subsolution of the Dirichlet problem (5) (for zero boundary data) relative to \(U\), which satisfies
\[\underset{d(q)\rightarrow\infty}{\lim}\text{$f\left(q\right)=c.$} \tag{41}\]
It follows that \(f\in\digamma\) and that \(f\leq w_{c}\leq c\). Then we have (40).
Given \(q_{0}\in\partial U\), by hypothesis there is an open geodesic ball of \(M\), say \(B_{r}\), of radius \(r>0\), contained in \(M-U\) and such that \(\partial B_{r}\) is tangent to \(\partial U\) (\(=\partial\left(M-U\right)\)) at \(q_{0}\). From Corollary 6, there is a (generalized) supersolution \(W\in C^{0}\left(M-B_{r}\right)\) relative to the operator \(\mathfrak{M}\) on \(M-B_{r}\) such that \(c\leq W\left(\infty\right)=\mathcal{L}\), with \(W|_{\partial B_{r}}=0\), which is \(C^{1}\) in a neighborhood of \(\partial B_{r}\) in \(M-B_{r}\) and such that
\[\lim_{d_{M}\left(q,\partial B_{r}\right)\to 0}|\nabla W\left(q\right)|=+\infty.\]
From the comparison principle, since \(\overline{U}\subset M-B_{r}\), it follows that on \(\overline{U}\), we have \(0\leq w_{c}\leq W\). As \(q_{0}\) is arbitrary, we conclude that \(w_{c}\in C^{0}\left(\overline{U}\right)\) with \(w_{c}|_{\partial U}=0\).
Assume that \(0\leq c<\mathcal{L}\). Let \(\delta=\left(c+\mathcal{L}\right)/2\) and let \(L\left(\sigma\right):=\mathcal{L}\left(\sigma,n,\lambda,a\right)\), \(\sigma\in\left(0,r\right]\), where \(\mathcal{L}\) is given by (11). Since \(L\in C^{0}(0,r]\), either there is \(\sigma_{0}\in\left(0,r\right)\) such that \(L\left(\sigma_{0}\right)=\delta\), or \(\delta<L(\sigma)\) for all \(\sigma\in\left(0,r\right)\). Take \(r^{\prime}=\sigma_{0}\) in the first case and \(r^{\prime}\) any point in \(\left(0,r\right)\) in the second case. Let \(B_{r^{\prime}}\) be the open geodesic ball of \(M\), \(B_{r^{\prime}}\subset B_{r}\), with the same center as \(B_{r}\). Consider the corresponding \(t_{0}>0\) and the function \(w\in C^{2}\left(\Lambda_{0}\right)\cap C^{0}\left(\overline{\Lambda}_{0}\right)\), \(w|_{\partial B_{r^{\prime}}}=0\), with \(\mathfrak{M}\left(w\right)\leq 0\), as given in Corollary 6, where \(\Lambda_{0}=\left\{q\in M-B_{r^{\prime}};d_{M}\left(q,\partial B_{r^{\prime}}\right)\leq t_{0}\right\}\). If the second case occurs, the height of this \(w\) is greater than \(\delta\) at the corresponding distance \(t_{0}\) to \(\partial B_{r^{\prime}}\) and, as \(w\) is radial with respect to the center of \(B_{r^{\prime}}\), there is \(t^{\prime}_{0}<t_{0}\) such that, at the distance \(t^{\prime}_{0}\) from \(\partial B_{r^{\prime}}\), the height of \(w\) is \(\delta\). In any case, there is \(0<t^{\prime}_{0}\leq t_{0}\) such that, setting
\[W_{r^{\prime}}\left(q\right)=\left\{\begin{array}{l}w\left(q\right)\text{ if }q\in\Lambda^{\prime}_{0}\\ \delta\text{ if }q\in M-\Lambda^{\prime}_{0}\end{array}\right.\]
where
\[\Lambda^{\prime}_{0}:=\left\{q\in\Lambda;d\left(q\right)\leq t^{\prime}_{0}\right\},\]

the function \(W_{r^{\prime}}\) satisfies the same properties as the function \(W\) given in Corollary 6. Note that it is possible to translate the graph of \(W_{r^{\prime}}\) in the \(\partial/\partial t\)-direction (\(e_{k}\)-direction) in such a way that its height at infinity lies in \(\left[c,\delta\right)\) and such that \(\Gamma\), the intersection of the hypersurface resulting from this displacement with \(\left\{t\geq 0\right\}\), satisfies
\[\partial\Gamma=\Gamma\cap B_{r}=\partial B_{r^{\prime\prime}},\]
with \(B_{r^{\prime\prime}}\subset B_{r}\) a geodesic open ball of \(M\) with the same center as \(B_{r}\) and radius \(r^{\prime\prime}\), with \(r^{\prime}<r^{\prime\prime}<r\). Now, move \(\Gamma\), keeping \(\partial\Gamma\) on \(M\) and the center of \(\partial\Gamma\) on the geodesic of \(M\) linking the center of \(B_{r}\) to \(q_{0}\in\partial U\), until \(\partial\Gamma\) touches \(\partial U\) at \(q_{0}\), and call the final hypersurface \(\widetilde{\Gamma}\). Observe that such a displacement is an isometry of \(M\times\mathbb{R}\). Denote by \(\widetilde{B}_{r^{\prime\prime}}\) the geodesic ball contained in \(B_{r}\) of radius \(r^{\prime\prime}\) such that \(\partial\widetilde{B}_{r^{\prime\prime}}=\partial\widetilde{\Gamma}\). We then have that \(W_{r^{\prime\prime}}:M-\widetilde{B}_{r^{\prime\prime}}\longrightarrow\mathbb{R}\) is a (generalized) supersolution relative to \(\mathfrak{M}\). Moreover, since our translation in the \(\partial/\partial t\)-direction is small enough, \(W_{r^{\prime\prime}}\) satisfies \(\mathfrak{M}\left(W_{r^{\prime\prime}}\right)=0\) in \(\overline{\Lambda}_{0}^{\prime\prime}\), where \(\Lambda_{0}^{\prime\prime}\) is a neighborhood in \(U\) such that \(q_{0}\in\partial\Lambda_{0}^{\prime\prime}\). In particular, \(W_{r^{\prime\prime}}\in C^{\infty}\left(\overline{\Lambda}_{0}^{\prime\prime}\right)\). From the comparison principle, since \(0\leq w_{c}|_{\partial U}\leq W_{r^{\prime\prime}}|_{\partial U}\) and \(c\leq W_{r^{\prime\prime}}\left(\infty\right)\), we conclude that \(0\leq w_{c}\leq W_{r^{\prime\prime}}\) on \(\overline{U}\). As \(w_{c}\left(q_{0}\right)=W_{r^{\prime\prime}}\left(q_{0}\right)\) and \(W_{r^{\prime\prime}}\in C^{\infty}\left(\overline{\Lambda}_{0}^{\prime\prime}\right)\), it follows that \(w_{c}\in C^{1}\left(\overline{U}\right)\) and, from elliptic PDE regularity theory ([12]), that \(w_{c}\in C^{2,\alpha}\left(\overline{U}\right)\cap C^{\infty}\left(U\right)\).
|
2310.12260 | Measuring Thermal Profiles in High Explosives using Neural Networks | We present a new method for calculating the temperature profile in high
explosive (HE) material using a Convolutional Neural Network (CNN). To
train/test the CNN, we have developed a hybrid experiment/simulation method for
collecting acoustic and temperature data. We experimentally heat cylindrical
containers of HE material until detonation/deflagration, where we continuously
measure the acoustic bursts through the HE using multiple acoustic transducers
lined around the exterior container circumference. However, measuring the
temperature profile in the HE in experiment would require inserting a high
number of thermal probes, which would disrupt the heating process. Thus, we use
two thermal probes, one at the HE center and one at the wall. We then use
finite element simulation of the heating process to calculate the temperature
distribution, and correct the simulated temperatures based on the experimental
center and wall temperatures. We calculate temperature errors on the order of
15°C, which is approximately 12% of the range of temperatures in the
experiment. We also investigate how the algorithm accuracy is affected by the
number of acoustic receivers used to collect each measurement and the
resolution of the temperature prediction. This work provides a means of
assessing the safety status of HE material, which cannot be achieved using
existing temperature measurement methods. Additionally, it has implications for a
range of other applications where internal temperature profile measurements
would provide critical information. These applications include detecting
chemical reactions, observing thermodynamic processes like combustion,
monitoring metal or plastic casting, determining the energy density in thermal
storage capsules, and identifying abnormal battery operation. | John Greenhall, David K. Zerkle, Eric S. Davis, Robert Broilo, Cristian Pantea | 2023-10-18T18:49:21Z | http://arxiv.org/abs/2310.12260v1 | # Measuring Thermal Profiles in High Explosives using Neural Networks
###### Abstract
We present a new method for calculating the temperature profile in high explosive (HE) material using a Convolutional Neural Network (CNN). To train/test the CNN, we have developed a hybrid experiment/simulation method for collecting acoustic and temperature data. We experimentally heat cylindrical containers of HE material until detonation/deflagration, where we continuously measure the acoustic bursts through the HE using multiple acoustic transducers lined around the exterior container circumference. However, measuring the temperature profile in the HE in experiment would require inserting a high number of thermal probes, which would disrupt the heating process. Thus, we use two thermal probes, one at the HE center and one at the wall. We then use finite element simulation of the heating process to calculate the temperature distribution, and correct the simulated temperatures based on the experimental center and wall temperatures. We calculate temperature errors on the order of 15\({}^{\circ}\)C, which is approximately 12% of the range of temperatures in the experiment. We also investigate how the algorithm accuracy is affected by the number of acoustic receivers used to collect each measurement and the resolution of the temperature prediction. This work provides a means of assessing the safety status of HE material, which cannot be achieved using existing temperature measurement methods. Additionally, it has implications for a range of other applications where internal temperature profile measurements would provide critical information. These applications include detecting chemical reactions, observing thermodynamic processes like combustion, monitoring metal or plastic casting, determining the energy density in thermal storage capsules, and identifying abnormal battery operation.
## 1 Introduction
Noninvasive measurement of internal temperature distribution is critical to a range of applications, including detecting chemical reactions, observing thermodynamic processes like combustion, monitoring metal or plastic casting, determining the energy density in thermal storage capsules, identifying abnormal battery operation, and assessing the safety status of high explosives (HE). Currently, there are no good noninvasive techniques for measuring temperature distribution in a sealed container.
Classical thermometry techniques are limited to measuring the outside temperature of the container or require puncturing the container, which can interfere with the process being monitored and pose a safety hazard. Additionally, these techniques are typically limited in the number of internal locations where temperature can be measured, and the embedded instruments can interfere with the process being monitored. Alternatively, acoustic techniques have been developed to enable measuring temperature distributions at an arbitrary number of internal points without interfering with the physical process.[1, 2, 3, 4, 5] These techniques are based on acoustic Time-of-Flight (ToF) measurements using an array of acoustic transducers. One at a time, each transducer transmits an acoustic burst, which then propagates through the material to the receivers. The time required for the acoustic bursts to travel between each transmitter/receiver pair is dependent on the sound speed throughout the material, which is dependent on the temperature distribution. The temperature is measured using either a 2-step or 3-step process. In the 2-step process, the sound speed distribution is calculated directly from the measured waveforms using techniques such as full-waveform inversion[6, 7, 8] or a convolutional encoder-decoder network,[9] and then
temperature is determined from sound speed using an empirical model for the given material. In the 3-step process, the ToF is measured from the waveforms, the sound speed distribution is determined using reverse-time migration [10, 11, 12], and then the temperature is calculated from an empirical model. However, demonstration of the existing acoustic methods is limited to measuring temperature in single-phase (gas, liquid, or solid) materials, and the techniques require the transducers to be in direct contact with the material.
In contrast, many applications require temperature distribution measurements in other materials, and they require a noninvasive measurement, i.e. transducers must measure through the container walls. In this case, some of the acoustic burst energy propagates through the internal material as a bulk wave, while the remaining energy travels through the container walls as guided waves. As a result, the guided waves interfere with the bulk waves, which inhibits implementing waveform inversion or reverse-time migration. When measuring highly attenuating materials, lower acoustic frequencies are required, which increases the burst durations and further increases the overlap between different arrivals. In previous work, bulk wave arrivals were isolated by using cross-correlation with broad-band chirps [13] or using a Convolutional Neural Network (CNN) [14]. However, to measure sound speed, and, thus temperature, these techniques still require the use of reverse-time migration, which can be highly sensitive to the initial sound speed estimate and to error in the estimated arrival time.
To overcome the limitations of existing temperature measurement techniques, we present a novel technique based on time-domain acoustic measurements processed via a CNN. In contrast with traditional temperature sensors and existing acoustic methods, our technique enables measuring the temperature profile through a material, it is noninvasive, and it works for challenging, highly attenuating materials such as HE. To train and test the technique, we utilize a novel mixture of experimental and simulated data to provide acoustic and temperature profile measurements, respectively. We conduct experiments/simulations of a cylindrical container filled with HE (pentolite 50/50) as it is heated to the point of detonation or deflagration to provide a variety of thermal profiles. This technique enables measuring real-time temperature profiles in a material noninvasively, which is not possible using existing techniques. In addition to monitoring the safety status of HE, this technique could be invaluable for a wide range of other applications including assessing capacity in thermal storage systems, quantifying performance in molten salt reactors, measuring chemical kinetics, and monitoring material composition in various industrial processes, to name a few.
## 2 Methods
### Experimental acoustic and thermocouple measurements
The goal of this work is to use a CNN to measure the temperature profile within the HE based on the acoustic bursts transmitted between an acoustic transmitter (Tx) and one or more receivers (Rx). To enable CNN training and testing, we will acquire hybrid experimental/simulation data. Figure 1 shows the experimental data collection process. A cylindrical container (Al-6061) with 144 mm inner diameter (\(2R\)), 6.4 mm wall thickness, and 200 mm height is equipped with 16 piezoelectric transducers (STEMINC SMD07T05R411), evenly spaced around the container circumference, and two thermocouples are inserted into the HE at the wall (\(r=R\)) and center (\(r=0\)) at approximately the same height as the acoustic transducers (Figure 1(a)). Over the course of an experiment, heaters placed at the bottom of the container gradually heat the HE until it detonates or deflagrates. During each experiment, we collect a set of acoustic (Figure 1(b)-(c)) and thermocouple measurements (Figure 1(d)) at 2 min intervals. Due to the high acoustic attenuation within the HE, we must select a relatively low excitation frequency [14]. We utilize a Gaussian burst with 10 V\({}_{\text{PP}}\) amplitude, 350 kHz center frequency, and 150 kHz standard deviation. Figure 1(b) shows a cross-section of the acoustic waves propagating through the HE and container from one transmitter (Tx) to the remaining 15 receivers (Rx). Figure 1(c) shows an example acoustic measurement, which consists of 15 waveforms, transmitted from one Tx and received by the remaining Rx, with lines indicating the theoretical arrival
times for the first bulk (red) and guided (green) wave arrivals. At each time step in the experiment, we repeat this acoustic measurement using each of the Tx, one at a time.
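For illustration, a comparable excitation signal can be generated as follows; this is a minimal sketch in Python, where the sampling rate and window length are our own assumptions rather than values reported in this work:

```python
# Sketch of the Gaussian excitation burst: 10 Vpp amplitude, 350 kHz
# center frequency, 150 kHz standard deviation in the frequency domain.
import numpy as np

fs = 25e6                      # sampling rate [Hz] (assumed)
fc, sigma_f = 350e3, 150e3     # center frequency and spectral std [Hz]
sigma_t = 1.0 / (2 * np.pi * sigma_f)   # equivalent time-domain std [s]

t = np.arange(-5 * sigma_t, 5 * sigma_t, 1 / fs)
envelope = np.exp(-t**2 / (2 * sigma_t**2))
x_ex = 5.0 * envelope * np.cos(2 * np.pi * fc * t)  # 10 Vpp -> 5 V amplitude
```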
### Simulated temperature profiles in HE
The thermocouple data provides temperature information at two locations, \(r=0\) and \(R\) (Figure 1(d)), but measuring the temperature profile with a useful amount of radial resolution would require a significant number of thermocouples that would interfere with the HE heating process. Thus, to acquire temperature profiles, we employ axisymmetric numerical simulations in COMSOL, based on existing HE modeling methodologies which account for the heat transfer, phase change, species transfer, and natural convection within the HE [15, 16]. Figure 2 shows the numerical simulation setup for the axisymmetric HE container. We utilize the built-in heat transfer module to simulate the HE and container temperatures as they are heated from approximately 20 \({}^{\circ}\)C by the heater, which is represented by ramping up and then holding the temperature at the heater boundary at 180 \({}^{\circ}\)C. Pentolite 50/50 consists of TNT (50%), which has a melting temperature of approximately 80 \({}^{\circ}\)C, and PETN (50%), which melts at approximately 140 \({}^{\circ}\)C. Thus, when the temperature satisfies 80 \({}^{\circ}\)C \(<T<\) 140 \({}^{\circ}\)C, the TNT melts and the embedded PETN particles begin to sink. When the temperature exceeds 140 \({}^{\circ}\)C, the PETN also melts, and the two species can diffuse into one another. This results in gradients in the material concentrations, which we represent using the species transport and laminar flow mixture model modules in COMSOL.

Figure 1: Experimental data collection. (a) A container filled with HE is heated from below. (b) A cross-section shows the acoustic bulk and guided waves transmitted/received between 16 acoustic transducers. (c) Example acoustic measurement from one Tx to 15 Rx. (d) Two thermocouples measure HE temperature at the wall and center of the container over the course of the experiment.

Figure 2: Simulated HE heating. (a) An axisymmetric finite element model used to compute the temperature distribution at the transducer cross-section as a function of radial position \(r\). (b) Selected example radial temperature profiles at various experiment times. (c) Colorplot showing the temperature as a function of radial position \(r\) and time over the course of the experiment.
After completing the simulation, we select a line from \(r=0\) to \(r=R\), at the same height at which the acoustic transducers are mounted. Figure 2(b) shows several radial temperature profiles at various steps throughout the experiment. Figure 2(c) shows a colorplot of the temperature as a function of radial position and time throughout the experiment.
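For reference, resampling one such simulated radial profile onto a uniform radial grid (as used later when the profiles are interpolated at \(N_{pts}\) locations) is straightforward; the following is a minimal sketch in Python, with variable names of our own choosing:

```python
import numpy as np

def resample_profile(r_sim, T_sim, R, N_pts):
    # Interpolate one simulated radial temperature profile T_sim(r_sim)
    # onto N_pts evenly spaced radial locations in 0 <= r <= R.
    r_new = np.linspace(0.0, R, N_pts)
    return r_new, np.interp(r_new, r_sim, T_sim)
```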
## 3 Machine learning with hybrid measurements
Prior to performing ML, we preprocess the experimental and simulated measurements as illustrated in Figure 3. Each acoustic measurement consists of an \(N_{t}\times N_{Rx}\) array of waveforms, where \(N_{Rx}\) is the number of waveforms used, and each waveform is a time series of length \(N_{t}\). We investigate the effect of the number \(N_{Rx}\) of Rx measurements used, where we select \(N_{Rx}\) to be an odd number of Rx opposing the Tx. We then reduce the noise amplitude and emphasize acoustic signals that are similar to the excitation \(x_{ex}\) by cross-correlating the raw waveforms \(X_{w}\) with the excitation signal to get \(X_{cc}=X_{w}*x_{ex}\), where \(*\) is the cross-correlation operator.[17] We then reduce the number of peaks and the range of feature scales within the experimental acoustic measurements by computing the envelopes \(X_{e}\)
\[X_{e}=|H(X_{cc})(t)|, \tag{1}\]
where \(H(\cdot)(t)\) denotes the Hilbert transform. The input signal \(X\) to the CNN model is then created by normalizing the envelopes based on the standard deviation for each Rx
\[X=\frac{\sqrt{N_{t}}(X_{e}-\bar{X}_{e})}{\sqrt{\sum(X_{e}-\bar{X}_{e})^{2}}}, \tag{2}\]
where the bar \(\bar{X}_{e}\) indicates the mean value of \(X_{e}\) over time for a single Rx value.
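The preprocessing chain of Eqs. (1)-(2) is straightforward to implement. The following Python/NumPy sketch is illustrative only (the function and variable names are our own, and the excitation signal `x_ex` is assumed to be known); it is not taken from the authors' code.

```python
import numpy as np
from scipy.signal import correlate, hilbert

def preprocess(X_w, x_ex):
    """Raw waveforms X_w (N_t x N_Rx) -> normalized envelopes X."""
    N_t, N_Rx = X_w.shape
    X = np.empty_like(X_w, dtype=float)
    for r in range(N_Rx):
        x_cc = correlate(X_w[:, r], x_ex, mode="same")   # X_cc = X_w * x_ex
        x_e = np.abs(hilbert(x_cc))                      # envelope, Eq. (1)
        x_c = x_e - x_e.mean()                           # remove per-Rx mean
        X[:, r] = np.sqrt(N_t) * x_c / np.sqrt((x_c**2).sum())  # Eq. (2)
    return X
```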
Figure 3: Preprocessing and machine learning steps for hybrid measurements from experiment and simulation.
To preprocess the temperature profiles, we need to correct for errors between the temperatures from experiment and simulation. These errors are typically due to differences in HE material properties arising from the casting process, inconsistent input power, non-axisymmetric components, defects, or physics in the experiment, and electrical noise in the heaters or thermocouples. Figure 3(right) shows an example of experimental (solid) and simulated (dashed) temperature profiles at the wall (blue) and HE center (orange). To account for differences, we correct the simulated temperature profiles based on the experimental thermocouple temperatures at the boundaries. We first calculate parameters to shift and scale the initial uncorrected simulation temperatures \(T^{\prime}(r,s)\) at position \(r\) and experiment step \(s\) to match the experimental temperatures at the container boundaries (\(r=0\), \(R\)). To simplify the formulae, we adopt a subscript \(r\) notation, which indicates that a quantity depending on \(r\) is evaluated at a boundary, e.g. \(T_{r}(s)\) where \(r=0\) or \(R\) indicates the center or wall boundaries, respectively. The corrected simulated temperatures \(T_{r}(s)\) at the boundaries can be calculated as
\[T_{r}(s)=a_{r}\cdot\{T_{r}^{\prime}(c\cdot[s-d])-b_{r}\}. \tag{3}\]
Here, \(a_{r}\) and \(b_{r}\) are temperature scale and shift coefficients and \(c\) and \(d\) are time scale and shift coefficients; the temperature coefficients are linearly interpolated between the boundary values at \(r=0\) and \(R\). We group the scale and shift coefficients at the boundaries into a single set \(\theta=\{a_{0},b_{0},a_{R},b_{R},c,d\}\), and the optimal \(\theta^{*}\) is computed by minimizing the mean-squared error over experiment steps \(s\) between the corrected simulated temperatures and the experimental temperatures at the boundaries,
\[\theta^{*}=\underset{\theta}{\operatorname{argmin}}\sum_{r=0,R}\left\|T_{r}- \widehat{T}_{r}\right\|^{2}. \tag{4}\]
Here, \(\widehat{T}_{r}\) denotes the experimental boundary temperatures. Finally, we will investigate the effect of the temperature resolution, i.e. the number \(N_{pts}\) of radial points at which the CNN estimates the temperature. To train the CNN using different values of \(N_{pts}\), we can simply interpolate the radial temperature profiles from COMSOL at \(N_{pts}\) locations in \(0\leq r\leq R\).
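Equations (3)-(4) amount to a six-parameter least-squares fit. A minimal SciPy sketch follows (our own illustration; the Nelder-Mead solver and the identity-transform initial guess are assumptions, not stated in the text):

```python
import numpy as np
from scipy.optimize import minimize

def fit_correction(s, Tp0, TpR, Texp0, TexpR):
    """Fit theta = (a0, b0, aR, bR, c, d) of Eqs. (3)-(4).
    s: experiment steps; Tp0/TpR: uncorrected simulated boundary
    temperatures T'_r(s); Texp0/TexpR: thermocouple temperatures."""
    def corrected(Tp, a, b, c, d):
        return a * (np.interp(c * (s - d), s, Tp) - b)   # Eq. (3)

    def loss(theta):
        a0, b0, aR, bR, c, d = theta
        return (np.sum((corrected(Tp0, a0, b0, c, d) - Texp0)**2)
                + np.sum((corrected(TpR, aR, bR, c, d) - TexpR)**2))

    theta0 = np.array([1.0, 0.0, 1.0, 0.0, 1.0, 0.0])    # identity transform
    return minimize(loss, theta0, method="Nelder-Mead").x
```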
As a result of the hybrid experimental/simulated data process, we obtain input data \(X\) and output temperature profile data \(T\) that can be used to train and test the CNN. Figure 3(bottom) shows the CNN architecture, which consists of a series of CNN blocks, each comprising a 2D convolution (Conv) layer, a rectified linear unit (ReLU) activation layer, and a pooling (MaxPool) layer. At each Conv block, the input data (here, acoustic time-series data from multiple receivers) is convolved with a series of convolutional filters (see reference for detailed formulae [18]). The intent is for the filters to identify patterns within each acoustic signal and between neighboring signals and then transform the input signals to accentuate useful signal features and suppress signal noise. Next, the convolved data is passed to the ReLU layer, which introduces nonlinearities into the model that increase learning speed and performance [19, 20]. The data then passes through a MaxPool layer, which returns a downsampled summary of its input, reducing the CNN model complexity and the model sensitivity to slight shifts in the input data. We implement three CNN blocks, where the Conv layers consist of 8*2\({}^{(l-1)}\) filters with dimension 16\(\times\)2 for each layer \(l=1\), 2, and 3. By utilizing multiple CNN blocks in series, it is possible to reduce complex acoustic signals to one or more extracted features that represent the critical information conveyed by the acoustic signal. After the final CNN block, we flatten the signal to a 1D array, apply a Dropout layer to ensure that the network does not rely too heavily on any one neuron during training, and then use a dense Output layer that applies a linear transformation between the flattened features and the estimated temperatures. This produces an \(N_{pts}\times\)1 output \(T\) representing temperatures at each of the radial points.
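A minimal Keras sketch of this architecture is given below (our own illustration; the pooling size, dropout rate, and padding are not specified in the text and are assumptions here):

```python
from tensorflow.keras import layers, models

def build_cnn(N_t, N_Rx, N_pts, dropout=0.5):
    """Three Conv/ReLU/MaxPool blocks, then Flatten -> Dropout -> linear Dense."""
    model = models.Sequential([layers.Input(shape=(N_t, N_Rx, 1))])
    for l in (1, 2, 3):
        model.add(layers.Conv2D(8 * 2**(l - 1), kernel_size=(16, 2),
                                padding="same", activation="relu"))
        model.add(layers.MaxPooling2D(pool_size=(2, 1)))  # pool along time only
    model.add(layers.Flatten())
    model.add(layers.Dropout(dropout))
    model.add(layers.Dense(N_pts))   # linear output: temperature at N_pts radii
    model.compile(optimizer="adam", loss="mse")
    return model
```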
Because of small differences in the transducer geometry, material properties, positions, and adhesive layer dimensions and material properties, there are variations in the transfer functions between pairs of transducers. Our goal is to develop a temperature measurement method that is robust to these differences, as well as differences in the container and HE. Thus, we divide the data into sets, where each set consists of the measurements from a single transmitter to all receivers for all measurements in a single heating experiment. As a result, the combination of Tx/Rx transmission functions is unique between data sets. We then group the data sets randomly into 10 folds to test using k-folds cross-validation, wherein we train the model on all but one fold and test on the excluded fold, for each combination of training/testing folds.
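This grouping maps naturally onto a grouped k-fold split. The sketch below is our own; scikit-learn's `GroupKFold` is one way to enforce that no (experiment, Tx) data set spans both training and testing, and the epoch count is an assumption:

```python
import numpy as np
from sklearn.model_selection import GroupKFold

def cross_validate(X, y, groups, build_model, n_folds=10):
    """X: (n_samples, N_t, N_Rx, 1) inputs; y: (n_samples, N_pts) profiles;
    groups: one id per (heating experiment, Tx) data set."""
    rmses = []
    for train, test in GroupKFold(n_splits=n_folds).split(X, y, groups):
        model = build_model()
        model.fit(X[train], y[train], epochs=100, verbose=0)
        err = model.predict(X[test], verbose=0) - y[test]
        rmses.append(np.sqrt(np.mean(err**2)))
    return np.array(rmses)
```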
## 4 Results
We perform the cross-validation procedure, training on all-but-one fold and estimating the temperature distributions on the remaining fold, for each combination of folds. Figure 4(a) shows some example radial temperature profiles from a single test set, i.e. the acoustic measurements from a single Tx over the course of a single heating experiment. Here, we have selected the results using \(N_{Rx}=3\) opposing receivers and a spatial resolution of \(N_{pts}=25\) radial points between \(r=0\) and \(r=R\). We plot the radial temperature distributions at several experiment steps \(s\) as the HE was heated from approximately 20 \({}^{\circ}\)C (\(s_{0}\), dark blue) until detonation/deflagration (\(s_{max}\), red), where the solid and dashed lines indicate the temperature profiles estimated by the CNN and the "true" temperature profiles simulated in COMSOL, respectively. We observe small errors between the estimated and true temperature profiles, which are likely due to differences between the axisymmetric simulated temperatures and the experimental temperatures, as well as differences in the transducer-wall coupling between Tx-Rx data sets used in training versus testing. Despite the small errors, we observe that the CNN is able to closely estimate the temperature trends, i.e. where the temperature slope is steep/flat. This is an important finding because it indicates where the solid-liquid HE transition occurs, which provides critical information about the status of the HE.
Figure 4: CNN testing results. (a) Example estimated (solid) and true (dashed) temperature profiles at several times throughout one experiment. (b) Mean temperature error vs number of radial points \(N_{pts}\) at which temperature was estimated for different numbers of receivers \(N_{Rx}\).
In addition to a qualitative comparison, we evaluated the effect of the number of radial points \(N_{pts}\) and number of receivers \(N_{Rx}\) on the Root-Mean-Squared Error (RMSE) between the true temperatures and those estimated by the CNN. We tested combinations of (\(N_{pts}\), \(N_{Rx}\)) values in the ranges \(5\leq N_{pts}\leq 50\) in steps of \(5\) and \(1\leq N_{Rx}\leq 9\) for odd numbers \(N_{Rx}\) of transducers opposing Tx. For each (\(N_{pts}\), \(N_{Rx}\)) combination, we retrain the model 10 times, once for each combination of training/testing folds, resulting in 10 RMSE values per (\(N_{pts}\), \(N_{Rx}\)) combination. Figure 4(b) shows the mean (lines) and standard deviation (shaded) RMSE value as a function of \(N_{pts}\) for several values of \(N_{Rx}\). We observe that RMSE decreases from \(17^{\circ}\)C to \(15^{\circ}\)C on average when the number \(N_{Rx}\) of Rx increases from one to three. The decrease in RMSE is likely due to the fact that using measurements from additional Rx increases the amount of pertinent information provided to the CNN. The RMSE further decreases to \(14^{\circ}\)C by further increasing \(N_{Rx}\) to \(5\), for \(N_{pts}\leq 10\), but other numbers of points result in an increase in RMSE relative to the models using the same \(N_{pts}\) and \(N_{Rx}=1\) or \(3\). Subsequent increases to \(N_{Rx}=7\) and \(9\) were found to increase the RMSE on average to values of \(24^{\circ}\)C and \(39^{\circ}\)C, respectively. Here, increasing \(N_{Rx}\) increases the available information at the cost of increasing the number of trainable CNN parameters. CNN training uses a gradient-based adaptive momentum (Adam) optimization algorithm. In general, the training process is a non-convex optimization problem, which means that the Adam solver will find a locally-optimal combination of CNN parameters, but it may not find the combination that is globally-optimal. Increasing the number of CNN parameters increases the dimensionality of the optimization problem, which decreases the likelihood that the globally-optimal set of parameters will be found. Thus, by increasing \(N_{Rx}\) we balance the benefit of introducing additional information about the system with the increased dimensionality. For \(N_{Rx}<5\), the additional information is more beneficial than the increase in dimensionality, while for \(N_{Rx}>5\), the increase in dimensionality is more detrimental. Additionally, for \(N_{Rx}>5\), the additional information comes from transducers that are further from the opposing Rx. As a result, there is more interference between guided and bulk waves, and the amplitude of the first guided wave is relatively high, as shown in Figure 1(c).
We note that the training and testing data was all measured or simulated on containers with nominally identical geometry and filled with nominally identical HE. The time required for the bursts to propagate between a Tx/Rx pair depends on the sound speed, which is temperature dependent, and the distance between the Tx/Rx. Thus, it is unlikely that the CNN, as presented in this manuscript, would be successful at estimating the temperature profile in a container with a significantly different shape or size. It may be possible to account for the size of the container by stretching/contracting the measured waveforms, but this is left for future work. Additionally, the dependence of the sound speed on temperature will differ between HE materials, which would introduce errors in the exact temperature values. However, most HE materials follow similar sound speed-temperature trends, i.e. decreasing sound speed with increasing temperature. Thus, it is likely that the CNN could measure the temperature profile trend, which could help identify if there was a solid-liquid transition (orange and yellow lines in Figure 4(a)), single phase with a temperature gradient (light blue line in Figure 4(a)), or constant temperature (red and dark blue lines in Figure 4(a)). Again, confirming and quantifying the performance of the CNN with different HE materials in training/testing is left for future work.
## 5 Conclusion
We present a novel technique for measuring internal temperature profiles by combining time-domain acoustic measurements and CNN processing. In contrast with existing temperature measurement methods, our technique measures the interior temperature profile noninvasively, instead of requiring transducers to be placed inside or penetrate the container or being limited to measuring exterior surface temperatures. The technique is demonstrated on HE-filled containers as they are heated externally from ambient temperature until detonation/deflagration. Here, we introduce a hybrid measurement process, where we collect acoustic measurements experimentally, and measure the temperature profiles via finite element simulation. We then introduce a CNN that estimates the temperature at a specified number of points within the container based on the acoustic signals from one or more acoustic receivers. We observe that the CNN accurately estimates the temperatures, and it captures the temperature trends, which can provide critical information about phase, thermal gradient, etc. in the HE. Additionally, we find that increasing the number of receivers used to measure the acoustic burst has competing effects of providing additional information about the temperature profile at the cost of increasing the model complexity. We observe the lowest RMSE of 15\({}^{\circ}\)C between the true and estimated temperatures by using three opposing receivers. In this study, training and testing data consisted of experiments and simulations on containers with nominally identical dimensions and materials. In the future, we will extend the data set to a range of dimensions and HE materials. We anticipate that this will require normalizing data to account for changes in shape. Additionally, it will increase the error in the absolute temperatures measured between different HE materials but will likely still provide crucial information such as whether or not there is a liquid-solid HE interface (is the HE partially melted?).
Thus, this work presents the first demonstration of using acoustics to measure internal thermal profiles in high-attenuation materials, through the material container. This technique has implications in a variety of applications including, assessing the safety status of HE materials, monitoring metal or plastic casting, determining the energy density in thermal storage capsules, and identifying abnormal battery operation, to name a few.
This work was supported by the U.S. Department of Energy through the Los Alamos National Laboratory. Los Alamos National Laboratory is operated by Triad National Security, LLC, for the National Nuclear Security Administration of U.S. Department of Energy (Contract No. 89233218CNA000001).
|
2301.04343 | Simulations of Precessing Jets and the Formation of X-shaped Radio
Galaxies | Jet precession is sometimes invoked to explain asymmetries in radio galaxy
(RG) jets and "X/S/Z-shape" radio galaxies, caused by the presence of a binary
black hole companion to the source active galactic nucleus (AGN) or by
accretion instabilities. We present a series of simulations of radio galaxy jet
precession to examine how these sources would evolve over time, including a
passive distribution of cosmic ray electrons (CRe) so we can model radio
synchrotron emissions and create synthetic radio maps of the sources. We find
that a single source viewed from different angles can result in differing RG
morphological classifications, confusing physical implications of these
classifications. Additionally, the jet trajectories can become unstable due to
their own self-interactions and lead to "reorientation events" that may look
like the effects of external dynamics such as shocks, winds, or cold fronts in
the medium. Finally, something akin to an "Odd Radio Circle" may be observed in
the case of viewing the radio remnant of such a precessing source from a line
of sight near the precession axis. | Chris Nolting, Jay Ball, Tri M. Nguyen | 2023-01-11T07:41:11Z | http://arxiv.org/abs/2301.04343v2 | # Simulations of Precessing Jets and the Formation of X-shaped Radio Galaxies
###### Abstract
Jet precession is sometimes invoked to explain asymmetries in radio galaxy (RG) jets and "X/S/Z-shape" radio galaxies, caused by the presence of a binary black hole companion to the source active galactic nucleus (AGN) or by accretion instabilities. We present a series of simulations of radio galaxy jet precession to examine how these sources would evolve over time, including a passive distribution of cosmic ray electrons (CRe) so we can model radio synchrotron emissions and create synthetic radio maps of the sources. We find that a single source viewed from different angles can result in differing RG morphological classifications, confusing physical implications of these classifications. Additionally, the jet trajectories can become unstable due to their own self-interactions and lead to "reorientation events" that may look like the effects of external dynamics such as shocks, winds, or cold fronts in the medium. Finally, something akin to an "Odd Radio Circle" may be observed in the case of viewing the radio remnant of such a precessing source from a line of sight near the precession axis.
## 1 Introduction
Radio galaxies consist of an active galactic nucleus (AGN) and a pair of anti-parallel jets of high temperature, low density, supersonic plasma that can propagate to scales larger than the host galaxy. In an idealized, uniform environment these jets would remain symmetric; in reality, and especially in denser environments like galaxy clusters, RGs are often distorted, with broken symmetries caused by their interactions with their immediate surroundings. These interactions can be related to motion of the host galaxy relative to a cluster or group center due to an orbit or due to bulk motions within the local medium itself (Begelman et al., 1979; Jones et al., 2017). Another way for these symmetries to be broken is through nonuniform activity in the host AGN, either through variability in the jet properties or changes in its direction.
One class of RG asymmetries that is common and being studied more frequently is the so called "X-shaped" RG (Leahy and Parma, 1992; Bhukta et al., 2022). These RGs have two pairs of radio lobes that are misaligned from each other. Sometimes one pair is designated as the "main lobes" and the other pair as "wings" or "side-lobes," with the designation based on the detection of active radio jets or radio hot spots in the main lobes, or from surface brightness or spectral aging considerations.
There have been multiple suggested theories for the formation of such structures. Some suggest that RG wings can form through the backflow of radio plasma that rebounds laterally off the hot gaseous halo of the host galaxy (Leahy and Williams, 1984; Cotton et al., 2020; Ignesti et al., 2020). Others invoke a "spin-flip" in the AGN due to the merger of a binary supermassive black hole (BSBH) system, which causes a sudden reorientation of the spin axis of the active nucleus (Zier and Biermann, 2001; Gopal-Krishna et al., 2012). It may also be possible for both nuclei in a BSBH system to be accreting, in which case two sets of jets may exist. These may both be active during the same period or undergo some cadence of activity depending on their respective accretion histories, creating multiple sets of lobes (Gopal-Krishna et al., 2012).
Lastly, and most relevant to what we will discuss here, is the idea that the jet axis of the RG may change or precess over time. This precession may lead to the formation of X-shaped RGs if the precession angle is large so that the current axis is well separated from the side lobes. Smaller angles may lead to "S-shaped" or "Z-shaped" RGs, which are designations sometimes given to RGs that show curvature or sharp turns in their jets or lobes (Riley, 1972).
There are many examples of S-shaped radio sources that could be fit by a projected helical precessing jet such as 4C35.06 in Abell 407 (Biju et al., 2014), J1328+2752 (Nandi et al., 2021), or Hydra A (Taylor et al., 1990). On smaller scales, SS433 is a striking example of a precessing X-ray binary jet system (Abell and Margon, 1979; Monceau-Baroux et al., 2014). The precession of AGN jets on large scales has been studied extensively as a method of creating X-shaped RGs (Ekers et al., 1978; Rubinur et al., 2017). In particular, (magneto)hydrodynamical simulations have proven useful in
characterizing the evolution and morphology of such jets (Smith and Donohoe, 2019; Horton et al., 2020; Giri et al., 2022).
Precession of an RG jet may occur for a number of reasons. Again invoking a BSBH system, if either nucleus is substantially accreting and hosts an active radio jet, then the jet may precess around the orbital axis of the BSBH system, assuming there is a misalignment between the orientation of the binary orbit and the black hole spin axis (Begelman et al., 1980). Alternatively, precession may be related to accretion instabilities such as Lense-Thirring or the related Bardeen-Petterson effect (Bardeen and Petterson, 1975; Nandi et al., 2021). These instabilities may cause the precession of the accretion disk if its angular momentum is misaligned with the angular momentum of the central spinning black hole. If the disk orientation controls the jet axis, then this would induce a precession in the jet axis. This precession mechanism is seen in some general relativistic accretion disk simulations (e.g., Liska et al., 2018).
Regardless of the mechanism for the precession of the jet, we present the results of simulations in which we assume jets do precess, and discuss the consequences of that precession for the radio observables from these simulated RGs. One of our main goals is to use these simulations to invert this process and be better able to understand the dynamical state of observed radio jets that appear to be precessing.
The remainder of this paper is organized as follows: Section 2 outlines the geometry of the interaction we are examining and the underlying physics. Section 3 describes the simulations, including methods and specific parameters for each simulation. We describe the results of the simulations in section 4 and summarize the major findings in section 5.
## 2 Jet Physics
### Jet Precession
The basic geometry of the type of interactions we explore in this paper are illustrated in Figure 1. The jet is launched from a cylindrical region in which the cylinder axis is tilted by \(\psi\) degrees from the z-axis of the simulation, which we will refer to as the precession angle. The other angles in Figure 1 refer to the viewing angles used in the synthetic radio observations we present in Section 4. \(\theta\) is the spherical coordinate polar angle, also measured from the z-axis. Observations are taken at four \(\theta\) angles: \(90^{\circ}\), \(45^{\circ}\), \(30^{\circ}\), and \(0^{\circ}\). \(\phi\) is the azimuthal angle, and observations are taken at four \(\phi\) angles from \(0^{\circ}\leq\phi\leq 135^{\circ}\) in \(\Delta\phi=45^{\circ}\) intervals.
The jet will precess through a full \(2\pi\) radians within one precession period, \(\tau\). This can also be expressed as an angular velocity of precession, \(\omega_{prec}=2\pi/\tau\).
### Jet Propagation & Bending
As the jet precesses, it will bend due to its interaction with the medium. The character of this bending will be determined by a combination of the momentum in the jet and the properties of the precession of its launching angle.
The forward propagation of a jet can be described by a momentum balance in the frame of the head of the jet. As derived in Jones et al. (2017), we find that in the absence of relative motion between the surrounding medium and the jet source, the propagation of the head
Figure 1: Diagram of the jet precession. The precession angle \(\psi\) is the angle between the jet axis and the precession axis, and the diagram uses a fiducial value of \(\psi=30^{\circ}\). The polar viewing angle, \(\theta\), is the angle between the precession axis and the viewing angle, with angles used in Figure 3 shown. Not shown are the azimuthal viewing angles which are rotated around the precession axis in increments of \(\Delta\phi=45^{\circ}\). A cyan arrow indicates the direction that the jet axis precesses.
of the jet will follow:
\[M_{h}\approx M_{j}\sqrt{\frac{A_{j}}{A_{h}}}\sqrt{\frac{P_{j}}{P_{a}}}, \tag{1}\]
where \(M_{h}=v_{h}/c_{s,a}\) is the Mach number of the head of the jet propagating at velocity \(v_{h}\) relative to the ambient medium that has a sound speed of \(c_{s,a}=\sqrt{\gamma P_{a}/\rho_{a}}\). \(M_{j}=v_{j}/c_{s,j}\) is the Mach number of the material within the jet that has velocity \(v_{j}\) and has an internal sound speed of \(c_{s,j}=\sqrt{\gamma P_{j}/\rho_{j}}\). \(P_{(j/a)}\) and \(\rho_{(j/a)}\) are the thermal pressure and density of the (jet/ambient) plasma, respectively, and we use an adiabatic index of \(\gamma=5/3\). \(\sqrt{A_{j}/A_{h}}\) is a ratio of "effective areas" over which the momentum balance is taken for the jet and ambient plasma. In previous simulations of steady, non-precessing jets, we have empirically found \(\sqrt{A_{j}/A_{h}}\sim 1/2\)(Jones et al., 2017). This ratio departs from unity mainly due to the existence of backflow of jet plasma away from the region of the jet head. In the case where the jet is in pressure equilibrium with its surroundings, i.e. \(P_{j}=P_{a}\), Equation 1 can be expressed as:
\[v_{h}\approx v_{j}\sqrt{\frac{A_{j}}{A_{h}}}\sqrt{\frac{\rho_{j}}{\rho_{a}}}, \tag{2}\]
which is a useful metric when the jet head crosses a density discontinuity.
When there is some relative motion between the jet and its environment, the propagation will be affected. If that motion has a component transverse to the jet propagation direction, then the jet may be deflected or bent by the ram pressure it experiences. The classic "cartoon" formula for the distance, \(\ell_{b}\), over which the jet is bent backwards by a "wind" is (Begelman et al., 1979; Jones et al., 2017):
\[\ell_{b}\sim 2r_{j}\frac{\rho_{j}v_{j}^{2}}{\rho_{a}v_{a}^{2}}, \tag{3}\]
where \(r_{j}\) is the cross-sectional radius of the jet and \(v_{a}\) is the ambient velocity that provides the ram pressure to bend the jet. Equation 3 assumes a constant \(v_{a}\), but if we assume this "wind" comes from the precession of the jet through a uniform medium at rest, then the relevant ambient velocity is defined by the angular velocity of the precession of the jet source. \(v_{a}(y)=\omega_{prec}\cdot y\sin\psi=2\pi y\sin\psi/\tau\), where \(y\) is the distance along the jet direction from the jet source to the point of interest. Since the ambient velocity varies along the jet, we need to revisit the arguments that lead to equation 3. Usually, this begins by assuming that the ram pressure acts to redirect the jet, without compressing or accelerating the jet plasma. This leads to an incompressible Euler equation:
\[\frac{Dv_{j}}{Dt}=\frac{\partial v_{j}}{\partial t}+(v_{j}\cdot\nabla)v_{j}=-\frac{1}{\rho_{j}}\frac{\partial P}{\partial x}\approx-\frac{1}{\rho_{j}}\frac{\rho_{a}v_{a}^{2}}{2r_{j}}, \tag{4}\]
where we assume that the jet bending leads to a steady state (\(\partial v_{j}/\partial t=0\)) and that the ram pressure, \(P=\rho_{a}v_{a}^{2}\), acts across the diameter of the jet, \(2r_{j}\). If we look at the component of the jet velocity perpendicular to the jet launching axis (\(v_{j,\perp}\)) and integrate along the jet length until the jet has been bent into the direction of the "wind," i.e. \(v_{j,\perp}=v_{j}\):
\[\int_{0}^{v_{j}}dv=\frac{1}{2r_{j}}\frac{\rho_{a}}{\rho_{j}v_{j}}\left(\frac{ 2\pi\sin\psi}{\tau}\right)^{2}\int_{0}^{\ell_{b}}y^{2}dy, \tag{5}\]
\[\ell_{b}\sim\left(r_{j}\frac{3\rho_{j}v_{j}^{2}\tau^{2}}{2\pi^{2}\rho_{a}\sin^ {2}\psi}\right)^{1/3}, \tag{6}\]
showing that the bending length of the jet will be a function of the jet properties, the precession angle and period, and the density of the medium.
Now, using Equation 2, we can compare this length to the distance that the head of the jet would travel before the precession has significantly changed its launching direction (ignoring for now how the bending affects this propagation). For this duration, we will take the fraction of the total period that it takes the core of the jet to traverse one jet diameter. This will depend on the shape and size of the jet launching cylinder:
\[\Delta t=\frac{2r_{j}\tau}{2\pi\Delta y\sin\psi}, \tag{7}\]
where \(\Delta y\) is the half length of the cylinder (from the center to one end). The ratio \(\ell_{b}/(v_{h}\Delta t)\) is then a good measure of whether or not the jet will create a well defined curved jet, or if the precession will be too extreme and cause the jet to break up or be unable to propagate to large scales.
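For reference, the quantities entering this ratio can be evaluated with a short script. The sketch below is our own illustration (Python) of Equations (2), (6), and (7); the area ratio, density ratio, and \(\Delta y\) are supplied by the user:

```python
import numpy as np

def head_speed(v_j, rho_ratio, area_ratio=0.25):
    """Eq. (2): jet head speed for a pressure-matched jet.
    rho_ratio = rho_j/rho_a; sqrt(area_ratio) ~ 1/2 empirically."""
    return v_j * np.sqrt(area_ratio * rho_ratio)

def bending_length(r_j, rho_ratio, v_j, tau, psi):
    """Eq. (6): length over which precession-driven ram pressure bends the jet."""
    return (r_j * 3.0 * rho_ratio * v_j**2 * tau**2
            / (2.0 * np.pi**2 * np.sin(psi)**2))**(1.0 / 3.0)

def sweep_time(r_j, dy, tau, psi):
    """Eq. (7): time for the launch axis to sweep one jet diameter."""
    return 2.0 * r_j * tau / (2.0 * np.pi * dy * np.sin(psi))
```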
## 3 Numerical Methods
The simulations reported here used the Eulerian WOMBAT ideal 3D MHD code (see, e.g. Mendygral et al. (2012); Nolting et al. (2019)) on a uniform, Cartesian grid employing an adiabatic equation of state with \(\gamma=5/3\). The simulations utilized the \(2^{nd}\) order TVD algorithm with constrained transport (CT) magnetic field evolution as in Ryu et al. (1998). Specific simulation setups are introduced in SS3.1 and listed in Table 1.
Along with the fluid, we track a population of passive cosmic ray electrons (CRe) to allow for the calculation of a synchrotron emissivity anywhere within
the simulation volume1. The CRe momentum distribution, \(f(p)\), was tracked using the conservative, Eulerian "coarse grained momentum volume transport" CGMV algorithm in Jones & Kang (2005). \(f(p)\) spanned the range \(10\lesssim p/(m_{e}c)\approx\Gamma_{e}\lesssim 1.7\times 10^{5}\) (so, energies 5 MeV \(\lesssim E_{CRe}\approx\Gamma_{e}m_{e}c^{2}\lesssim 90\) GeV) with uniform logarithmic momentum bins, \(1\leq k\leq 8\). Inside a given momentum bin, \(k\), \(f(p)\propto p^{-q_{k}}\), with \(q_{k}\) being bin dependent and evolving in time and space. \(\Gamma_{e}\) here represents CRe Lorentz factors.
Footnote 1: Except for a negligible ICM population included to avoid numerical singularities in the CRe transport algorithm, all CRe were injected onto the computational domain via the jet launch cylinder.
At the jet cylinder, the CRe momentum distribution had a power law form with index \(q=q_{0}=4.2\) across the full momentum range simulated. This was chosen because this translates to a radio synchrotron spectral index of \(\alpha=\alpha_{0}=0.6\) (\(I_{\nu}\propto\nu^{-\alpha}\)), which is a good match for many RGs near their sources. The synchrotron emission and spectra reported here are calculated numerically using \(f(p)\) over the full momentum range specified above using the standard synchrotron emission kernel for isotropic electrons in a local magnetic field (e.g., Blumenthal & Gould, 1970). In our analysis we calculated synthetic synchrotron emission at frequencies 300 MHz \(\leq\nu\leq 600\)MHz. This emission, as it turns out, comes predominantly from regions with magnetic field strengths \(\sim\) a few \(\mu\)G, so mostly reflect CRe energies \(\gtrsim\) 1 GeV (\(\Gamma_{e}\sim 10^{3}\)-\(10^{4}\)) (well inside our distribution).
We included adiabatic, as well as radiative (synchrotron and inverse Compton) CRe energy changes outside of shocks, along with test-particle diffusive shock (re)acceleration (DSA) at any shocks encountered. We did not include \(2^{nd}\) order turbulent CRe reacceleration or CRe energy losses from Coulomb collisions with ambient plasma. The former depends on uncertain kinetic scale turbulence behaviors beyond the scope of this study, while the latter is most relevant for CRe with energies well below those responsible for the radio synchrotron emission computed in this work (Sarazin, 1999). CRe radiative losses combine synchrotron with inverse Compton (iC) scattered CMB radiation. The simulations reported here assumed a low redshift, \(z=0.02\). The resulting radiative lifetime can be written
\[\tau_{rad}\approx 215\frac{1}{\Gamma_{e4}\left[1+B_{3.4}^{2}\right]}\ \mathrm{Myr}, \tag{8}\]
where \(\Gamma_{e4}=\Gamma_{e}/10^{4}\) and \(B_{3.4}=B/(3.4\mu\mathrm{G})\). The first term in the denominator on the RHS reflects inverse Compton (iC) losses at z = 0.02, while the second represents synchrotron losses. Thus, we can see that for \(\Gamma_{e}\sim 10^{4}\), of primary interest for the radio emission in this work, \(\tau_{rad}\sim 200\) Myr, and that iC losses are predominant.
DSA of the CRe was implemented at shock passage by setting \(q_{k,out}=\min(q_{k,in},3\sigma/(\sigma-1))\) immediately postshock, where \(\sigma\) is the code-evaluated compression ratio of the shock. This simple treatment is appropriate in the CRe energy range covered, since likely DSA acceleration times to those energies are much less than a typical time step in the simulations (\(\Delta t\gtrsim 10^{4}\) yr). Since our CRe have no dynamical impact, we treat the total CRe number density, \(n_{CRe}\), as arbitrary. Consequently, while we compute meaningful synchrotron brightness and spectral distributions from our simulations, synchrotron intensity normalizations are arbitrary.
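The radiative lifetime of Equation 8 and the DSA spectral update are simple enough to restate in code. The following sketch is our own illustration (the constant 215 Myr encodes the \(z=0.02\) CMB energy density, as in the text):

```python
import numpy as np

def tau_rad_myr(gamma_e, B_muG):
    """Eq. (8): combined iC + synchrotron CRe lifetime in Myr (assumes z = 0.02)."""
    return 215.0 / ((gamma_e / 1.0e4) * (1.0 + (B_muG / 3.4)**2))

def dsa_slope(q_in, sigma):
    """Test-particle DSA at shock passage: q_out = min(q_in, 3*sigma/(sigma - 1))."""
    return min(q_in, 3.0 * sigma / (sigma - 1.0))
```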
### Simulation Setups
In this work we present a suite of simulations that explore the parameter space of jet precession angles and precession periods. Each simulation consists of a jet with constant density, pressure, and velocity being injected into a medium that is initially homogeneous and unmagnetized. This medium is an oversimplification of the intracluster medium (ICM), but it is useful for easier interpretation of the observable signatures of the precession that we wish to study. The dynamics of the simulations we present here are scale free; however, a scale is set when introducing CRe radiative timescales (e.g., Equation 8). Because of this, we will quote the scale of the system that is consistent with the chosen radiation timescales.
Inside the simulation volume, a cylindrical region was updated at each time step with the jet properties described below. This cylinder had a radius of \(r_{j}=3\) kpc and length \(l_{j}=4\) kpc. The jet cylinder was surrounded by a 2 zone coaxial collar, within which the state transitioned from the jet properties to the local ambient conditions. Inside, the jet density and pressure were kept constant, and the velocity was ramped up along the cylinder's length from its midpoint (where the velocity reverses). A toroidal magnetic field was also maintained within the jet cylinder, with \(B_{\phi}=B_{j}(r/r_{j})\hat{\phi}\). In these jets, \(\beta_{p}=8\pi P_{j}/B_{j}^{2}=75\), giving the magnitude of the field at launch to be \(B_{j}\approx 0.54\mu\)G. Due to this relatively high \(\beta_{p}\), the fields are initially dynamically sub-dominant to the gas pressure, which remains true throughout the simulation duration in all cases.
To create the precession of the jet axis, the jet is initialized \(\psi\) degrees away from the precession axis, defining the precession angle. Every time step, the jet cylinder is rotated azimuthally around the precession axis by \(\Delta\phi=2\pi\Delta t/\tau\), where \(\Delta t\) is the length of the time step and \(\tau\) is the jet precession period.
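In code, the instantaneous orientation of the launch cylinder can be tracked with a single unit vector. A minimal sketch (our own, not the WOMBAT implementation) is:

```python
import numpy as np

def jet_axis(t, tau, psi):
    """Unit vector of the jet launch axis at time t: tilted psi from the
    precession (z) axis and rotated azimuthally by phi = 2*pi*t/tau."""
    phi = 2.0 * np.pi * t / tau
    return np.array([np.sin(psi) * np.cos(phi),
                     np.sin(psi) * np.sin(phi),
                     np.cos(psi)])
```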
Each simulation had a uniform initial density of \(\rho_{ICM}=2.17\times 10^{-28}\) g cm\({}^{-3}\), and pressure of \(P_{ICM}=8.8\times 10^{-13}\) dynes cm\({}^{-2}\), and was initially unmagnetized and at rest with respect to the jet source. This gave the medium an initial temperature of 2.5 keV.
Table 1 gives a list of parameters for the 12 simulations, including domain size, precession period, and precession angle. Three precession periods and four precession angles were included in this study. Simulation names include information about the precession period (number following "P" in the name) and the precession angle (number following "A" in the name) for ease of understanding when referring to individual simulations.
### Synthetic Observations
The radio images we report here are calculated from the CRe momentum distribution \(f(p)\) multiplied by the appropriate synchrotron emissivity functions integrated across the full momentum range, as specified above. This emissivity takes into account the strength and orientation of the local magnetic field, as well as the history of the CRe energy changes (radiative, adiabatic, & DSA). The emissivity is calculated as (Ginzburg and Syrovatskii, 1965; Longair, 2011):
\[j_{\perp}(\nu) =\sqrt{\frac{\nu\nu_{B,\perp}}{8}}\frac{e^{2}}{c}\int_{x_{1}}^{x_{ 2}}\frac{n(x)[F(x)+G(x)]}{x^{3/2}}\mathrm{d}x\, \tag{9a}\] \[j_{\parallel}(\nu) =\sqrt{\frac{\nu\nu_{B,\perp}}{8}}\frac{e^{2}}{c}\int_{x_{1}}^{x_ {2}}\frac{n(x)[F(x)-G(x)]}{x^{3/2}}\mathrm{d}x\,\] (9b) \[j(\nu) =j_{\perp}(\nu)+j_{\parallel}(\nu)=\sqrt{\frac{\nu\nu_{B,\perp}} {2}}\frac{e^{2}}{c}\int_{x_{1}}^{x_{2}}\frac{n(x)F(x)}{x^{3/2}}\mathrm{d}x\,\] (9c) \[F(x) =x\int_{x}^{\infty}K_{5/3}(z)\mathrm{d}z\approx 1.78\] \[\times\left(\frac{x}{1-0.4\exp{(-5x)}}\right)^{1/3}\exp{(-x)}\,\] (9d) \[G(x) =xK_{2/3}(x)\approx 1.56613\times\frac{x^{1/3}}{\exp{(x)}+0.427687}\, \tag{9e}\]
where \(F(x)\) and \(G(x)\) are functions that describe the synchrotron spectrum of a single CRe, \(K_{5/3}\) and \(K_{2/3}\) are modified Bessel functions of the second kind, \(j_{\perp}(\nu)\) and \(j_{\parallel}(\nu)\) are the two orthogonal polarizations of the emissivity, \(\nu\) is the frequency of the radiation, \(\nu_{B\perp}=eB_{\perp}/2\pi m_{e}c\) is the electron gyrofrequency, and \(e\) is the elementary charge. In this case, \(B_{\perp}\) is the local magnetic field projected onto the plane of the sky. The integration variable \(x=(2\nu/3\nu_{B\perp}\gamma^{2})\), with \(\gamma\) being the electron Lorentz factor. The integration limit \(x_{1}[x_{2}]\) corresponds to the high[low] energy (high[low] \(\gamma\)) end of the integration range, covering our available range \(10<p/(m_{e}c)<1.7\times 10^{5}\). A change of variables from integrating over \(\gamma\) to integrating over \(x\) introduces the factor of \(x^{-3/2}\) in the integrand and a coefficient of \((-1/2)\sqrt{2\nu/3\nu_{B\perp}}\). We also present these equations in Gaussian units, rather than the SI units used by Longair (2011). Equations (9d) and (9e) include our own approximations to the synchrotron functions used in our calculations. These approximations are based on those given in Rybicki and Lightman (1986), but combine the small \(x\) and large \(x\) approximations to produce a single fit that is accurate to a few percent (Nolting, 2020).
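The fitted forms (9d) and (9e) are easy to check numerically. The following sketch (our own) implements them and compares \(G(x)\) against the exact Bessel-function expression:

```python
import numpy as np
from scipy.special import kv

def F_syn(x):
    """Approximation to x * Integral_x^inf K_{5/3}(z) dz, Eq. (9d)."""
    return 1.78 * (x / (1.0 - 0.4 * np.exp(-5.0 * x)))**(1.0 / 3.0) * np.exp(-x)

def G_syn(x):
    """Approximation to x * K_{2/3}(x), Eq. (9e)."""
    return 1.56613 * x**(1.0 / 3.0) / (np.exp(x) + 0.427687)

x = 1.0
print(G_syn(x), x * kv(2.0 / 3.0, x))   # should agree to within a few percent
```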
Using equations (9a) - (9e) we calculate the total synchrotron emissivity as well as polarized emissivities. Then, we perform radiative transfer integration along a defined line of sight to create images of Stokes I, Q, or U.
A radio spectral index is calculated from any two radio maps at different frequencies with the radio spectral index, \(\alpha\) (e.g., \(f\propto\nu^{\alpha}\)), at each pixel being
\[\alpha=\frac{\log_{10}(I(\nu_{2}))-\log_{10}(I(\nu_{1}))}{\log_{10}(\nu_{2})-\log_{10}(\nu_{1})}. \tag{10}\]
In this work, the two frequencies used to generate spectral index maps were \(\nu_{2}=600\) MHz and \(\nu_{1}=300\) MHz.
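A per-pixel implementation of Equation 10, with the brightness cut used in the figures, might look like the following (our own sketch; the cut value is in the same units as the input maps):

```python
import numpy as np

def spectral_index_map(I1, I2, nu1=300e6, nu2=600e6, cut=1.0):
    """Eq. (10): spectral index between maps I1 (at nu1) and I2 (at nu2).
    Pixels with I1 below `cut` (e.g. 1 microJy/beam) are masked."""
    with np.errstate(divide="ignore", invalid="ignore"):
        alpha = (np.log10(I2) - np.log10(I1)) / (np.log10(nu2) - np.log10(nu1))
    return np.where(I1 >= cut, alpha, np.nan)
```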
\begin{table}
\begin{tabular}{c|c|c|c|c|c} Simulation & Nx & Ny & Nz & Precession & Precession \\ Name & & & & Period & Angle \\ \hline
**P3A10** & 288 & 288 & 624 & 3.2Myr & \(10^{\circ}\) \\
**P3A20** & 288 & 288 & 624 & 3.2Myr & \(20^{\circ}\) \\
**P3A30** & 288 & 288 & 624 & 3.2Myr & \(30^{\circ}\) \\
**P3A45** & 360 & 600 & 600 & 3.2Myr & \(45^{\circ}\) \\
**P32A10** & 288 & 288 & 624 & 31.7Myr & \(10^{\circ}\) \\
**P32A20** & 288 & 288 & 624 & 31.7Myr & \(20^{\circ}\) \\
**P32A30** & 288 & 288 & 448 & 31.7Myr & \(30^{\circ}\) \\
**P32A45** & 360 & 600 & 600 & 31.7Myr & \(45^{\circ}\) \\
**P95A10** & 288 & 288 & 1040 & 95.1Myr & \(10^{\circ}\) \\
**P95A20** & 288 & 288 & 624 & 95.1Myr & \(20^{\circ}\) \\
**P95A30** & 288 & 288 & 448 & 95.1Myr & \(30^{\circ}\) \\
**P95A45** & 360 & 600 & 600 & 95.1Myr & \(45^{\circ}\) \\ \end{tabular}
\end{table}
Table 1: Names, domain size, and precession parameters for each simulation.
Lastly, all radio images presented here are convolved with a Gaussian convolution kernel. The resolution for most images is 2.355 arcsec Full Width Half Max (FWHM), or a Gaussian standard deviation of 1 arcsecond, while one figure is convolved to a lower resolution of 11.775 arcsec FWHM, or a 5 arcsecond Gaussian standard deviation.
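The FWHM-to-standard-deviation conversion quoted above (2.355 = 2\(\sqrt{2\ln 2}\)) can be applied directly when smoothing the maps; a minimal sketch (our own, assuming square pixels of known angular size) is:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def convolve_to_beam(image, fwhm_arcsec, pix_arcsec):
    """Smooth a map with a circular Gaussian beam of the given FWHM."""
    sigma_pix = fwhm_arcsec / (2.0 * np.sqrt(2.0 * np.log(2.0))) / pix_arcsec
    return gaussian_filter(image, sigma=sigma_pix)
```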
## 4 Discussion
### Effects of Precession Period and Precession Angle
To study the effects of precession period and precession angle, we ran a total of 12 simulations varying these parameters. The resulting morphological and radio spectral properties from these simulations are presented in figure 2. We show each simulation at time equal to one precession period as well as at t = 96 Myr for comparison. The labels at the bottom of each panel describe which simulation is shown and the simulation time represented. The simulations in the left two columns have a precession period of 3.2Myr, the 3rd and 4th column have a precession period of 31.7 Myr, and the 5th column has a precession period of 95.1 Myr. When comparing the jets all at the same dynamical stage (after 1 precession period) the reader should examine columns 1, 3, and 5 together. When comparing at the same time, the reader should instead compare columns 2, 4, and 5.
As Figure 2 demonstrates, these systems evolve quite differently in terms of their dynamics. The jets with large precession angles do not extend as far in any one direction and are instead spread over a wider area, with cocoons of larger lateral extent. Similarly, jets with shorter precession periods are also more centrally condensed as they are unable to deposit momentum in a sustained direction long enough to substantially push back the denser ICM plasma.
We can quantify the changes in dynamics using the ratio \(\ell_{b}/(v_{h}\Delta t)\), as described in section 2.2. With our chosen jet injection parameters and equation 2, the propagation rate of the head of the jet through the ambient medium was 3.45 kpc/Myr. Given this, we calculated the ratio \(\ell_{b}/(v_{h}\Delta t)\) in Table 2. By comparing the value of this ratio to the corresponding images in Figure 2, we can see that precessing jets with higher values of this ratio will be more tightly wound and generally have propagated less far from the source. Conversely, precessing jets with lower values of this ratio will be less tightly wound and propagate farther before curving and bending backward. While this ratio does not have a clear critical value that determines when a jet will break up from precession or display a clear s-shape morphology, it is still useful for comparing how different precession parameters lead to differing morphology. In particular, the dependencies on \(\tau\) and \(\psi\) are \(\ell_{b}/(v_{h}\Delta t)\propto(\sin\psi/\tau)^{1/3}\).
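As a consistency check (our own illustrative calculation, using \(v_{j}\sqrt{\rho_{j}/\rho_{a}}=2v_{h}\) from Equation 2 with \(\sqrt{A_{j}/A_{h}}=1/2\)), the bending length of Equation 6 can be reproduced for the P3A10 case:

```python
import numpy as np

r_j, tau, psi, v_h = 3.0, 3.2, np.radians(10.0), 3.45   # kpc, Myr, rad, kpc/Myr
vj2_eff = (2.0 * v_h)**2          # (rho_j/rho_a) * v_j^2 from Eq. (2)
l_b = (r_j * 3.0 * vj2_eff * tau**2
       / (2.0 * np.pi**2 * np.sin(psi)**2))**(1.0 / 3.0)
print(round(l_b, 1))              # ~19.5 kpc, matching the P3A10 row of Table 2
```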
Taking this ratio, as well as the general trends with precession period from Figure 2, it is clear that as the precession period decreases, the jet cocoons are more condensed and the jets more dramatically wound up. In simulations with even shorter periods, not presented here, the jet began to break up. These jets may even have trouble breaking out of the interstellar medium of the host galaxy, which we do not include in these simulations.
Material surrounding the jet source is a combination of jet backflow during the initial stages of jet launching as well as material that has bled off from the jet as it precesses around. This material at late times radiatively steepens to \(\alpha\approx-1.5\), while fresh material injected by the precessing jet remains spectrally flat. In the cases with large precession angle, there is more significant mixing of the older and newer jet material, as the cocoons are smaller and the jet injection covers a larger solid angle during their precession.
Examining the 5th column of Figure 2, with precession period P \(\sim\) 95 Myr, we can see that as the precession period begins to approach the cooling timescale for the CRe, the surface brightness of the material away from the current jet direction drops off. If the period becomes too long, the CRe will age enough that they may no longer be detectable. This would leave the observer with little or no evidence of precession, and we would undercount precessing sources on the long period end of the distribution. However, only the longest lived AGN will reach such sustained jet ages, with most having lifetimes \(<100\) Myr (e.g., Turner & Shabala, 2015). If the jet precession has such a long period but isn't active for a large enough fraction of the precession period, the source will likely be indistinguishable from a non-precessing source. Because of this, radiative cooling may not be a major constraint on observing precessing radio jets.
### Effects of Viewing Angle
Within a single snapshot of a precessing jet system, its radio morphology may appear significantly different depending on the viewing angle. Figure 3 explicitly shows this for the P32A30 simulation. Each panel within this figure shows the P32A30 simulation at the same time (82 Myr), but from 12 perspectives. In the online version, this Figure is animated to show the evolution of this simulation in each of these 12 viewing angles simultaneously. At any given frame in the animation, one can see how the morphology changes with viewing angle. The differences are so stark that there are frames in which
Figure 2: Radio spectral index (600-300MHz, cut at 1\(\mu\)Jy beam\({}^{-1}\) at 300MHz) with 300 MHz radio brightness contours (levels = [1, 10, 100]\(\mu\)Jy beam\({}^{-1}\)) for each simulation presented here. Radio images are convolved to 1” resolution. Each row shows simulations with different precession angles (row 1: 10\({}^{\circ}\), 2: 20\({}^{\circ}\), 3: 30\({}^{\circ}\), 4: 45\({}^{\circ}\)). Columns show simulations with differing precession periods, or the same simulations at different times (period for column 1 & 2: 3.2 Myr, 3 & 4: 31.7 Myr, 5: 95.1 Myr) (column 1, 3, & 5 are shown at approximately 1 precession period; columns 2, 4, & 5 are shown at 96 Myr simulation time).
Figure 3: Radio spectral index (600-300MHz, cut at 1\(\mu\)Jy beam\({}^{-1}\) at 300MHz) with 300 MHz radio brightness contours (levels = [1, 10, 100]\(\mu\)Jy beam\({}^{-1}\)) for the P32A30 simulation at t = 82 Myr from a variety of viewing angles. Radio images are convolved to 2.355” FWHM resolution. Viewing angles are listed in the inset for each panel. For ease of visualization, the polar viewing angles are included in the diagram Figure 1. The polar angle is varied by column while the azimuthal angle is varied by row. In the online version, an animated figure is available (8 seconds, spanning 100 Myr simulation time) that follows the evolution of the source from each viewing angle simultaneously.
the radio source would very likely be classified in different ways, and lead to a misunderstanding of the physics of the source's evolution. For example, in the frame at \(t=82\) Myr, the first six frames (counting from left to right, top to bottom) and the ninth frame show signs of precession with curved jets. In the \(7^{th}\) and \(10^{th}\) frames, the jet appears mostly straight, with some deflection at the farthest points. In the \(8^{th}\) and \(11^{th}\) frames, the jets appear to have been disrupted or sharply bent by some (perhaps external) dynamics. And the radio source in the \(12^{th}\) frame would likely be unclassifiable. By providing the animation of the evolution of this source from many angles, we hope that radio observers can compare these images to detected radio galaxies to help identify possible precessing sources. We also hope to display the full range of complex morphologies that precessing jets may take on.
### Jet Reorientation Events
In each simulation, the precessing jets would sometimes undergo a dramatic and sudden change in their propagation direction. This "reorientation event" occurred roughly after one precession period had elapsed in the simulation. As the jet precesses initially, it bends strongly due to ram pressure from its interaction with the ICM. However, after one precession period has elapsed, the jet encounters a region of space in which it has previously deposited low density plasma. As the jet encounters this density discontinuity, the velocity of the head of the jet suddenly increases, as can be inferred from Equation 2. Additionally, the ram pressure that leads to the bending of the jet is greatly reduced, leading to an increase in the bending length (Equation 6). Both of these effects lead to the jet becoming straighter and propagating to further distances before it then begins to interact with the higher density ICM when it reaches the outskirts of the low density cavity. This process is illustrated in Figure 4, which shows the beginning of the reorientation event, in which the jet trajectory suddenly changes by \(>90^{\circ}\). This type of sudden change in jet propagation direction is often attributed to either changes in the jet properties or duty cycle, or to external cluster dynamics such as shocks or other waves traveling through the medium. However, none of these effects are included in our simulations. These jet reorientation events can come about from jet precession alone, and the self interaction of the jet with its own previous activity.
### Odd Radio Circles (ORCs)
Odd Radio Circles (ORCs) are recent and mysterious radio sources. They are circles of steep spectrum, diffuse radio emission. Some (but not all) ORCs have a detected galaxy near their center. There have been a variety of possible physical explanations as to their origin, including a spherical shock from a starburst wind, a supermassive black hole merger, or radio galaxy lobes
\begin{table}
\begin{tabular}{c|c|c|c|c} Simulation & \(\Delta t\) & \(v_{h}\Delta t\) & \(\ell_{b}\) & \(\frac{\ell_{b}}{v_{h}\Delta t}\) \\ Name & & & & \\ \hline
**P3A10** & 2.93Myr & 10.1kpc & 19.5kpc & 1.9 \\
**P3A20** & 1.5Myr & 5.2kpc & 12.4kpc & 2.38 \\
**P3A30** & 1.02Myr & 3.52kpc & 9.6kpc & 2.73 \\
**P3A45** & 0.72Myr & 2.5kpc & 7.6kpc & 3.04 \\
**P32A10** & 29.1Myr & 100.4kpc & 89.7kpc & 0.89 \\
**P32A20** & 14.8Myr & 51.1kpc & 57.1kpc & 1.12 \\
**P32A30** & 10.1Myr & 34.9kpc & 44.3kpc & 1.27 \\
**P32A45** & 7.1Myr & 24.5kpc & 35.2kpc & 1.44 \\
**P95A10** & 87.2Myr & 300.8kpc & 186.6kpc & 0.62 \\
**P95A20** & 44.3Myr & 152.8kpc & 118.8kpc & 0.78 \\
**P95A30** & 30.3Myr & 104.5kpc & 92.2kpc & 0.88 \\
**P95A45** & 21.4Myr & 73.8kpc & 73.2kpc & 0.99 \\ \end{tabular}
\end{table}
Table 2: Values of jet bending analysis variables from section 2. \(\Delta t\) comes from Equation 7, \(v_{h}\) from Equation 2, and \(\ell_{b}\) refers to the definition in Equation 6.
Figure 4: Radio spectral index (600-300MHz, cut at 1\(\mu\)Jy beam\({}^{-1}\) at 300MHz) with 300 MHz radio brightness contours (levels = [1, 10, 100]\(\mu\)Jy beam\({}^{-1}\)) for the P32A30 simulation at t = 34 Myr. Radio images are convolved to 2.355” FWHM resolution. At this time, the jet is undergoing a “jet reorientation event” in which the jet trajectory changes by more than 90\({}^{\circ}\) over about 2.5 Myr. This can be seen here as the very sharp bend at the tip of the jet head at coordinates (x,y) \(\approx\) (-5, 80) kpc. In the online version, an animated figure (5 seconds) shows the full duration of the reorientation event from t = 30 Myr until t = 38 Myr.
seen end-on. Our simulations may lend some support to this last suggestion. Figure 5 shows the P32A30 simulation along a line of sight parallel to the precession axis at two times. The jets turn off just after the time in the left panel. The right panel shows the radio emission about 40 Myr later. The radio morphology is similar to that of ORCs. In the online version of this article, an animated version of this figure is available, which shows the evolution from 0 to 160 Myr, with the jet powering off around t = 100 Myr. Just after the jet turns off, there is a full ring of emission visible. However, the radio spectrum is still relatively flat with \(\alpha\approx-0.8\). The radio ring ages in place, without expanding further, as it is in pressure balance with its surroundings and the jets are no longer injecting momentum into the region. As the plasma radiatively ages and spectrally steepens, the ring becomes broken. There is a period of time in which the ring is still mostly intact and visible and it approaches the spectral index observed for some ORCs, \(\alpha\approx-1.2\) (Norris et al., 2022).
If this mechanism for explaining some ORCs holds, then it may be interesting to reexamine existing observations of S-shaped RGs and burn out the images looking for regions of low level emission surrounding the visible structures as a possible analog to the existing ORCs. This low level emission may not appear as circular from viewing angles misaligned from the precession axis, but it may represent the same material. Additionally, searches for AGN counterparts in galaxies near the centers of ORCs or VLBI observations looking for any current small scale jets could help determine if such a formation mechanism is possible.
## 5 Summary
We have presented a series of simulations that look into the properties of precessing radio jets. These jets take on a wide variety of morphologies, depending on the properties of the precession itself as well as on the angle at which the source is viewed. In particular, the viewing angle can greatly affect the classification of these sources. From many viewing angles, precession is not an obvious mechanism, as the jet may appear straight in projection or with very complex and messy morphology.
Additionally, we observed some interesting and unexpected dynamics in our simulations, including self induced "reorientation events," in which the jet trajectory quickly changed as the jet encounters a region filled with previous jet material of lower density than the ambient medium. This led to sharp turns in the jet which could be misinterpreted as due to external dynamics affecting the jet.
Lastly, we point out the similarities between ORCs and the remnants of a precessing jet seen end-on. This could be a possible explanation for some ORCs. Followup observations of existing ORCs looking for AGN counterparts could help determine if this is likely.
Figure 5: Radio spectral index (600-300MHz, cut at 100\(\mu\)Jy beam\({}^{-1}\) at 300MHz) with 300 MHz radio brightness contours (levels = [100, 400, 1600]\(\mu\)Jy beam\({}^{-1}\)) for the P32A30 simulation at 98 Myr (left) and 140 Myr (right) both viewed from above the jet precession axis (\(\theta=0^{\circ}\)). Radio images are convolved to 11.775” FWHM resolution. The jet turns off at 100 Myr and the plasma is allowed to radiatively cool. In the online version, a single-panel animated figure (10 seconds) follows the evolution of the source from the beginning of the simulation until t = 160 Myr.
## Acknowledgments
CN acknowledges funding through the NSF grant AST-1907850 as well as from Los Alamos National Laboratory through the LDRD program and NASA programs through the Astrophysical Theory Program. JB and TN were supported by NSF grant AST-1907850. We thank L. Rudnick, T. Jones, and P. C. Fragile for useful discussions. The simulations presented here were run and analyzed at the College of Charleston on their high-performance Linux cluster ([https://hpc.cofc.edu](https://hpc.cofc.edu)) as well as utilizing the Los Alamos National Laboratory Institutional Computing Program.
|
2302.12071 | Periodic orbits in a near Yang-Mills potential | We consider the orbits in the Yang-Mills (YM) potential V=1/2 x2 y2 and in
the potentials of the general form Vg=1/2 [{\alpha} (x2 +y2)+x2 y2]. The stable
period-9 (number of intersection with the x-axis, with ) orbit found in the YM
potential is a bifurcation of a basic period-9 orbit of the Vg potential for a
value of {\alpha} slightly above zero. This basic period-9 family and its
bifurcations exist only up to a maximum value of {\alpha}={\alpha}max. We
calculate the Henon stability index of these orbits. The pattern of the
stability diagram is the same for all the symmetric orbits of odd periods
3,5,7,9 and 11. We also found the stability diagrams for asymmetric orbits of
period 2,3,4,5 which have again the same pattern. All these orbits are unstable
for {\alpha}=0 (YM potential). These new results indicate that in the YM
potential the only stable orbits are those of period-9 and some orbits with
multiples of 9 periods. | George Contopoulos, Mirella Harsoula | 2023-02-23T14:51:48Z | http://arxiv.org/abs/2302.12071v1 | # Periodic orbits in a near Yang-Mills potential
###### Abstract
We consider the orbits in the Yang-Mills (YM) potential \(V=\frac{1}{2}x^{2}y^{2}\) and in the potentials of the general form \(V_{g}=\frac{1}{2}[\alpha(x^{2}+y^{2})+x^{2}y^{2}]\). The stable period-9 (number of intersections with the \(x\)-axis, with \(\dot{y}>0\)) orbit found in the YM potential is a bifurcation of a basic period-9 orbit of the \(V_{g}\) potential for a value of \(\alpha\) slightly above zero. This basic period-9 family and its bifurcations exist only up to a maximum value of \(\alpha=\alpha_{max}\). We calculate the Henon stability index of these orbits. The pattern of the stability diagram is the same for all the symmetric orbits of odd periods 3, 5, 7, 9 and 11. We also found the stability diagrams for asymmetric orbits of periods 2, 3, 4, 5, which again have the same pattern. All these orbits are unstable for \(\alpha=0\) (YM potential). These new results indicate that in the YM potential the only stable orbits are those of period-9 and some orbits with multiples of 9 periods.
## 1 Introduction
The Yang-Mills (YM hereafter) potential:
\[V=\frac{1}{2}x^{2}y^{2} \tag{1}\]
has attracted much interest in the years around 1990. For some time it was thought that the YM potential is of Anosov type, i.e. it does not have any stable periodic orbits ([1], [2], [3]). However, Dahlqvist and Russberg [4] (DR hereafter) found a stable periodic orbit in this system, thus proving that the YM potential exhibits both order and chaos and is definitely not Anosov. In fact, because of the symmetries, there are four stable periodic orbits, formed by rotating the original orbit by \(90^{\circ}\), \(180^{\circ}\) and \(270^{\circ}\) (Fig. 1). Moreover, every orbit can also be described in two opposite directions (with opposite velocities) and therefore the total number of stable periodic orbits is eight. In Fig. 1 we also plot the curves of zero velocity (\(CZV\), with equation \(y=\pm 1/x\)) (black), inside which all orbits are confined. Note that the stable orbits come close to the zero velocity curves but they do not reach them.
Several authors considered extensions of the YM potential and studied their periodic orbits. In particular, Dahlqvist and Russberg in [5] considered potentials of the form \(V=(x^{2}y^{2})^{1/\alpha^{\prime}}\) with \(\alpha^{\prime}>1\) and \(\alpha^{\prime}<1\), but their main interest was the quantization of the orbits. On the other hand, Sohos et al. in [3] considered the Hamiltonian:
\[H=\frac{1}{2}(\dot{x}^{2}+\dot{y}^{2})+\frac{1}{2}[\alpha(x^{2}+y^{2})+x^{2}y ^{2}]=E \tag{2}\]
for large \(\alpha\), which is useful in galactic dynamics. They found the bifurcations of some simple periodic orbits for various values of \(\alpha\). They noticed that all of them form
Figure 1: The period-9 stable periodic orbits of the potential (1) together with the curves of zero velocity.
cascades of bifurcations as \(\alpha\) decreases. Thus, they all lead to infinities of unstable periodic orbits as \(\alpha\) approaches zero. However, these authors did not consider all the bifurcations of the period-1 periodic orbits.
We studied (in [6]) the Hamiltonian:
\[H=\frac{1}{2}(\dot{x}^{2}+\dot{y}^{2}+x^{2}+y^{2})+\epsilon x^{2}y^{2} \tag{3}\]
which is equivalent to eq. (2) if we set \(\alpha=1/\sqrt{2\epsilon}\). This paper was devoted to the form of the asymptotic curves of the main unstable periodic orbits.
In the present paper we study the extension of the DR stable periodic orbits for the Hamiltonian (2), which we call the _generalized_ YM potential, for small values of \(\alpha\). We found that these stable YM orbits bifurcate from another stable periodic orbit for \(\alpha\approx 10^{-4}\). We call this family the "basic" period-9 periodic orbit, because all the other period-9 families bifurcate from it. This stable family exists only up to \(\alpha\approx 16.8\times 10^{-4}\), where it joins a similar but unstable periodic orbit at a tangent bifurcation. Thus the YM stable periodic orbit cannot be joined by successive bifurcations to any family of periodic orbits that exists for large \(\alpha\) (of order \(O(1)\)).
A more detailed study of the Hamiltonian (2) for large \(\alpha\) is useful for galactic dynamics because it can describe the central parts of some galaxies and it will be the subject of a future paper.
In this paper we study the periodic orbits of the Hamiltonian (2) for small values of \(\alpha\) (near zero) and find their stability and their bifurcations as \(\alpha\) decreases. The energy is identified with the Hamiltonian (2). In our study we take \(E=1/2\) for the total energy.
The paper is structured as follows: In section 2 we describe the periodic orbits of the YM potential. In section 3 we give the stability of the periodic orbits of the generalized Hamiltonian (2) as a function of the parameter \(\alpha\). In section 4 we give other families of orbits for values of \(\alpha\) close to zero. Finally in section 5 we draw our conclusions.
## 2 Periodic orbits of the YM potential
The stable periodic orbit found by DR was considered by them to be of period 11, by counting the intersections with the axes \(x=0\) and \(y=0\), but also with the line \(x=y\). However, if we count only the intersections with the axis \(y=0\) going upwards (i.e. with \(\dot{y}>0\)) we must consider this periodic orbit as of period-9. The initial conditions of this orbit are (\(x_{0}\)=3.14640122769753, \(\dot{x}_{0}\)=0.0017931934191, \(y_{0}\)=0, \(\dot{y}_{0}\)=0.99999839222738). If we take for initial conditions (\(x^{\prime}_{0}\)=\(x_{0}\), \(y^{\prime}_{0}\)=\(y_{0}\), \(\dot{x}^{\prime}_{0}\)=\(-\dot{x}_{0}\), \(\dot{y}^{\prime}_{0}\)=\(-\dot{y}_{0}\)) we have exactly the same periodic orbit described in the opposite direction. In Fig. 2a the Poincaré surface of section (\(x\), \(\dot{x}\), for \(\dot{y}>0\)) of the stable period-9 orbit is plotted for the YM potential and the order of the successive iterations is marked with numbers. The second stable period-9 orbit (\(x^{\prime}_{0}\), \(y^{\prime}_{0}\), \(\dot{x}^{\prime}_{0}\), \(\dot{y}^{\prime}_{0}\)) is symmetric to the first one with respect to the axis \(\dot{x}=0\).
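Such a surface of section is straightforward to reproduce numerically. The following is a minimal Python sketch (assuming SciPy is available; the integration time and tolerances are arbitrary choices of ours) that integrates the equations of motion of the Hamiltonian (2) from the DR initial conditions and records the upward crossings of \(y=0\):

```python
import numpy as np
from scipy.integrate import solve_ivp

ALPHA = 0.0  # alpha = 0 recovers the pure YM potential (1)

def rhs(t, s):
    # s = (x, y, xdot, ydot); equations of motion of the Hamiltonian (2)
    x, y, xd, yd = s
    return [xd, yd, -(ALPHA + y**2) * x, -(ALPHA + x**2) * y]

def section(t, s):
    return s[1]            # the surface of section is y = 0 ...
section.direction = 1      # ... crossed upwards (ydot > 0)

# DR initial conditions of the stable period-9 orbit (E = 1/2)
s0 = [3.14640122769753, 0.0, 0.0017931934191, 0.99999839222738]

sol = solve_ivp(rhs, (0.0, 200.0), s0, events=section, rtol=1e-12, atol=1e-12)

# Discard the launch point, which itself lies on the section, and print the
# first 9 crossings; a period-9 orbit then repeats these points.
crossings = sol.y_events[0][sol.t_events[0] > 1e-9]
for x, y, xd, yd in crossings[:9]:
    print(f"x = {x:+.6f}, xdot = {xd:+.6f}")
```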
Apart from the two stable period-9 orbits, there are another ten unstable period-9 orbits in the YM potential, shown in a zoom-in of the Poincaré surface of section (in black, blue, magenta and orange in Fig. 2b). The black, blue and magenta period-9 orbits (apart from the points shown in Fig. 2b with \(\dot{x}\)=0) have symmetrical points, about the \(\dot{x}=0\) axis, in the phase space and they intersect the \(x\)-axis perpendicularly at their maximum \(x=x_{max}\) (\(\dot{x}=0\)). The red and the orange orbits do not intersect the \(x\)-axis perpendicularly at their maximum \(x=x_{max}\), but they have \(\dot{x}_{x_{max}}>0\) or symmetrically \(\dot{x}_{x_{max}}<0\). The red, blue, magenta and orange period-9 orbits are bifurcated from the basic (black) period-9 orbit for values of \(\alpha\neq 0\) in the generalized Hamiltonian (2), as we will show below. Fig. 2c shows a greater zoom-in of the phase space, focused only on the stable period-9 orbits. The stable periodic orbits are surrounded by sets of invariant curves (islands of stability). Every island consists of invariant curves, but among them there are high-order stable and unstable periodic orbits with periods multiples of 9. Around the islands of stability there is chaos.
Figure 3: Two period-9 orbits of the YM potential. (a) The basic unstable period-9 orbit (black). The order of intersection of the trajectory by the \(x\)-axis upwards is shown in numbers. (b) One of the two unstable period-9 orbits (blue) bifurcated from the basic one. The colors of the orbits match those of their corresponding points in the phase space of Fig. 2b.
Figure 2: The Poincaré surface of section \((x,\dot{x})\) for \(y=0\) and \(\dot{y}>0\) of the YM potential. (a) A stable period-9 orbit. The order of the successive iterations is marked. (b) A zoom-in of the phase space around \(\dot{x}=0\) where all the 12 period-9 orbits of the YM potential are marked. (c) A greater zoom-in of the phase space focused only on the stable period-9 orbits and the surrounding invariant curves (red) and the basic unstable period-9 (black) orbit.
In Fig. 3a we plot the basic unstable period-9 orbit (black). The order of intersection of the trajectory by the \(x\)-axis upwards is shown in numbers. The orbit is reflected at the \(CZV\) at two points and returns through exactly the same path. It also crosses the \(x\)-axis perpendicularly at its maximum \(x\). In Fig. 3b one of the two unstable period-9 orbits (blue), bifurcated from the basic one (see Figs. 5, 6), is plotted. This orbit is not reflected back to the same path, but after reaching the farthest point on the left, close to the \(CZV\), it returns through another path. The colors of the orbits match those of their corresponding points in the phase space of Fig. 2b.
When \(\alpha\) increases slightly above \(\alpha=0\) the two islands of stability of Fig. 2c approach each other and come close to the basic unstable periodic orbit. In Fig. 4a we plot the Poincaré surface of section for \(\alpha=0.9\times 10^{-4}\). The basic period-9 periodic orbit is still unstable (black). The red islands of stability correspond to the two stable period-9 periodic orbits bifurcated from the basic one. There are invariant curves surrounding both islands of stability. In Fig. 4b we plot the Poincaré surface of section for \(\alpha=10^{-4}\). The two islands of stability of Fig. 4a have joined into one island around the basic period-9 family, which has now become stable. Around the basic stable periodic orbit there are islands of stability as well as stable periodic orbits with periods multiples of 9.
## 3 Stability of the periodic orbits
The stability of the periodic orbits is found by calculating the Henon index \(HI\) ([7]). This is related to the eigenvalues \(\lambda\) of the periodic orbit by the relation:
\[\lambda=HI\pm\sqrt{(HI)^{2}-1} \tag{4}\]
The orbit is stable if \(|HI|<1\) and unstable if \(|HI|>1\). When \(HI=+1\) an equal period
Figure 4: A zoom in of the Poincaré surface of section \((x,\dot{x})\), for \(y=0\) and \(\dot{y}>0\) (a) for \(\alpha=0.00009\). The basic period-9 periodic orbit is unstable (black). The red islands of stability correspond to the two stable period-9 periodic orbits bifurcated from the basic one. (b) for \(\alpha=0.0001\). The basic period-9 periodic orbit is stable. Two stable period-36 orbits are marked in blue and red around the basic period-9 orbit.
bifurcation is generated and when \(HI=-1\) a double period bifurcation is generated.
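Since the Poincaré return map of the Hamiltonian (2) is area preserving, the \(2\times 2\) monodromy matrix \(M\) of a periodic orbit has \(\det M=1\), and comparing its eigenvalues with Eq. (4) gives \(HI=\frac{1}{2}\operatorname{tr}M\). The following Python sketch estimates \(HI\) numerically; the helper `poincare_map`, which should return the \(k\)-th upward crossing of \(y=0\) for given section coordinates on the energy surface \(E=1/2\), is an assumption of ours and is not shown:

```python
import numpy as np

def henon_index(poincare_map, x_star, xd_star, eps=1e-8):
    """Estimate HI = tr(M)/2 by centered finite differences of the period-k
    return map around its fixed point (x*, xdot*) on the section."""
    M = np.empty((2, 2))
    for j, (dx, dxd) in enumerate([(eps, 0.0), (0.0, eps)]):
        xp, xdp = poincare_map(x_star + dx, xd_star + dxd)
        xm, xdm = poincare_map(x_star - dx, xd_star - dxd)
        M[0, j] = (xp - xm) / (2 * eps)
        M[1, j] = (xdp - xdm) / (2 * eps)
    return 0.5 * np.trace(M)  # |HI| < 1: stable; |HI| > 1: unstable
```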
The stability curve (\(HI=f(\alpha)\)) of the _basic_ period-9 orbit is given in Fig. 5 (black). This orbit is stable in some small intervals of \(\alpha\). It disappears at a maximum \(\alpha=\alpha_{max}\approx 16.8\times 10^{-4}\), having \(HI=+1\), where it joins another period-9 orbit (magenta) at a tangent bifurcation. This last period-9 orbit is always unstable.
Each time the \(HI\) curve of the basic period-9 orbit crosses the \(HI=+1\) axis from stability to instability (as \(\alpha\) decreases) two orbits of the same period (9) are bifurcated from it. The stability curves of these bifurcations are shown in orange and red. The corresponding orbits are stable close to the bifurcation point and then (by decreasing \(\alpha\)) they reach a minimum \(HI\). The minimum of the red curve is on the axis \(HI=-1\). For \(\alpha=0\) (YM potential) the \(HI\) of the red curve is a little greater than \(-1\) and so the period-9 orbits (corresponding to the red branch of the stability curve) are stable. When their \(HI\) curves cross the axis \(HI=+1\) again (for smaller, negative \(\alpha\)), two more stable orbits of the same period (9) are bifurcated, which become unstable for smaller \(\alpha\) and never again become stable. A similar pattern is formed by the \(HI\) curves of the orange families. However, the orbits of these families are unstable for \(\alpha=0\).
On the other hand, when the \(HI\) curve of the basic period-9 orbit crosses the \(HI=+1\) axis from instability to stability (as \(\alpha\) decreases) two orbits of the same period (9) are bifurcated from it (blue curve) which are always unstable for smaller values of \(\alpha\).
The characteristics (\(x=f(\alpha)\)) of the various families of period-9 orbits are shown in Fig. 6. We follow the same color notation as in Fig. 5. We notice that there is only one unstable period-9 orbit (magenta) having a tangent bifurcation with the basic one. In Fig. 6 we see only one small red curve (corresponding to the stable periodic orbit of the YM potential), but there is also another one with the same \(x\) and symmetric \(\dot{x}\) (Fig. 2b).
Figure 5: The \(HI\) as a function of the parameter \(\alpha\) of the Hamiltonian (2) for the period-9 orbits.
Finally there are six period-9 orbits (orange). Two of them bifurcate from the basic period-9 orbit and four more bifurcate from the first two. We see three of them in Fig. 6, and there are three more with the same \(x\) and symmetric \(\dot{x}\). The red and orange period-9 bifurcations do not intersect the \(x\)-axis perpendicularly near their maximum \(x=x_{max}\), but they form families with \(\dot{x}>0\) and symmetric ones with \(\dot{x}<0\) (see Fig. 2b).
Besides the period-9 periodic orbits we also have period-18 bifurcations, generated each time the period-9 stability curves cross the \(HI=-1\) axis or become tangent to it. Some examples are shown in Fig. 7. For \(\alpha=0\) (YM potential) these orbits are unstable.
Figure 6: The characteristics \(x=f(\alpha)\) of all the period-9 families of orbits.
Figure 7: The \(HI\) as a function of the parameter \(\alpha\) of three period-9 families (one black and two red) and of some period-18 bifurcations (green).
## 4 Other families of orbits for the generalized YM potential
As we can see in Fig. 3a, the basic period-9 periodic orbit (black) reaches the \(CZV\) and returns along the same path. The equation of the \(CZV\), for \(\alpha=0\), is given by the relations \(yx=\pm 1\), while for \(\alpha\neq 0\) it is given as follows:
\[y=\frac{\pm\sqrt{\Delta}}{2(\alpha+x^{2})},\ \ \ \ \ \Delta=-4(\alpha+x^{2})( \alpha x^{2}-1) \tag{5}\]
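For completeness, Eq. (5) is just the energy condition \(V_{g}=E=\frac{1}{2}\) solved for \(y\):

\[\frac{\alpha}{2}(x^{2}+y^{2})+\frac{1}{2}x^{2}y^{2}=\frac{1}{2}\;\Longrightarrow\;y^{2}=\frac{1-\alpha x^{2}}{\alpha+x^{2}}=\frac{\Delta}{4(\alpha+x^{2})^{2}},\]

so that for \(\alpha=0\) it reduces to the hyperbolas \(y=\pm 1/x\).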
We have tried to find periodic orbits of various multiplicities by taking initial conditions along the \(CZV\). Using this method we have found the period-7 periodic orbit for \(\alpha=0\) (Fig. 8a) and the period-11 periodic orbit for \(\alpha=-0.008\) (Fig. 8b). The period-7 orbit is similar to the period-9 orbit (Fig. 3a), but has fewer oscillations. The period-11 orbit has more oscillations than the period-9 orbit.
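The perpendicular crossings of the symmetric families also suggest a simple shooting scheme: launch an orbit from the \(x\)-axis with \(\dot{x}=0\) and \(\dot{y}\) fixed by \(E=1/2\), and demand \(\dot{x}=0\) again at the \(k\)-th upward crossing of \(y=0\). A minimal Python sketch follows; the bracketing interval passed to the root finder is a hypothetical choice of ours and must be adjusted until the mismatch changes sign:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

def mismatch(x0, k=9, alpha=0.0):
    """xdot at the k-th upward crossing of y = 0 for an orbit launched
    perpendicularly from the x-axis; a root is a symmetric period-k orbit."""
    ydot0 = np.sqrt(1.0 - alpha * x0**2)  # from H = E = 1/2 with y = xdot = 0
    def rhs(t, s):
        x, y, xd, yd = s
        return [xd, yd, -(alpha + y**2) * x, -(alpha + x**2) * y]
    def up(t, s):
        return s[1]
    up.direction = 1
    sol = solve_ivp(rhs, (0.0, 400.0), [x0, 0.0, 0.0, ydot0],
                    events=up, rtol=1e-11, atol=1e-11)
    ev = sol.y_events[0][sol.t_events[0] > 1e-9]  # skip the launch point
    return ev[k - 1][2]

# Hypothetical bracket near the region of Fig. 2b; widen or shift as needed.
x_star = brentq(mismatch, 3.10, 3.20)
```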
Figure 8: (a) The unstable period-7 periodic orbit for \(\alpha=0\) (b) The unstable period-11 periodic orbit for \(\alpha=-0.008\).
Figure 9: The \(HI\) as a function of the parameter \(\alpha\) of the Hamiltonian (2) for the period-7 and period-11 periodic orbits.
We plot the \(HI\) of the period-7 families as functions of the parameter \(\alpha\) in Fig. 9a and the \(HI\) of the period-11 family in Fig. 9b. We observe that the basic period-7 and period-11 families (black) as well as all their bifurcations follow the same pattern as the period-9 families of orbits (compare with Fig. 5). The only difference is that the basic period-7 family and its bifurcations are stable for values of \(\alpha\) much larger than those of the period-9 family, while the period-11 family and its bifurcations are stable for negative values of \(\alpha\). The same pattern of the \(HI=f(\alpha)\) curves is repeated for all the odd-period periodic orbits (periods 5 and 3 for larger positive values of \(\alpha\), and periods larger than 11 for negative values of \(\alpha\)).
By taking initial conditions along the \(CZV\) curve for various values of \(\alpha\) we have found some asymmetric orbits (with initial conditions \(y=0\), \(\dot{y}\neq 0\)) of various periods. These orbits do not intersect the \(x\)-axis perpendicularly at their maximum \(x=x_{max}\). In Figure 10 the asymmetric orbits with periods 5,4,3,2 are plotted. The corresponding \(HI\) diagrams as functions of the parameter \(\alpha\) of these orbits are shown in Fig. 11. We observe that they all follow the same pattern, but for different ranges of the parameter \(\alpha\). The period-5 family of orbits exists only for negative values of the parameter \(\alpha\).
All these families disappear with a tangent bifurcation at a maximum \(\alpha=\alpha_{max}\) and they have one bifurcation of a family of the same period (green curve) for smaller values of \(\alpha\). All these families reach the \(CZV\) and they are reflected back through the same path. They are all unstable for \(\alpha=0\) (YM potential).
Figure 10: The asymmetric orbits of (a) period-5 for \(\alpha=-0.008\) (b) period-4 for \(\alpha=0.0\) (c) period-3 for \(\alpha=0.0\) and (d) period-2 for \(\alpha=0.0\) with the corresponding \(CZV\)s.
## 5 Conclusions
In the present paper we considered the orbits in potentials of the form \(V=\frac{\alpha}{2}(x^{2}+y^{2})+\frac{1}{2}x^{2}y^{2}\), with small \(\alpha\), which generalize the Yang-Mills (YM) potential \(V_{YM}=\frac{1}{2}x^{2}y^{2}\).
We first found the periodic orbits intersecting the \(x\)-axis 9 times upwards (with \(\dot{y}>0\)). In the YM potential there are 8 stable period-9 orbits. Close to these stable orbits there exist another 10 unstable period-9 orbits.
All these period-9 orbits are bifurcations of a basic period-9 family. The basic family is stable in two intervals of the values of \(\alpha\) (close to \(\alpha=0\)). This family disappears at a tangent bifurcation with another period-9 family (which is unstable for all values of \(\alpha\)) at a maximum \(\alpha=\alpha_{max}\), where its Henon index is \(HI=+1\). Thus these period-9 families do not extend to large values of \(\alpha\).
The stability of the various periodic orbits is given by the Henon index (\(HI\)). The various period-9 bifurcations appear when the \(HI\) curves of these orbits intersect the \(HI=+1\) axis. We also give the characteristics (\(x=f(\alpha)\)) of the various families.
We also found periodic orbits of other multiplicities. We give the stability curves of the period-7 and period-11 periodic orbits, which have the same pattern as the stability curve of the period-9 orbit, but are displaced in the values of \(\alpha\). They all disappear at a tangent bifurcation of the basic family with another unstable family at a maximum \(\alpha=\alpha_{max}\). All the periodic families having odd periods (and intersecting the \(x\)-axis perpendicularly) have the same pattern of their stability curves. The periodic orbits with periods 11 and larger exist only for negative values of \(\alpha\), while the period-5 and period-3 orbits exist for larger positive values of \(\alpha\) than the period-7 orbits. All
Figure 11: The \(HI\) as functions of the parameter \(\alpha\) of the asymmetric orbits of period 5,4,3,2. They all follow the same pattern and they disappear with a tangency at a maximum \(\alpha\) between the stable and a corresponding unstable family.
these periodic orbits are unstable for \(\alpha=0\) (YM potential).
Furthermore, we found asymmetric periodic orbits of periods 2,3,4 and 5 that have two limiting points on the curves of zero velocity (\(CZV\)). They all have some intervals of the values of \(\alpha\) where they are stable and they all disappear at a tangent bifurcation with a periodic orbit of the same period at a maximum \(\alpha\), as shown in their \(HI\) diagrams. They are all unstable for \(\alpha=0\) (YM potential). Therefore, it seems that the only stable families of the YM potential are those of period-9 and its bifurcations with periods multiples of 9.
**References**
|
2307.06802 | Decomposing Finite Languages | The paper completely characterizes the primality of acyclic DFAs, where a DFA
$\mathcal{A}$ is prime if there do not exist DFAs
$\mathcal{A}_1,\dots,\mathcal{A}_t$ with $\mathcal{L}(\mathcal{A}) =
\bigcap_{i=1}^{t} \mathcal{L}({\mathcal{A}_i})$ such that each $\mathcal{A}_i$
has strictly less states than the minimal DFA recognizing the same language as
$\mathcal{A}$. A regular language is prime if its minimal DFA is prime. Thus,
this result also characterizes the primality of finite languages.
Further, the $\mathsf{NL}$-completeness of the corresponding decision problem
$\mathsf{PrimeDFA}_{\text{fin}}$ is proven. The paper also characterizes the
primality of acyclic DFAs under two different notions of compositionality,
union and union-intersection compositionality.
Additionally, the paper introduces the notion of S-primality, where a DFA
$\mathcal{A}$ is S-prime if there do not exist DFAs
$\mathcal{A}_1,\dots,\mathcal{A}_t$ with $\mathcal{L}(\mathcal{A}) =
\bigcap_{i=1}^{t} \mathcal{L}(\mathcal{A}_i)$ such that each $\mathcal{A}_i$
has strictly fewer states than $\mathcal{A}$ itself. It is proven that the
problem of deciding S-primality for a given DFA is $\mathsf{NL}$-hard. To do
so, the $\mathsf{NL}$-completeness of $\mathsf{2MinimalDFA}$, the basic problem
of deciding minimality for a DFA with at most two letters, is proven. | Daniel Alexander Spenner | 2023-07-13T15:13:58Z | http://arxiv.org/abs/2307.06802v1 | # Decomposing Finite Languages
###### Abstract
The paper completely characterizes the primality of acyclic DFAs, where a DFA \(\mathcal{A}\) is _prime_ if there do not exist DFAs \(\mathcal{A}_{1},\ldots,\mathcal{A}_{t}\) with \(\mathcal{L}(\mathcal{A})=\bigcap_{i=1}^{t}\mathcal{L}(\mathcal{A}_{i})\) such that each \(\mathcal{A}_{i}\) has strictly fewer states than the minimal DFA recognizing the same language as \(\mathcal{A}\). A regular language is prime if its minimal DFA is prime. Thus, this result also characterizes the primality of finite languages.

Further, the NL-completeness of the corresponding decision problem Prime-DFA\({}_{\text{fin}}\) is proven. The paper also characterizes the primality of acyclic DFAs under two different notions of compositionality, union and union-intersection compositionality.

Additionally, the paper introduces the notion of _S-primality_, where a DFA \(\mathcal{A}\) is S-prime if there do not exist DFAs \(\mathcal{A}_{1},\ldots,\mathcal{A}_{t}\) with \(\mathcal{L}(\mathcal{A})=\bigcap_{i=1}^{t}\mathcal{L}(\mathcal{A}_{i})\) such that each \(\mathcal{A}_{i}\) has strictly fewer states than \(\mathcal{A}\) itself. It is proven that the problem of deciding S-primality for a given DFA is NL-hard. To do so, the NL-completeness of 2Minimal-DFA, the basic problem of deciding minimality for a DFA with at most two letters, is proven.
Deterministic finite automaton (DFA), Regular languages, Finite languages, Decomposition, Primality, Minimality
###### Contents

* 1 Introduction
* 2 Preliminaries
* 3 Compositionality of Finite Languages
* 4 Complexity of Prime-DFA\({}_{\text{fin}}\)
* 5 Finite Languages under Different Notions of Compositionality
* 6 2Minimal-DFA and S-Prime-DFA

## 1 Introduction
to be new [4]. We use this result to establish complexity boundaries for S-Prime-DFA, a modification of Prime-DFA using the size of the given DFA, not its index.
**Related Work.** The notion of intersection compositionality was introduced in [12], where the aforementioned complexity boundaries were established. They already considered language fragments, analyzing safety DFAs and permutation DFAs. This line of research was followed up in [9, 10], which focused on unary DFAs and permutation DFAs, respectively.
The intersection decomposition of automata can be motivated by LTL model checking, where the validity of a specification, given as an LTL formula, is checked for a system. The automata-based approach entails translating the specification into a finite automaton [20]. Since the LTL model checking problem is PSpace-complete in the size of the LTL formula [1], it is desirable to decompose the formula into a conjunction of subformulas. This can also be understood as decomposing the finite automaton corresponding to the formula.
Another application of intersection decomposition arises in the field of automaton identification. The basic task here is, given a set of labeled words, to construct a finite automaton conforming to this set [6]. An interesting approach is to construct multiple automata instead of one, which can lead to smaller and more intuitive solutions [13].
An alternative notion of compositionality uses concatenation. Here, a language \(L\) is composite if there exist two non-trivial languages \(L_{1},L_{2}\) with \(L=L_{1}L_{2}\). The concatenation primality problem for regular languages is PSpace-complete [14]. The restriction to finite languages is known to be NP-hard [18], while the conjectured NP-completeness of this restriction remains open [17, 15, 21].
**Contributions.** In Section 3 we completely characterize the intersection compositionality of ADFAs and thereby of finite languages. We expand on this by proving the NL-completeness of Prime-DFA\({}_{\text{fin}}\) in Section 4, thus showing that finite languages are significantly easier to handle under intersection compositionality than under concatenation compositionality. We characterize the union and union-intersection compositionality of finite languages in Section 5, where we also prove the existence of languages that are union-intersection composite but both union prime and intersection prime.
In Section 6 we introduce the problem S-Prime-DFA, which is analogous to Prime-DFA but uses the size for the definition of compositionality, not the index. We prove that S-Prime-DFA is in ExpSpace and is NL-hard. We also prove these boundaries for 2Prime-DFA and 2S-Prime-DFA, the restrictions of the respective problems to DFAs with at most two letters. To establish these boundaries we prove the NL-completeness of 2Minimal-DFA.
Detailed proofs of these results are provided in the appendix.
## 2 Preliminaries
A _deterministic finite automaton_ (DFA) is a 5-tuple \(\mathcal{A}=(Q,\Sigma,q_{I},\delta,F)\), where \(Q\) is a finite set of states, \(\Sigma\) is a finite non-empty alphabet, \(q_{I}\in Q\) is an initial state, \(\delta:Q\times\Sigma\to Q\) is a transition function, and \(F\subseteq Q\) is a set of accepting states. As usual, we extend \(\delta\) to words: \(\delta:Q\times\Sigma^{*}\to Q\) with \(\delta(q,\varepsilon)=q\) and \(\delta(q,\sigma_{1}\ldots\sigma_{n})=\delta(\delta(q,\sigma_{1}\ldots\sigma_{ n-1}),\sigma_{n})\). For \(q\in Q\), the DFA \(\mathcal{A}^{q}\) is constructed out of \(\mathcal{A}\) by setting \(q\) as the initial state, thus \(\mathcal{A}^{q}=(Q,\Sigma,q,\delta,F)\).
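These definitions translate directly into code. The following minimal Python sketch (the class and method names are ours, not from the paper) fixes a representation that the later sketches in this section reuse:

```python
from dataclasses import dataclass

@dataclass
class DFA:
    """A complete DFA (Q, Sigma, q_I, delta, F); delta maps (state, letter) -> state."""
    states: frozenset
    alphabet: frozenset
    initial: object
    delta: dict          # total on states x alphabet
    accepting: frozenset

    def run(self, word, start=None):
        # Extended transition function delta(q, sigma_1 ... sigma_n)
        q = self.initial if start is None else start
        for letter in word:
            q = self.delta[(q, letter)]
        return q

    def accepts(self, word):
        return self.run(word) in self.accepting
```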
The _run_ of \(\mathcal{A}\) on a word \(w=\sigma_{1}\ldots\sigma_{n}\) starting in state \(q\) is the sequence \(q_{0},\sigma_{1},q_{1},\ldots,\sigma_{n},q_{n}\) with \(q_{0}=q\) and \(q_{i}=\delta(q_{i-1},\sigma_{i})\) for each \(i\in\{1,\ldots,n\}\). The _initial run_ of \(\mathcal{A}\) on \(w\) is the run of \(\mathcal{A}\) on \(w\) starting in \(q_{I}\). The run of \(\mathcal{A}\) on \(w\) starting in \(q\) is _accepting_ if \(q_{n}\in F\), otherwise it is _rejecting_. The DFA \(\mathcal{A}\) _accepts_ \(w\) if the initial run of \(\mathcal{A}\) on \(w\) is accepting. Otherwise, it _rejects_ \(w\). The _language_ \(\mathcal{L}(\mathcal{A})\) of \(\mathcal{A}\) is the set of words accepted by \(\mathcal{A}\). We say that \(\mathcal{A}\) _recognizes_ \(\mathcal{L}(\mathcal{A})\). A language is _regular_ if there exists a DFA recognizing it. Since we only consider regular languages, we use the terms language and regular language interchangeably.

The _size_ \(|\mathcal{A}|\) of \(\mathcal{A}\) is the number of states in \(Q\). The DFA \(\mathcal{A}\) is _minimal_ if \(\mathcal{L}(\mathcal{A})\neq\mathcal{L}(\mathcal{B})\) holds for every DFA \(\mathcal{B}\) with \(|\mathcal{B}|<|\mathcal{A}|\). It is well known that for every regular language \(L\) there exists a canonical minimal DFA recognizing \(L\). The _index_ \(\operatorname{ind}(L)\) of \(L\) is the size of this canonical minimal DFA. The index of \(\mathcal{A}\) is the index of the language recognized by \(\mathcal{A}\), thus \(\operatorname{ind}(\mathcal{A})=\operatorname{ind}(\mathcal{L}(\mathcal{A}))\). Note that \(\mathcal{A}\) is minimal iff \(|\mathcal{A}|=\operatorname{ind}(\mathcal{A})\).
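Index and minimality can be computed with the classic table-filling (pair-marking) algorithm. A brute-force Python sketch building on the DFA class above (adequate for small examples; it is not meant to reflect the NL bounds discussed in Section 6):

```python
def index_of(dfa):
    """ind(A): number of Myhill-Nerode classes among the reachable states.
    A is minimal iff len(dfa.states) == index_of(dfa)."""
    reach, stack = {dfa.initial}, [dfa.initial]
    while stack:                                   # forward reachability
        q = stack.pop()
        for a in dfa.alphabet:
            p = dfa.delta[(q, a)]
            if p not in reach:
                reach.add(p)
                stack.append(p)
    # Mark distinguishable pairs until a fixpoint is reached.
    dist = {(p, q) for p in reach for q in reach
            if (p in dfa.accepting) != (q in dfa.accepting)}
    changed = True
    while changed:
        changed = False
        for p in reach:
            for q in reach:
                if (p, q) not in dist and any(
                        (dfa.delta[(p, a)], dfa.delta[(q, a)]) in dist
                        for a in dfa.alphabet):
                    dist.add((p, q))
                    changed = True
    reps = []                                      # one state per class
    for q in reach:
        if all((q, r) in dist for r in reps):
            reps.append(q)
    return len(reps)
```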
We borrow a few terms from graph theory. Let \(q_{0},\sigma_{1},q_{1},\ldots,\sigma_{n},q_{n}\) be the run of \(\mathcal{A}\) on \(w=\sigma_{1}\ldots\sigma_{n}\) starting in \(q_{0}\). Then \(q_{0},\ldots,q_{n}\) is a _path_ in \(\mathcal{A}\) from \(q_{0}\) to \(q_{n}\). The _length_ of this path is \(n\). Thus, for two states \(q,q^{\prime}\) there exists a path from \(q\) to \(q^{\prime}\) in \(\mathcal{A}\) of length \(n\) iff there exists a \(w\in\Sigma^{n}\) with \(\delta(q,w)=q^{\prime}\). The state \(q^{\prime}\) is _reachable from_ \(q\) if there exists a path from \(q\) to \(q^{\prime}\). Otherwise, \(q^{\prime}\) is _unreachable from_ \(q\). Obviously, if \(q^{\prime}\) is reachable from \(q\) then there exists a path from \(q\) to \(q^{\prime}\) of a length strictly smaller than \(|\mathcal{A}|\). We say that \(q^{\prime}\) is _reachable_ if it is reachable from \(q_{I}\). Otherwise, it is _unreachable_. A _cycle_ in \(\mathcal{A}\) is a path \(q_{0},\ldots,q_{n}\) in \(\mathcal{A}\) where \(q_{0}=q_{n}\) and \(n\in\mathbb{N}_{\geq 1}\). The DFA \(\mathcal{A}\) is _acyclic_ (ADFA) if every cycle in \(\mathcal{A}\) begins in a rejecting sink.
We call a DFA \(\mathcal{A}=(Q,\Sigma,q_{I},\delta,F)\)_linear_ if for every \(q,q^{\prime}\in Q\) with \(q\neq q^{\prime}\) either \(q^{\prime}\) is reachable from \(q\) or \(q\) is reachable from \(q^{\prime}\), but not both. Thus, in a linear DFA reachability induces a linear order over the states. Obviously, every linear DFA has exactly one sink. Furthermore, a minimal ADFA \(\mathcal{A}\) is linear iff \(|\mathcal{A}|=n+2\), where \(n\) is the length of the longest word in \(\mathcal{L}(\mathcal{A})\).
Consider a word \(w=\sigma_{1}\ldots\sigma_{n}\in\Sigma^{n}\). A word \(wv\) with \(v\in\Sigma^{+}\) is an _extension_ of \(w\). A word \(\sigma_{1}\ldots\sigma_{i}\sigma_{i+l}\ldots\sigma_{n}\) with \(i\in\{0,\ldots,n-2\},l\in\{2,\ldots,n-i\}\) is a _compression_ of \(w\). An ADFA \(\mathcal{A}\) has the _compression-extension-property_ (CEP) if for every \(w\in\mathcal{L}(\mathcal{A})\) with \(|w|=n\), where \(n\) is the length of the longest word in \(\mathcal{L}(\mathcal{A})\), there exists a compression \(w^{\prime}\) of \(w\) such that every extension of \(w^{\prime}\) is rejected by \(\mathcal{A}\).
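The CEP can be tested by brute force: for every longest accepted word, try all compressions and check whether some compression leads to a state from which no accepting state is reachable via a non-empty word. A Python sketch using the DFA class above (exponential in \(n\) and purely illustrative; letters are assumed to be single characters):

```python
from itertools import product

def rejects_all_extensions(dfa, q):
    # True iff no accepting state is reachable from q via a non-empty word
    seen = set()
    frontier = {dfa.delta[(q, a)] for a in dfa.alphabet}
    while frontier:
        p = frontier.pop()
        if p in dfa.accepting:
            return False
        seen.add(p)
        frontier |= {dfa.delta[(p, a)] for a in dfa.alphabet} - seen
    return True

def has_cep(dfa, n):
    """CEP test for an ADFA; n is the length of the longest accepted word."""
    for w in map(''.join, product(sorted(dfa.alphabet), repeat=n)):
        if not dfa.accepts(w):
            continue
        # Compressions w[:i] + w[i+l:] with i in {0,...,n-2}, l in {2,...,n-i}
        if not any(rejects_all_extensions(dfa, dfa.run(w[:i] + w[i + l:]))
                   for i in range(n - 1)
                   for l in range(2, n - i + 1)):
            return False
    return True
```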
We introduce a type of DFA already inspected in [12]. A regular language \(L\subseteq\Sigma^{*}\) is a _safety language_ if \(w\notin L\) implies \(wy\notin L\) for every \(y\in\Sigma^{*}\). A DFA \(\mathcal{A}\) is a _safety DFA_ if \(\mathcal{L}(\mathcal{A})\) is a safety language. A regular language \(L\subseteq\Sigma^{*}\) is a _co-safety language_ if the complement language \(\overline{L}\) of \(L\) is a safety language. A DFA \(\mathcal{A}\) is a _co-safety DFA_ if \(\mathcal{L}(\mathcal{A})\) is a co-safety language. Clearly, every non-trivial minimal safety DFA has exactly one rejecting state, and this state is a sink. Conversely, every non-trivial minimal co-safety DFA has exactly one accepting state, and this state is a sink.
We introduce the notions of intersection compositionality and primality of DFAs and languages, following the definitions in [12]: For \(k\in\mathbb{N}_{\geq 1}\), a DFA \(\mathcal{A}\) is _\(k\)-decomposable_ if there exist DFAs \(\mathcal{A}_{1},\ldots,\mathcal{A}_{t}\) with \(\mathcal{L}(\mathcal{A})=\bigcap_{i=1}^{t}\mathcal{L}(\mathcal{A}_{i})\) and \(|\mathcal{A}_{i}|\leq k\) for each \(i\in\{1,\ldots,t\}\), where \(t\in\mathbb{N}_{\geq 1}\). We call such DFAs \(\mathcal{A}_{1},\ldots,\mathcal{A}_{t}\) a _\(k\)-decomposition_ of \(\mathcal{A}\). We call \(\mathcal{A}\) _composite_ if \(\mathcal{A}\) is \(k\)-decomposable for a \(k<\operatorname{ind}(\mathcal{A})\), that is, if it is \((\operatorname{ind}(\mathcal{A})-1)\)-decomposable. Otherwise, we call \(\mathcal{A}\) _prime_.
We use compositionality or \(\cap\)-compositionality when referring to intersection compositionality.
When analyzing the compositionality of a given DFA \(\mathcal{A}\), it is sufficient to consider minimal DFAs \(\mathcal{B}\) strictly smaller than the minimal DFA of \(\mathcal{A}\) with \(\mathcal{L}(\mathcal{A})\subseteq\mathcal{L}(\mathcal{B})\). Thus, we define \(\alpha(\mathcal{A})=\{\mathcal{B}\mid\mathcal{B}\) is a minimal DFA with \(\operatorname{ind}(\mathcal{B})<\operatorname{ind}(\mathcal{A})\) and \(\mathcal{L}(\mathcal{A})\subseteq\mathcal{L}(\mathcal{B})\}\). Obviously, the DFA \(\mathcal{A}\) is composite iff \(\mathcal{L}(\mathcal{A})=\bigcap_{\mathcal{B}\in\alpha(\mathcal{A})}\mathcal{ L}(\mathcal{B})\). We call a word \(w\in(\bigcap_{\mathcal{B}\in\alpha(\mathcal{A})}\mathcal{L}(\mathcal{B})) \setminus\mathcal{L}(\mathcal{A})\) a _primality witness_ of \(\mathcal{A}\). Clearly, the DFA \(\mathcal{A}\) is composite iff \(\mathcal{A}\) has no primality witness.
We extend the notions of \(k\)-decompositions, compositionality, primality and primality witnesses to regular languages by identifying a regular language with its minimal DFA.
We denote the problem of deciding primality for a given DFA with Prime-DFA. We denote the restriction of Prime-DFA to DFAs recognizing a finite language with Prime-DFA\({}_{\text{fin}}\). Prime-DFA is in ExpSpace and is NL-hard [12].
We denote the connectivity problem in directed graphs, which is NL-complete [8], with STCON. We denote the restriction of STCON to graphs with a maximum outdegree of two with 2STCON. Clearly, 2STCON is NL-complete as well. We denote the problem of deciding minimality for a given DFA with Minimal-DFA. For \(k\in\mathbb{N}_{\geq 2}\), the problem kMinimal-DFA is the restriction of Minimal-DFA to DFAs with at most \(k\) letters. As mentioned in Section 1, the NL-completeness of kMinimal-DFA for \(k\in\mathbb{N}_{\geq 3}\) is folklore, while the NL-hardness of 2Minimal-DFA appears to be open.
## 3 Compositionality of Finite Languages
We characterize the compositionality of ADFAs and thereby of finite languages by proving:
Consider a minimal ADFA \(\mathcal{A}=(Q,\Sigma,q_{I},\delta,F)\) recognizing a non-empty language. Then \(\mathcal{A}\) is prime iff \(\mathcal{A}\) is linear and:
1. \(\sigma^{n}\in\mathcal{L}(\mathcal{A})\) for some \(\sigma\in\Sigma\), where \(n\in\mathbb{N}\) is the length of the longest word in \(\mathcal{L}(\mathcal{A})\), or
2. \(\mathcal{A}\) is a safety DFA and \(\mathcal{A}\) does not have the CEP.
To prove Theorem 3.1 we will consider five cases in turn.
First, if the ADFA \(\mathcal{A}\) is not linear we essentially have a surplus of states, allowing us to construct one DFA rejecting overlong words and one specific DFA for each of the remaining words also rejected by \(\mathcal{A}\). This approach fails with linear ADFAs. Nevertheless, we will come back to the idea of excluding words longer than a threshold value and tailoring a DFA for each word shorter than the threshold value which has to be rejected as well.
Second, if \(\mathcal{A}\) is linear and \(\sigma^{n}\in\mathcal{L}(\mathcal{A})\) holds the DFAs in \(\alpha(\mathcal{A})\) do not possess enough states to differentiate the words \(\sigma^{0},\dots,\sigma^{n}\) but have to accept \(\sigma^{n}\), which implies cyclic behavior on the words in \(\{\sigma\}^{*}\) from which primality follows.
Third, if there is no \(\sigma\in\Sigma\) with \(\sigma^{n}\in\mathcal{L}(\mathcal{A})\) and \(\mathcal{A}\) is not a safety DFA we can return to the idea of excluding words longer than a threshold value. For each of the words left to reject, it is possible to construct a DFA similar to \(\mathcal{A}\) but without the rejecting sink, which circles back to the rejecting non-sink.
Fourth, if there is no \(\sigma\in\Sigma\) with \(\sigma^{n}\in\mathcal{L}(\mathcal{A})\) and \(\mathcal{A}\) has the CEP we can utilize DFAs similar to \(\mathcal{A}\) possessing a rejecting sink, since the CEP allows us to skip over one state.
Fifth and finally, if \(\mathcal{A}\) is linear and \(\mathcal{A}\) is a safety DFA and does not have the CEP both of the above approaches fail. There is no state to circle back to, and for the word breaching the CEP skipping over states is not possible either, which implies primality.
Formalizing these five cases, we get:
Consider a minimal ADFA \(\mathcal{A}=(Q,\Sigma,q_{I},\delta,F)\) recognizing a non-empty language. Let \(n\in\mathbb{N}\) be the length of the longest word in \(\mathcal{L}(\mathcal{A})\). The following assertions hold:
1. \(\mathcal{A}\) is composite if \(\mathcal{A}\) is not linear.
2. \(\mathcal{A}\) is prime if \(\mathcal{A}\) is linear and \(\sigma^{n}\in\mathcal{L}(\mathcal{A})\) holds for some \(\sigma\in\Sigma\).
3. \(\mathcal{A}\) is composite if there is no \(\sigma\in\Sigma\) with \(\sigma^{n}\in\mathcal{L}(\mathcal{A})\) and \(\mathcal{A}\) is not a safety DFA.
4. \(\mathcal{A}\) is composite if there is no \(\sigma\in\Sigma\) with \(\sigma^{n}\in\mathcal{L}(\mathcal{A})\) and \(\mathcal{A}\) has the CEP.
5. \(\mathcal{A}\) is prime if \(\mathcal{A}\) is linear and \(\mathcal{A}\) is a safety DFA and \(\mathcal{A}\) does not have the CEP.
Formalizing the intuition given above for (a) and (b) is not too complex. Assertions (c)-(e) prove to be much harder. Thus, we commence by discussing (c) in Section 3.1 and (d) and (e) in Section 3.2. Henceforth, we consider a minimal ADFA \(\mathcal{A}=(Q,\Sigma,q_{I},\delta,F)\) recognizing the non-empty language \(L\) with \(\sigma^{n}\notin L\) for each \(\sigma\in\Sigma\), where \(n\in\mathbb{N}\) is the length of the longest word in \(L\). W.l.o.g. we assume \(Q=\{q_{0},\ldots,q_{n+1}\}\) with \(q_{j}\) being reachable from \(q_{i}\) for all \(i<j\), which implies \(q_{I}=q_{0}\) and \(q_{n}\in F\) with \(q_{n+1}\) being the rejecting sink. Finally, we define \(\Sigma_{i,j}=\{\sigma\in\Sigma\ \mid\ \delta(q_{i},\sigma)=q_{j}\}\).
### Linear non-safety ADFAs
We consider Claim 3.2 (c). Therefore, we assume that \(\mathcal{A}\) is not a safety DFA, which implies \(\{q_{n}\}\subseteq F\subset Q\setminus\{q_{n+1}\}\). Let \(d\in\{0,\ldots,n-1\}\) with \(q_{d}\notin F\).
We show the compositionality of \(\mathcal{A}\) by specifying an \((n+1)\)-decomposition of \(\mathcal{A}\). First, we construct DFAs rejecting words not in \(L\) that are not extensions of words \(u\in L,|u|=n\). Afterwards, we turn to such extensions, whose handling poses the main difficulty. Here, we first construct DFAs rejecting such extensions that are longer than a certain threshold value. For the remaining extensions we employ the idea of circling back to \(q_{d}\).
We begin by considering words not in \(L\) which are not extensions of words \(u\in L,|u|=n\). We introduce three DFA types handling these words.
First, let \(\mathcal{A}_{0}\) be the DFA constructed out of \(\mathcal{A}\) by removing \(q_{n}\), redirecting every transition \(q\to q_{n}\) to \(q_{0}\), and including \(q_{0}\) into the acceptance set. Clearly, \(\mathcal{A}_{0}\in\alpha(\mathcal{A})\) and \(\mathcal{A}_{0}\) rejects every \(w\notin L\) on which \(\mathcal{A}\) enters the rejecting sink prematurely, that is, without entering \(q_{n}\).
Second, let \(\hat{\mathcal{A}}_{d}\) be the DFA constructed out of \(\mathcal{A}\) by removing \(q_{n+1}\), redirecting every transition \(q_{i}\to q_{n+1}\) with \(i<n\) to \(q_{n}\) and every transition \(q_{n}\to q_{n+1}\) to \(q_{d}\). Clearly, \(\hat{\mathcal{A}}_{d}\in\alpha(\mathcal{A})\) and \(\hat{\mathcal{A}}_{d}\) rejects every \(w\notin L\) on which \(\mathcal{A}\) does not enter the rejecting sink.
Third, we construct DFAs rejecting extensions of words \(w\in L,|w|<n\) with \(\delta(q_{0},w)=q_{n}\). Let \(I=\{0,\ldots,n\}\). For each \(m\in\{1,\ldots,n-1\}\) let \(I_{m}=\{(i_{0},\ldots,i_{m})\in I^{m+1}\ \mid\ 0=i_{0}<\cdots<i_{m}=n\}\). For each \(\underline{i}\in I_{m}\) define \(\mathcal{A}_{\underline{i}}\) as in Figures 1a and 1b. It is easy to confirm that each \(\mathcal{A}_{\underline{i}}\) is in \(\alpha(\mathcal{A})\) and rejects extensions of words on which \(\mathcal{A}\) visits the states \(q_{i_{0}},\ldots,q_{i_{m}}\).
Lemma 3.3 formalizes the results concerning \(\mathcal{A}_{0}\), \(\hat{\mathcal{A}}_{d}\) and \(\mathcal{A}_{\underline{i}}\):
The following assertions hold:
1. \(\mathcal{A}_{0},\hat{\mathcal{A}}_{d},\mathcal{A}_{\underline{i}}\in\alpha( \mathcal{A})\), where \(\underline{i}\in\bigcup_{m=1}^{n-1}I_{m}\).
2. Consider a word \(w\notin L\), where \(w\) is not an extension of a word \(u\in L,|u|=n\). Then \(w\notin\mathcal{L}(\mathcal{A}_{0})\cap\mathcal{L}(\hat{\mathcal{A}}_{d})\cap \bigcap_{m=1}^{n-1}\bigcap_{\underline{i}\in I_{m}}\mathcal{L}(\mathcal{A}_{ \underline{i}})\) holds.
Next, we turn to the extensions of words \(u\in L,|u|=n\). We begin by constructing DFAs that taken together reject every word strictly longer than \(n+(n-2)\). Then we turn to the remaining extensions one by one, of which only a finite number are left to reject.
Let \(\sigma\in\Sigma\). Since \(\sigma^{n}\notin L\), there exists a value \(i\in\{1,\ldots,n\}\) with \(\sigma\notin\Sigma_{i-1,i}\). Define \(\mathcal{A}_{\sigma,i}\) as in Figure 1c. First, note that \(\mathcal{A}_{\sigma,i}\in\alpha(\mathcal{A})\) because a word rejected by \(\mathcal{A}_{\sigma,i}\) is strictly longer than \(n\) or is of length \(n\) with letter \(\sigma\) at position \(i\). Next, consider a word \(w=\sigma_{1}\ldots\sigma_{m}\in\Sigma^{m}\) such that \(\sigma_{j}=\sigma\) for a \(j\in\{1,\ldots,m\}\) with \(j\geq i\) and \(m\geq j+(n-i)\). After reading the prefix \(\sigma_{1}\ldots\sigma_{j-1}\) the DFA \(\mathcal{A}_{\sigma,i}\) is at least in state \(q_{i-1}\). Thus, after reading \(\sigma_{1}\ldots\sigma_{j}\) it is at least in state \(q_{i}\) and will reject after reading \(n-i\) more letters. Since \(m\geq j+(n-i)\), we have \(w\notin\mathcal{L}(\mathcal{A}_{\sigma,i})\). Lemma 3.4 formalizes this result:
Let \(\sigma\in\Sigma\) and \(i\in\{1,\ldots,n\}\) with \(\sigma\notin\Sigma_{i-1,i}\). The following assertions hold:
1. \(\mathcal{A}_{\sigma,i}\in\alpha(\mathcal{A})\).
2. _Let_ \(m\in\mathbb{N}\)_. Let_ \(w=\sigma_{1}\ldots\sigma_{m}\in\Sigma^{m}\) _such that_ \(\sigma_{j}=\sigma\) _for a_ \(j\in\{1,\ldots,m\}\) _with_ \(j\geq i\) _and_ \(m\geq j+(n-i)\)_. Then_ \(w\) _is rejected by_ \(\mathcal{A}_{\sigma,i}\)_._
Now consider a word \(w=\sigma_{1}\ldots\sigma_{m}\in\Sigma^{m}\) with \(m\geq n+(n-1)\) and \(\sigma_{1}\ldots\sigma_{n}\in L\). Note that Lemma 3.4 implies \(w\notin\mathcal{L}(\mathcal{A}_{\sigma_{n},i})\) where \(i\in\{1,\ldots,n\}\) with \(\sigma_{n}\notin\Sigma_{i-1,i}\). With this limitation of length, we only need DFAs to reject the extensions of words \(u\in L,|u|=n\) with a maximum length of \(n+(n-2)\). Consider such an extension \(w=\sigma_{1}\ldots\sigma_{m}\in\Sigma^{m}\). That is, \(n+1\leq m\leq n+(n-2)\) and \(\sigma_{1}\ldots\sigma_{n}\in L\). This implies \(\sigma_{i}\in\Sigma_{i-1,i}\) for each \(i\in\{1,\ldots,n\}\) but provides no information about the \(\sigma_{i}\) with \(i\in\{n+1,\ldots,m\}\). Therefore, we construct DFAs rejecting every such extension not conforming to a certain structure. This structure will be key to the further DFA constructions.
For a word \(w\in\Sigma^{*}\), let \(\mathcal{A}_{w}^{!}\) be the DFA rejecting exactly the words containing \(w\) as a subsequence. Clearly, the following holds:
Let \(w\notin L,|w|=n\). Then \(\mathcal{A}_{w}^{!}\in\alpha(\mathcal{A})\) holds.
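The construction behind \(\mathcal{A}_{w}^{!}\) is the standard subsequence automaton. A Python sketch using the DFA class from the preliminaries (for \(|w|=n\) it has \(n+1\) states, hence strictly fewer than \(\operatorname{ind}(\mathcal{A})=n+2\)):

```python
def subsequence_rejecting_dfa(w, alphabet):
    """Build A_w^!: accepts exactly the words NOT containing w as a
    subsequence. State q in {0, ..., |w|} records how much of w has been
    matched so far; state |w| is the rejecting sink."""
    n = len(w)
    delta = {}
    for q in range(n + 1):
        for a in alphabet:
            delta[(q, a)] = q + 1 if (q < n and a == w[q]) else q
    return DFA(frozenset(range(n + 1)), frozenset(alphabet),
               0, delta, frozenset(range(n)))
```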
With the DFAs \(\mathcal{A}_{w}^{!}\) for every \(w\notin L,|w|=n\) in hand, we only have to consider extensions of words \(u\in L,|u|=n\) with a maximum length of \(n+(n-2)\) for which every subsequence of length \(n\) is in \(L\).
Let \(w=\sigma_{1}\ldots\sigma_{m}\) be an extension satisfying these conditions. We construct a DFA \(\tilde{\mathcal{A}}_{w}\in\alpha(\mathcal{A})\) rejecting \(w\). We utilize the rejecting state \(q_{d}\) and define \(\tilde{\mathcal{A}}_{w}=(\tilde{Q}_{w},\Sigma,q_{0},\tilde{\delta}_{w},\hat{F}_{w})\) with \(\tilde{Q}_{w}=\{q_{0},\ldots,q_{n}\}\), \(\hat{F}_{w}=\tilde{Q}_{w}\setminus\{q_{d}\}\) and \(\tilde{\delta}_{w}(q_{0},w)=q_{d}\). Further, we have \(\tilde{\delta}_{w}(q_{0},v)=q_{d}\) for a \(v\in\Sigma^{*}\) only if \(\delta(q_{0},v)\in\{q_{d},q_{n+1}\}\), ensuring \(\tilde{\mathcal{A}}_{w}\in\alpha(\mathcal{A})\). In order to utilize \(q_{d}\) in this manner, the DFA \(\tilde{\mathcal{A}}_{w}\) simulates the behavior of \(\mathcal{A}\) for the states \(q_{0},\ldots,q_{d-1}\). The task then is to select the transitions of states \(q_{d},\ldots,q_{n}\).
If \(|\sigma_{d+1}\ldots\sigma_{m}|_{\sigma_{m}}\leq n-d\) the DFA \(\tilde{\mathcal{A}}_{w}\) can simply advance for occurrences of \(\sigma_{m}\) and
the first \(n-d-|\sigma_{d+1}\ldots\sigma_{m-1}|_{\sigma_{m}}\) occurrences of letters unequal to \(\sigma_{m}\). Thus, we only have to consider the case \(|\sigma_{d+1}\ldots\sigma_{m}|_{\sigma_{m}}>n-d\).
If \(\sigma_{n+1}\neq\sigma_{m}\) the DFA \(\tilde{\mathcal{A}}_{w}\) can advance for each letter in \(\Sigma\), ensuring \(\tilde{\delta}_{w}(q_{d},\sigma_{d+1}\ldots\sigma_{n})=q_{n}\). Further, we can define \(\tilde{\delta}_{w}(q_{n},\sigma_{n+1})=q_{n-[(m-1)-(n+2)+1]}\) and \(\tilde{\delta}_{w}(q_{n},\sigma_{m})=q_{d}\). Note that \(|\sigma_{n+2}\ldots\sigma_{m-1}|=(m-1)-(n+2)+1\). Since every subsequence of \(w\) of length \(n\) is in \(L\), we have \(\tilde{\delta}_{w}(q_{n-[(m-1)-(n+2)+1]},\sigma_{n+2}\ldots\sigma_{m-1})=q_{n}\).
The case \(\sigma_{n+1}=\sigma_{m}\) is more complex and needs a further case distinction, but the idea used above of circling back after reading an appropriate prefix can be employed again.
Lemma 3.6 summarizes these ideas:
Let \(w\in\Sigma^{*}\) with \(|w|>n\) such that \(w\in\mathcal{L}(\mathcal{A}_{v}^{!})\) for each \(v\notin L,|v|=n\) and \(w\in\bigcap_{\sigma\in\Sigma}\mathcal{L}(\mathcal{A}_{\sigma,i_{\sigma}})\), where for each \(\sigma\in\Sigma\) it is \(i_{\sigma}=\max(\{i\in\{1,\ldots,n\}\ \mid\ \sigma\notin\Sigma_{i-1,i}\})\). Then there exists a DFA \(\tilde{\mathcal{A}}_{w}\in\alpha(\mathcal{A})\) rejecting \(w\).
Lemmas 3.3-3.6 imply Claim 3.2 (c). To be more precise, we have \(\mathcal{L}(\mathcal{A})=\mathcal{L}(\mathcal{A}_{0})\cap\mathcal{L}(\hat{\mathcal{A}}_{d})\cap\bigcap_{m=1}^{n-1}\bigcap_{\underline{i}\in I_{m}}\mathcal{L}(\mathcal{A}_{\underline{i}})\cap\bigcap_{\sigma\in\Sigma}\mathcal{L}(\mathcal{A}_{\sigma,i_{\sigma}})\cap\bigcap_{w\in X^{!}}\mathcal{L}(\mathcal{A}_{w}^{!})\cap\bigcap_{w\in\tilde{X}}\mathcal{L}(\tilde{\mathcal{A}}_{w})\), where \(X^{!}=\{w\in\Sigma^{n}\ |\ w\notin L\}\) and \(\tilde{X}\) is the set of all extensions \(w\) of words \(u\in L,|u|=n\) with \(|w|\leq n+(n-2)\) for which every subsequence of length \(n\) is in \(L\). This proves the compositionality of \(\mathcal{A}\) and thereby Claim 3.2 (c).
### Linear safety ADFAs
Next, we consider Claim 3.2 (d) and (e). For (d) we argue that \(\mathcal{A}\) is composite if it has the CEP, even if \(\mathcal{A}\) is a safety DFA, which makes circling back impossible. For (e) we argue that \(\mathcal{A}\) is prime if it is a safety DFA and it does not have the CEP.
First, we consider (d). We assume that \(\mathcal{A}\) has the CEP and argue that this implies compositionality. Note that we can reuse the DFAs \(\mathcal{A}_{0}\) and \(\mathcal{A}_{\underline{i}}\), while \(\hat{\mathcal{A}}_{d}\) is not needed. This again leaves the task of rejecting the extensions of words \(w\in L,|w|=n\). But, since for every such word \(w=\sigma_{1}\ldots\sigma_{n}\) there now exist \(i\in\{0,\ldots,n-2\},l\in\{2,\ldots,n-i\}\) such that \(\delta(q_{0},\sigma_{1}\ldots\sigma_{i}\sigma_{i+l}\ldots\sigma_{n})\in\{q_{n},q_{n+1}\}\), we can construct a DFA \(\mathcal{A}_{i,l}\in\alpha(\mathcal{A})\) rejecting every extension of \(w\).
The DFA \(\mathcal{A}_{i,l}\) possesses states \(q_{0},\ldots,q_{i+l-2},q_{i+l},\ldots,q_{n+1}\). It simulates the behavior of \(\mathcal{A}\) for states \(q_{0},\ldots,q_{i-1}\), redirecting transitions \(q_{j}\to q_{i+l-1}\) to \(q_{i}\). From \(q_{i}\) it directly advances to \(q_{i+l}\) if a letter in \(\bigcup_{j=i+l}^{n+1}\Sigma_{i,j}\) is read, otherwise it advances to \(q_{i+1}\). The states \(q_{i},\ldots,q_{i+l-2}\) form a loop. For states \(q_{i+l},\ldots,q_{n}\), every transition leads to the direct successor state. The state \(q_{n+1}\) is a rejecting sink.
It is shown in the appendix that every extension of \(w\) is rejected by \(\mathcal{A}_{i,l}\), where \(i\) is the largest possible value belonging to \(w\), and that \(\mathcal{A}_{i,l}\in\alpha(\mathcal{A})\). Thus, \(\mathcal{L}(\mathcal{A})=\mathcal{L}(\mathcal{A}_{0})\cap\bigcap_{m=1}^{n-1}\bigcap_{\underline{i}\in I_{m}}\mathcal{L}(\mathcal{A}_{\underline{i}})\cap\bigcap_{i=0}^{n-2}\bigcap_{l=2}^{n-i}\mathcal{L}(\mathcal{A}_{i,l})\) holds, proving the compositionality of \(\mathcal{A}\) and thus (d).
Next, we consider (e) and assume that \(\mathcal{A}\) is a safety DFA and does not have the CEP. Thus, there is a \(w=\sigma_{1}\ldots\sigma_{n}\) such that \(\delta(q_{0},\sigma_{1}\ldots\sigma_{i}\sigma_{i+l}\ldots\sigma_{n})\notin\{q_{n},q_{n+1}\}\) holds for every \(i\in\{0,\ldots,n-2\},l\in\{2,\ldots,n-i\}\). This implies the existence of a letter \(\sigma\in\Sigma_{n-1,n}\) with \(\sigma\notin\Sigma_{j,n+1}\) for every \(j\in\{0,\ldots,n-1\}\). We show in the appendix that \(w\sigma\) is a primality witness of \(\mathcal{A}\), thus proving the primality of \(\mathcal{A}\) and thereby (e).
This completes our discussion of Claim 3.2 (a)-(e). Since they imply Theorem 3.1, we have characterized the compositionality of ADFAs and thereby of finite languages.
## 4 Complexity of Prime-DFA\({}_{\text{fin}}\)
After characterizing the compositionality of ADFAs and thereby of finite languages in Section 3, we now analyze the complexity of Prime-DFA\({}_{\text{fin}}\). We argue:
The problem Prime-DFA\({}_{\text{fin}}\) is NL-complete. The NL-completeness holds true even when restricting Prime-DFA\({}_{\text{fin}}\) to DFAs with at most two letters. \(\lrcorner\)
We begin by arguing that Prime-DFA\({}_{\text{fin}}\) is in NL, providing an NL-algorithm for Prime-DFA\({}_{\text{fin}}\) with Algorithm 1. The algorithm accepts in line 1 if the given DFA \(\mathcal{A}\) recognizes the empty language. Then lines 2-18 ensure that the minimal DFA belonging to \(\mathcal{A}\) is linear. Lines 19-22 ensure that \(\mathcal{A}\) is accepted if a letter \(\sigma\in\Sigma\) with \(\sigma^{n}\in L\) exists or else that \(\mathcal{A}\) is rejected if it is not a safety DFA. Finally, in lines 23-29 the CEP is checked for \(\mathcal{A}\).
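Algorithm 1 achieves these checks in logarithmic space. For reference, the following Python sketch tests the same conditions of Theorem 3.1 directly, with no space bound; it assumes a minimal ADFA recognizing a non-empty language over single-character letters and reuses the has_cep sketch from the preliminaries:

```python
def is_prime_fin(dfa, n):
    """Primality of a minimal ADFA per Theorem 3.1; n is the length of the
    longest accepted word."""
    # A minimal ADFA is linear iff |A| = n + 2.
    if len(dfa.states) != n + 2:
        return False
    # Condition (a): sigma^n is accepted for some letter sigma.
    if any(dfa.accepts(a * n) for a in dfa.alphabet):
        return True
    # Condition (b): A is a safety DFA (every rejecting state is a sink)
    # that does not have the CEP.
    is_safety = all(all(dfa.delta[(q, a)] == q for a in dfa.alphabet)
                    for q in dfa.states - dfa.accepting)
    return is_safety and not has_cep(dfa, n)
```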
The NL-hardness of Prime-DFA\({}_{\text{fin}}\) can be proven by L-reducing STCONDAG to Prime-DFA\({}_{\text{fin}}\), where STCONDAG is the restriction of STCON to acyclic graphs. The
L-reduction is similar to the L-reduction of STCON to the emptiness problem for DFAs.
## 5 Finite Languages under Different Notions of Compositionality
So far, we have only considered \(\cap\)-compositionality. Now we will define two further notions of compositionality and characterize the compositionality of finite languages for these notions.
For \(k\in\mathbb{N}_{\geq 1}\), a DFA \(\mathcal{A}\) is \(\cup\)-_decomposable_ (_\(k\)-DNF-decomposable_) if there exist DFAs \(\mathcal{A}_{1},\ldots,\mathcal{A}_{t}\) (\(\mathcal{A}_{1,1},\ldots,\mathcal{A}_{1,t_{1}},\ldots,\mathcal{A}_{s,1},\ldots,\mathcal{A}_{s,t_{s}}\)) with \(\mathcal{L}(\mathcal{A})=\bigcup_{i=1}^{t}\mathcal{L}(\mathcal{A}_{i})\) (\(\mathcal{L}(\mathcal{A})=\bigcup_{i=1}^{s}\bigcap_{j=1}^{t_{i}}\mathcal{L}(\mathcal{A}_{i,j})\)) and \(|\mathcal{A}_{i}|\leq k\) for every \(i\) (\(|\mathcal{A}_{i,j}|\leq k\) for every pair \(i,j\)). The further concepts introduced in Definition 2.1 are defined analogously.
In [12], it is correctly remarked that many results for \(\cap\)-compositionality can be trivially transferred to \(\cup\)-compositionality. For example, the complexity boundaries for Prime-DFA established in [12] also hold for \(\cup\)-compositionality. This does not hold true for results concerning language fragments that are not closed under complement. In particular, the complement language of a finite language is not finite, but co-finite. Thus, characterizing the \(\cup\)-compositionality of finite languages is equivalent to characterizing \(\cap\)-compositionality of co-finite languages.
Also in [12], the notion of compositionality allowing both union and intersection is suggested. Note that DNF-compositionality enforces a structure similar to a disjunctive normal form, but is as strong as unrestricted union-intersection compositionality. It is correctly remarked in [12] that union-intersection compositionality - and thus, DNF-compositionality - is strictly stronger than \(\cap\)-compositionality. Obviously, it is also strictly stronger than \(\cup\)-compositionality. It is less obvious whether languages exist that are DNF-composite, but are neither \(\cap\)- nor \(\cup\)-composite. We will see that there are finite languages witnessing this.
The following result characterizes the \(\cup\)- and DNF-compositionality of finite languages:
Consider a minimal ADFA \(\mathcal{A}=(Q,\Sigma,q_{I},\delta,F)\) recognizing a non-empty language. Let \(n\in\mathbb{N}\) be the length of the longest word in \(\mathcal{L}(\mathcal{A})\). The following assertions hold:
1. \(\mathcal{A}\) is \(\cup\)-prime iff \(\mathcal{A}\) is linear.
2. \(\mathcal{A}\) is DNF-prime iff \(\mathcal{A}\) is linear and there exists a \(\sigma\in\Sigma\) with \(\sigma^{n}\in\mathcal{L}(\mathcal{A})\).
These conditions are similar to the conditions in Theorem 3.1, but much simpler. Let \(\mathcal{A}\) and \(n\) be as required. It is easy to show \(\cup\)- and DNF-compositionality if \(\mathcal{A}\) is not linear.
The proof of \(\cup\)-primality if \(\mathcal{A}\) is linear relies on the observation that every minimal DFA \(\mathcal{B}\) with \(\mathcal{L}(\mathcal{B})\subseteq\mathcal{L}(\mathcal{A})\) and \(\operatorname{ind}(\mathcal{B})<\operatorname{ind}(\mathcal{A})\) has to have a rejecting sink. From this follows that no such DFA \(\mathcal{B}\) can accept a word \(w\in\mathcal{L}(\mathcal{A}),|w|=n\). Thus, \(\mathcal{A}\) is \(\cup\)-prime.
If \(\mathcal{A}\) is linear and there exists no \(\sigma\in\Sigma\) with \(\sigma^{n}\in\mathcal{L}(\mathcal{A})\), the DNF-compositionality of \(\mathcal{A}\) follows from [12, Example 3.2]. On the other hand, if \(\mathcal{A}\) is linear and there exists a \(\sigma\in\Sigma\) with \(\sigma^{n}\in\mathcal{L}(\mathcal{A})\), DNF-primality can be shown by adapting the proof of Claim 3.2 (b).
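To make the second condition concrete, both the length \(n\) of the longest accepted word and the membership test \(\sigma^{n}\in\mathcal{L}(\mathcal{A})\) are straightforward to compute. The following Python sketch is purely illustrative; the dictionary-based DFA encoding and all helper names are our own, and the sketch does not reflect the logarithmic-space implementation required by the NL argument:

```python
def accepts(delta, q0, accepting, word):
    """Run a total DFA, given as a dict delta[(state, letter)] -> state."""
    q = q0
    for a in word:
        q = delta[(q, a)]
    return q in accepting

def longest_word_length(delta, q0, accepting, alphabet):
    """Length n of the longest word accepted by an ADFA recognizing a
    non-empty language; restricted to 'useful' states (those from which an
    accepting state is reachable), the transition graph is acyclic."""
    useful, changed = set(accepting), True
    while changed:
        changed = False
        for (q, a), p in delta.items():
            if p in useful and q not in useful:
                useful.add(q)
                changed = True
    memo = {}
    def longest(q):                 # longest accepted continuation from q
        if q not in memo:
            best = 0 if q in accepting else -1
            for a in alphabet:
                p = delta[(q, a)]
                if p in useful:
                    best = max(best, longest(p) + 1)
            memo[q] = best
        return memo[q]
    return longest(q0)

def unary_longest_word_exists(delta, q0, accepting, alphabet):
    """Decide whether some sigma in the alphabet satisfies sigma^n in L(A)."""
    n = longest_word_length(delta, q0, accepting, alphabet)
    return any(accepts(delta, q0, accepting, [s] * n) for s in alphabet)
```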
As mentioned, Theorems 3.1 and 5.2 immediately imply:
There exists a finite language that is DNF-composite but \(\cap\)- and \(\cup\)-prime.
To summarize, Theorems 3.1 and 5.2 characterize the \(\cap\)-, \(\cup\)- and DNF-compositionality of ADFAs and thus of finite languages. Obviously, this characterizes the \(\cap\)-, \(\cup\)- and DNF-compositionality of co-finite languages as well. The results further imply the existence of languages that are DNF-composite but \(\cap\)- and \(\cup\)-prime.
## 6 2Minimal-DFA and S-Prime-DFA
We defined compositionality using the index of the given DFA. Thus, the compositionality of a DFA \(\mathcal{A}\) is a characteristic of \(\mathcal{L}(\mathcal{A})\). Slightly changing the definition, using the size instead of the index, turns compositionality of \(\mathcal{A}\) into a characteristic of \(\mathcal{A}\) itself. It is interesting to analyze the effects of this change, which results in the notion of S-compositionality.
Many results known for compositionality hold for S-compositionality as well. The characterization of finite languages in Section 3 and other results concerning language fragments [12, 9, 10] are valid with only minor technical modifications. In fact, [9, 10] already implicitly used S-compositionality instead of compositionality without discussing the differences. The upper complexity boundary of Prime-DFA holds for S-Prime-DFA as well. But the known lower boundary, the NL-hardness of Prime-DFA, cannot simply be adapted for S-Prime-DFA. The lower boundary for S-Prime-DFA is connected to Minimal-DFA, since non-minimal DFAs are trivially S-composite. Note that Prime-DFA is connected to the emptiness problem for DFAs in a similar manner [12].
We begin by discussing Minimal-DFA, proving the NL-hardness of 2Minimal-DFA. Then we formally introduce S-compositionality and prove the NL-hardness of the restriction 2S-Prime-DFA and thereby of S-Prime-DFA as well. We also prove the NL-hardness of the restriction 2Prime-DFA, so far only known for the unrestricted problem Prime-DFA.
### NL-hardness of 2Minimal-DFA
As mentioned, the NL-hardness and thus NL-completeness of kMinimal-DFA for \(k\in\mathbb{N}_{\geq 3}\) is folklore, while the NL-hardness of 2Minimal-DFA appears to be open. We prove:
The problem 2Minimal-DFA is NL-hard and thus NL-complete. \(\lrcorner\)
The NL-hardness of 3Minimal-DFA can be proven by L-reducing 2STCON to 3Minimal-DFA. This known reduction uses an additional letter and cannot be used to prove the NL-hardness of 2Minimal-DFA. We give an L-reduction of 2STCON not using an additional letter, proving the NL-hardness and thus the NL-completeness of 2Minimal-DFA.
Let \((G,s,t)\) be an input for 2STCON. That is, \(G=(V,E)\) is a graph with a maximum outdegree of two and \(s,t\in V\) are nodes of \(G\). We construct a DFA \(\mathcal{A}=(Q,\Sigma,q_{I},\delta,F)\) with \(\Sigma=\{0,1\}\), which is minimal iff there exists a path in \(G\) from \(s\) to \(t\). If \(s=t\) such a path exists trivially and we can construct the minimal DFA for the empty language. Thus, we only have to consider the case \(s\neq t\). W.l.o.g. we assume \(V=\{0,\ldots,n-1\}\) and \(s=0,t=n-1\).
Let \(\mathcal{A}^{\prime}=(Q^{\prime},\Sigma,0,\delta^{\prime},F^{\prime})\) be the DFA constructed out of \(G\) in the usual manner, that is, by turning nodes into states, edges into transitions, setting the state \(0\) as the initial state and \(n-1\) as the only accepting state. For \(\mathcal{A}\), we introduce the new states \(p_{0},\ldots,p_{n-1}\), called \(p\)-states, the new states \(q_{0},\ldots,q_{n-1}\) and \(q^{\prime}_{0}\), called \(q\)-states, and for each \(i\in Q^{\prime}\) the states \(i^{\prime}_{0},i^{\prime}_{1},i_{0},i_{1}\). We call the states \(i,i^{\prime}_{0},i^{\prime}_{1},i_{0},i_{1}\) for \(i\in Q^{\prime}\)\(v\)-states. We say that states \(p_{i},q_{i},i,i^{\prime}_{0},i^{\prime}_{1},i_{0},i_{1}\) for an \(i\in Q^{\prime}\) are located on the same layer. Figure 2 specifies the DFA \(\mathcal{A}\) constructed for the L-reduction. We now discuss the key ideas of this construction.
First, note that the idea of the \(p\)- and \(q\)-states is similar to the known L-reduction of 2STCON to 3Minimal-DFA. The \(p\)-states are used to access every state in \(Q\), thus avoiding unreachable states. The \(q\)-states are used to allow the return to \(0\) from every state.
Second, we cannot use an additional letter to switch from \(p_{i}\) to \(i\) to \(q_{i}\). Thus, letter \(1\) is used to leave the \(p\)-states and to exit \(q_{0}\) to state \(0\). Letter \(0\) is used to advance to the next layer in both the \(p\)- and \(q\)-states. To allow switching from the \(v\)-states to the \(q\)-states, we introduce for each \(i\in Q^{\prime}\) a component consisting of \(i\) and the two branches \(i^{\prime}_{0},i_{0}\) and \(i^{\prime}_{1},i_{1}\).
The states \(i^{\prime}_{0},i^{\prime}_{1}\) are waiting states used to prove the non-equivalence of \(q\)- and \(v\)-states. The states \(i_{0},i_{1}\) implement on the one hand the original transitions in \(\mathcal{A}^{\prime}\), that is, \(\delta(i_{j},j)=\delta^{\prime}(i,j)\), and on the other hand the transitions into the \(q\)-states, that is, \(\delta(i_{j},1-j)=q_{i}\).
Third, an extra \(q\)-state \(q^{\prime}_{0}\) is introduced, which is only directly accessible from \(q_{0}\). Without \(q^{\prime}_{0}\) the situation \(\delta(1_{1},1)=0=\delta(q_{0},1)\) and \(\delta(1_{1},0)=q_{1}=\delta(q_{0},0)\) would be possible, immediately implying the non-minimality of \(\mathcal{A}\). The introduction of \(q^{\prime}_{0}\) solves this problem.
Note that there is a path from \(0\) to \(n-1\) in \(\mathcal{A}\) iff there is such a path in \(G\). Using this it follows that \(\mathcal{A}\) is minimal iff there exists a path from \(0\) to \(n-1\) in \(G\). Since \(\mathcal{A}\) can obviously be constructed in logarithmic space, the given construction is indeed an L-reduction of 2STCON to 2Minimal-DFA. Consequently, 2Minimal-DFA is NL-hard.
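For reference, the source problem of the reduction, 2STCON, is plain s-t reachability restricted to graphs of out-degree at most two. A minimal deterministic checker (illustrative only; the relevant point about 2STCON is of course its NL-completeness, not this polynomial-time routine) could read:

```python
from collections import deque

def stcon(successors, s, t):
    """s-t reachability; for 2STCON every node has at most two successors.
    `successors` maps a node to the list of its direct successors."""
    seen, queue = {s}, deque([s])
    while queue:
        v = queue.popleft()
        if v == t:
            return True
        for w in successors.get(v, []):
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return False
```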
### Complexity of S-Prime-DFA
We end our discussion by using the construction presented in Section 6.1 to establish complexity boundaries for S-Prime-DFA. First, we define the notion of S-compositionality.
A DFA \(\mathcal{A}\) is _S-composite_ if there is a \(k\in\mathbb{N}_{\geq 1},k<|\mathcal{A}|\) such that \(\mathcal{A}\) is \(k\)-decomposable. Otherwise, \(\mathcal{A}\) is _S-prime_.
Figure 2: DFA \(\mathcal{A}\) constructed for the L-reduction of 2STCON to 2Minimal-DFA. The \(j\)-transitions exiting states of the form \(i_{j}\) are only indicated.
We denote the problem of deciding S-primality for a given DFA with S-Prime-DFA and the restriction of S-Prime-DFA to DFAs with at most \(k\in\mathbb{N}_{\geq 2}\) letters with kS-Prime-DFA.
Note that the proof used in [12] to show that Prime-DFA is in ExpSpace is applicable for S-Prime-DFA with only slight modifications. Next, note that the L-reduction of the emptiness problem for DFAs to Prime-DFA used in [12] to prove the NL-hardness of Prime-DFA relies on the fact that every DFA recognizing the empty language is prime. Thus, it is not easily adaptable for S-Prime-DFA. Instead, the NL-hardness of 2S-Prime-DFA is shown by using a reduction from 2STCON, which adapts the construction outlined in Section 6.1. We get:
The problems S-Prime-DFA and kS-Prime-DFA for \(k\in\mathbb{N}_{\geq 2}\) are in ExpSpace and they are NL-hard.
Further, we denote with kPrime-DFA the restriction of Prime-DFA to DFAs with at most \(k\in\mathbb{N}_{\geq 2}\) letters and remark that the results presented in [12] can be expanded to:
The problems Prime-DFA and kPrime-DFA for \(k\in\mathbb{N}_{\geq 2}\) are in ExpSpace and they are NL-hard.
This ends our discussion of the complexity of S-Prime-DFA and its restrictions, in which we have applied the construction outlined in Section 6.1 to prove NL-hardness.
## 7 Discussion
We studied the intersection compositionality, also denoted with \(\cap\)-compositionality, of regular languages. We added to the existing line of research focusing on fragments of the regular languages by analyzing the \(\cap\)-compositionality of ADFAs and thereby of finite languages. This research was in part motivated by existing results concerning the concatenation compositionality of finite languages.
We completely characterized the \(\cap\)-compositionality of ADFAs and thus finite languages. Using this characterization we proved the NL-completeness of Prime-DFA\({}_{\text{fin}}\). Thus, finite languages are significantly easier to handle under \(\cap\)-compositionality than under concatenation compositionality, where the respective primality problem for finite languages is NP-hard [18].
With notions of compositionality using union and both union and intersection already suggested in [12], we formally introduced the notions of \(\cup\)- and DNF-compositionality. We characterized the \(\cup\)- and DNF-compositionality of finite languages, which proved to be far simpler than the characterization of \(\cap\)-compositionality. These results also imply the characterization of the \(\cap\)-, \(\cup\)- and DNF-compositionality of co-finite languages.
This suggests that the key feature of finite languages regarding compositionality is not the finiteness of the languages per se, but rather the existence of only finitely many meaningfully different runs of the respective DFAs, a feature finite languages have in common not only with co-finite languages, but also with languages whose minimal DFAs allow for cycles only in accepting and rejecting sinks. A logical next step would therefore be the characterization of the compositionality of these DFAs.
We also note that in our proofs we employed \(\cap\)-compositionality results concerning a different language fragment, namely co-safety DFAs, studied in [12]. This suggests the possibility of employing the results concerning finite languages in future analyses and stresses the usefulness of working with language fragments. We provided one application of the
results concerning finite languages by using them to prove the existence of a language that is DNF-composite but \(\cap\)- and \(\cup\)-prime.
Furthermore, we presented a proof of the NL-hardness and thereby NL-completeness of the basic problem 2Minimal-DFA. While the NL-hardness of kMinimal-DFA for \(k\in\mathbb{N}_{\geq 3}\) is folklore, this result appears to be new.
We utilized this result to establish the known complexity boundaries of Prime-DFA for the here newly introduced problem S-Prime-DFA. We extended these results to the restrictions kPrime-DFA and kS-Prime-DFA for \(k\in\mathbb{N}_{\geq 2}\).
While it is interesting that a slight variation in the definition of \(\cap\)-compositionality, which does not affect the validity of most results, requires a whole new approach to establish the known lower complexity boundary, the big task of closing the doubly exponential complexity gap for Prime-DFA still remains. And now, this gap exists for S-Prime-DFA as well.
Therefore, with the analysis of language fragments, further notions of compositionality, and the complexity gaps for Prime-DFA and S-Prime-DFA, there is still need for further research.
|
2306.09818 | HiNeRV: Video Compression with Hierarchical Encoding-based Neural
Representation | Learning-based video compression is currently a popular research topic,
offering the potential to compete with conventional standard video codecs. In
this context, Implicit Neural Representations (INRs) have previously been used
to represent and compress image and video content, demonstrating relatively
high decoding speed compared to other methods. However, existing INR-based
methods have failed to deliver rate quality performance comparable with the
state of the art in video compression. This is mainly due to the simplicity of
the employed network architectures, which limit their representation
capability. In this paper, we propose HiNeRV, an INR that combines lightweight
layers with novel hierarchical positional encodings. We employ depth-wise
convolutional, MLP and interpolation layers to build the deep and wide network
architecture with high capacity. HiNeRV is also a unified representation
encoding videos in both frames and patches at the same time, which offers
higher performance and flexibility than existing methods. We further build a
video codec based on HiNeRV and a refined pipeline for training, pruning and
quantization that can better preserve HiNeRV's performance during lossy model
compression. The proposed method has been evaluated on both UVG and MCL-JCV
datasets for video compression, demonstrating significant improvement over all
existing INRs baselines and competitive performance when compared to
learning-based codecs (72.3% overall bit rate saving over HNeRV and 43.4% over
DCVC on the UVG dataset, measured in PSNR). | Ho Man Kwan, Ge Gao, Fan Zhang, Andrew Gower, David Bull | 2023-06-16T12:59:52Z | http://arxiv.org/abs/2306.09818v3 | # HiNeRV: Video Compression with Hierarchical Encoding based Neural Representation
###### Abstract
Learning-based video compression is currently one of the most popular research topics, offering the potential to compete with conventional standard video codecs. In this context, Implicit Neural Representations (INRs) have previously been used to represent and compress image and video content, demonstrating relatively high decoding speed compared to other methods. However, existing INR-based methods have failed to deliver rate quality performance comparable with the state of the art in video compression. This is mainly due to the simplicity of the employed network architectures, which limit their representation capability. In this paper, we propose HiNeRV, an INR that combines bilinear interpolation with novel hierarchical positional encoding. This structure employs depth-wise convolutional and MLP layers to build a deep and wide network architecture with much higher capacity. We further build a video codec based on HiNeRV and a refined pipeline for training, pruning and quantization that can better preserve HiNeRV's performance during lossy model compression. The proposed method has been evaluated on both UVG and MCL-JCV datasets for video compression, demonstrating significant improvement over all existing INRs baselines and competitive performance when compared to learning-based codecs (72.3% overall bit rate saving over HNeRV and 43.4% over DCVC on the UVG dataset, measured in PSNR). The source code of HiNeRV will be made available at [https://github.com/hmkx/HiNeRV](https://github.com/hmkx/HiNeRV).
## 1 Introduction
Implicit neural representations (INRs) have become popular due to their ability to represent and encode various scenes [38], images [45] and videos [45; 10]. INRs typically learn a coordinate to value mapping (e.g. mapping a pixel or voxel index to its color and/or occupancy) to support implicit reconstruction of the original signal. While these representations are usually instantiated as multilayer perceptrons (MLPs), existing MLP-based networks can only represent video content with low reconstruction quality and speed [10]. To address this limitation, recent work has employed Convolutional Neural Networks (CNNs) to perform a frame index to video frame mapping [10; 30; 5; 26; 9]. These CNN-based INRs are capable of reconstructing video content with higher quality and with a faster decoding speed, when compared to MLP-based approaches [45]. Nonetheless, these enhanced INR-based algorithms remain significantly inferior to state-of-the-art standard-based [51; 46; 7] and learning-based codecs [27; 42; 28; 29; 36]. For example, none of these INR-based codecs can compete with HEVC x265 [2] (_veryslow_ preset) when employed for video compression.
Most INR-based models employ conventional convolutional layers or sub-pixel convolutional layers [43], which are less parameter efficient, and hence limit representation capacity within a given storage budget. In addition, in video INRs, most existing work employs Fourier-based positional encoding;
this has a long training time and can only achieve sub-optimal reconstruction quality [10; 30; 5; 9]. In video compression, the training of INR models is equivalent to the encoding process, implying that most INR-based codecs require a long encoding runtime to obtain a satisfactory rate-quality performance [38; 10]. However, we noted that some recent non-video INR models have utilized feature grids or a combination of grids and MLPs as the representation to speed up the convergence of INRs; this has improved the encoding speed by several orders of magnitude [15; 47; 39; 8].
In this paper, we propose a new INR model based on Hierarchically-encoded Neural Representation for video compression, HiNeRV. We replace the commonly used sub-pixel convolutional layers [43] employed for upsampling in existing INRs [10; 30; 5; 26; 9] with a new upsampling layer that combines bilinear interpolation with a hierarchical encoding sampled from multi-resolution local feature grids. These local grids offer increased parameter efficiency, as the number of parameters increases with the upsampling factor instead of the resolution. Moreover, the network is primarily based on MLPs and depth-wise convolutional layers (rather than dense convolutional layers). This enhances the representation capability and maximizes the performance for a given parameter count. This architectural design allows us to build a much deeper and wider network which offers a significantly better video encoding performance when compared to state-of-the-art INR-based coding approaches.
Furthermore, instead of learning a frame- [10] or patch-wise [5] representation, we show that by simply training with overlapped patches, HiNeRV can be seamlessly switched between both representation types, achieving a unified representation with improved performance compared to both frame-based and patch-based settings. This also provides flexibility for hardware implementation, where encoding and decoding processes can be performed either using frames to optimize the computational complexity, or as patches to minimize the memory footprint.
To achieve competitive coding performance, we also refine the model compression pipeline in [10], where model training is followed by pruning and fine-tuning, before quantization is applied. First, we use an adaptive pruning technique to reduce the negative impact of model pruning. Second, quantization-aware training is applied to fine-tune the model before quantization. This enables lower bit depth quantization, which achieves an improved rate-distortion trade-off.
The proposed method has been tested against existing INR-based video coding methods and state-of-the-art conventional and learning-based video codecs on the UVG [37] and MCL-JCV [50] datasets. Notwithstanding the fact that HiNeRV has not been end-to-end optimized for video compression (i.e. pruning, quantization and entropy coding are not included in the training loop), it still significantly outperforms all INR-based methods (e.g., 72.3% overall BD-rate gain over HNeRV on the UVG database, measured in PSNR), and is also competitive compared with existing conventional and end-to-end learning-based video codecs (e.g., 43.4% over DCVC, 38.7% over x265 _veryslow_).
The primary contributions of this work are summarized below:
1) We propose HiNeRV, a new INR employing hierarchical encoding based neural representation.
2) We employ a unified representation by adding padding, which trades a small computation overhead for additional flexibility and performance gain.
3) We build a video codec based on HiNeRV and refine the model compression pipeline to better preserve the reconstruction quality of INRs by using adaptive pruning and quantization-aware training.
Figure 1: (Left) Visual comparison between HNeRV [9] and HiNeRV (ours) for compressed content (cropped). HiNeRV offers improved visual quality with approximately half the bit rate compared to HNeRV (PSNR and bitrate values are for the whole sequence). (Right) Video regression with different epochs for a representation task. HiNeRV (ours) with only 37 epochs achieves similar reconstruction quality to HNeRV [9] with 300 epochs.
4) The compression performance of the proposed method is superior to existing INR models, and is comparable to many conventional/learning-based video coding algorithms. As far as we are aware, it is the first INR-based codec to significantly outperform HEVC (x265 _veryslow_) [40].
## 2 Related work
### Video compression
Video compression has long been a fundamental task in the field of computer vision and multimedia processing. As alternatives to the already popularized conventional codecs such as H.264/AVC [51], H.265/HEVC [46], and H.266/VVC [7], there has been a rapid increase in the adoption of deep learning techniques for video compression in recent years. This has typically involved replacing certain modules (e.g., motion compensation [3; 53], transform coding [54; 16] and entropy coding [6]) in the conventional pipeline with powerful learning-based models [52; 36].
In contrast, there has also been significant activity focused on the development of new coding architectures which allow end-to-end optimization. Lu et al. [33] proposed DVC, a differentiable version of H.264, that was further extended to enable major operations in both the pixel and the feature spaces [21][20]. An alternative approach has focused on conditional [27; 31] instead of predictive coding to reduce the overall bitrate by estimating the probability model over several video frames. Furthermore, the characteristics of the differentiable frameworks have been exploited by [19; 48; 23], where both encoder and decoder (typically signaled by a model stream containing updated parameters) are overfitted to the video data during the evaluation to further enhance compression efficiency.
While effective, with some recent work [52; 44; 29] claiming to outperform the latest compression standards, these methods still follow the pipeline of conventional codecs, which may constrain the development of neural video compression methods. Moreover, learning-based video compression methods tend to be much more computationally complex and often yield much slower decoding speeds than conventional codecs. This often renders them impractical for real-time applications, especially considering the prevalence of high-quality and high-resolution videos consumed nowadays.
### Implicit neural representation
Implicit neural representations (INRs) are being increasingly used to represent complicated natural signals such as images [45; 11; 13; 24], videos [45; 10], and vector-valued, volumetric content [38]. This type of approach benefits from incorporating positional encoding - a technique that embeds the positional input into a higher-dimensional feature space. Periodic functions [45; 34; 38] have first been utilized to improve the network's capability for learning high frequency information, and grid features [47; 39; 8] have then been applied to address their slow convergence speed and further improve the reconstruction quality.
More recently, Neural Representations for Videos (NeRV) [10] has re-formulated the INR for video signals to be frame-wise, achieving competitive reconstruction performance with very high decoding speed. NeRV approaches have inspired a trend of utilising CNNs to encode the RGB values of videos with 1D frame coordinates [10; 30; 26; 9] or with 3D patch coordinates [5], and have demonstrated promise in various video tasks, including denoising [10], frame interpolation [10; 30; 26; 9], inpainting [5; 9], super-resolution [12] and video compression [10; 30; 5; 26; 9].
When INRs are applied for image and video compression, they typically convert the signal compression task into a model compression problem by incorporating weight pruning, quantization and entropy coding [17]. NeRV [10] and related works [30; 5; 26; 9] adopt the above approach for video compression. Although these have demonstrated fast decoding capability, they do not yet achieve a rate-distortion performance comparable to either conventional or learning-based codecs.
## 3 Method
Following the approach adopted in previous work [45; 10], we consider a video regression task where a neural network encodes a video \(V\) by mapping coordinates to either individual frames, patches or pixels, where \(V\in\mathbb{R}^{T\times H\times W\times C}\), \(T\), \(H\), \(W\) and \(C\) are the number of frames in \(V\), the height, the width and the number of channels of the video frames, respectively.
Fig. 2 (top) illustrates the high level structure of the proposed model, HiNeRV, which contains a base encoding layer, a stem layer, \(N\) HiNeRV blocks, and a head layer. In HiNeRV, each RGB video frame is spatially segmented into patches of size \(M\times M\), where each patch is reconstructed by one forward pass. The model first takes a patch coordinate \((t,h,w)\) to compute the base feature maps \(X_{0}\) with size \(M_{0}\times M_{0}\times C_{0}\). Here the coordinates always refer to integer indices, such that \(0\leq t<T\), \(0\leq h<\frac{H}{M}\) and \(0\leq w<\frac{W}{M}\). The following \(N\) HiNeRV blocks then upsample and process the feature maps progressively, where the \(n\)-th block produces the intermediate feature maps \(X_{n}\) that have the size \(M_{n}\times M_{n}\times C_{n}\) (\(M_{N}=M\)). Finally, a head layer is used to project the feature maps to the output, \(Y\), with the target size \(M\times M\times C\).
### Base encoding and stem
HiNeRV first maps the input _patch_ coordinates into the base feature maps, \(X_{0}\), by
\[X_{0}=F_{stem}(\gamma_{base}(t,h,w)). \tag{1}\]
To compute the base feature maps, we first calculate the _pixel_ coordinates (related to the corresponding video frame) of the patch. For a patch with size \(M_{0}\times M_{0}\), the frame-based pixel coordinates \((v_{frame},u_{frame})\) can be computed by the patch-based pixel coordinates \((v_{patch},u_{patch})\) for \(0\leq v_{patch},u_{patch}<M_{0}\), such that \(v_{frame}=h\times M_{0}+v_{patch}\) and \(u_{frame}=w\times M_{0}+u_{patch}\). Then, by using the pixel coordinates, the positional encoding \(\gamma_{base}(t,h,w)\) can be interpolated from the learned feature grids [26]. After that, we employ a stem convolutional layer \(F_{stem}\) with Layer Normalization [4] for projecting the feature maps to a desired number of channels \(C_{0}\).
It is noted that most existing INRs for video [10; 30; 5; 9] utilize the \(sin\) and \(cos\) functions to map coordinates into the positional encoding. However, such encoding contains only positional information, which requires additional layers (e.g. MLPs) to transform it into informative features. In contrast, we adopt feature grids [26], as they produce encodings that contain richer information without requiring MLPs for encoding generation. Specifically, we use the multi-resolution temporal grids that were introduced in FFNeRV [26], where the various feature grids have different temporal resolutions. In FFNeRV, linear interpolation over the temporal dimension is used to obtain a slice that is used as the input feature map. In our case, we use a similar approach for feature map interpolation, but extract patches from the interpolated maps.
Although both high temporal resolution and a large number of channels are desirable for enhancing the expressiveness of the feature grids, this can result in greater model sizes and hence higher bit rates when the model is used for compression tasks. To maintain a compact multi-resolution grid, we increase the number of channels when reducing the temporal resolution at each grid level, i.e. the size of a grid is \(\lfloor\frac{T_{grid}}{2^{l}}\rfloor\times H_{grid}\times W_{grid}\times(C_{ grid}\times 2^{l})\), for \(0\leq l<L_{grid}\). Here \(L_{grid}\) is the number of grids and \(T_{grid}\times H_{grid}\times W_{grid}\times C_{grid}\) is the size of the first level grid.
Figure 2: The architecture of HiNeRV.
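A minimal PyTorch-style sketch of these multi-resolution temporal grids follows; the class name, the initialization scale and the two-nearest-frame interpolation details are our own assumptions rather than the released implementation:

```python
import torch
import torch.nn as nn

class TemporalGrids(nn.Module):
    """Multi-resolution temporal grids following the sizing rule above:
    level l stores a grid of shape (T_grid // 2**l, H_grid, W_grid, C_grid * 2**l)."""
    def __init__(self, levels, T_grid, H_grid, W_grid, C_grid):
        super().__init__()
        self.grids = nn.ParameterList([
            nn.Parameter(0.01 * torch.randn(max(T_grid // 2 ** l, 1),
                                            H_grid, W_grid, C_grid * 2 ** l))
            for l in range(levels)])

    def forward(self, t_norm):
        """t_norm in [0, 1]: linearly interpolate each grid along the temporal
        dimension and concatenate the resulting slices along the channels."""
        feats = []
        for g in self.grids:
            pos = t_norm * (g.shape[0] - 1)     # continuous temporal position
            lo = int(pos)
            hi = min(lo + 1, g.shape[0] - 1)
            w = pos - lo
            feats.append((1 - w) * g[lo] + w * g[hi])   # (H_grid, W_grid, C_l)
        return torch.cat(feats, dim=-1)
```

An \(M_{0}\times M_{0}\) patch of the interpolated slice is then cut out at \((h,w)\) and passed through the stem layer \(F_{stem}\).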
### HiNeRV block
The obtained base feature maps are then processed by \(N\) HiNeRV blocks, which progressively upsample and process the feature maps. Specifically, the \(n\)-th HiNeRV block, where \(0<n\leq N\), upsamples the input feature maps \(X_{n-1}\) with size \(M_{n-1}\times M_{n-1}\times C_{n-1}\) through bilinear interpolation \(U_{n}\) with a scaling factor \(S_{n}\), such that \(M_{n}=M_{n-1}\times S_{n}\). We use bilinear interpolation, mainly due to its low computational cost and its capability to compute smooth upsampled maps. The HiNeRV block then computes a hierarchical encoding \(\gamma_{n}(t,h,w)\), matches its number of channels by a linear layer \(F_{enc}\) and adds it to the upsampled feature maps, where \(\gamma_{n}(t,h,w)\) matches the upsampled feature map size (see Section 3.4). Finally, it applies a set of network layers \(F_{n}\) to enhance the representation to obtain the output \(X_{n}\), where we specify the number of layers by \(D_{n}\). For all the HiNeRV blocks except the first one (\(1<n\leq N\)), the first layer in \(F_{n}\) also reduces the number of channels by a factor, \(R\), such that \(C_{n}=\lfloor\frac{C_{1}}{R^{n-1}}\rfloor\), to save the computational cost for processing high spatial resolution maps.
\[X_{n}= F_{n}(U_{n}(X_{n-1})+F_{enc}(\gamma_{n}(t,h,w))),0<n\leq N \tag{2}\]
Figure 2 (bottom-left) shows the structure of HiNeRV block. Due to its observed superior performance, we employ ConvNeXt [32] as the network block in \(F_{n}\), a combination of the MLP layer with depth-wise convolution. We also apply Layer Normalization [4] after interpolation and before the MLP layers, and we only use shortcut connections when the input and output dimensions are matched.
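As an illustration of Eq. (2) and of the ConvNeXt-based \(F_{n}\), a PyTorch-style sketch is given below; the class names, the `enc_dim` argument and the exact placement of the normalization are our own assumptions, not the released code:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvNeXtLayer(nn.Module):
    """Depth-wise convolution followed by an MLP (channel-last); a shortcut
    is used only when the input and output widths match."""
    def __init__(self, c_in, c_out, expansion=4, kernel=3):
        super().__init__()
        self.dw = nn.Conv2d(c_in, c_in, kernel, padding=kernel // 2, groups=c_in)
        self.norm = nn.LayerNorm(c_in)
        self.mlp = nn.Sequential(nn.Linear(c_in, expansion * c_in), nn.GELU(),
                                 nn.Linear(expansion * c_in, c_out))
        self.skip = c_in == c_out

    def forward(self, x):                      # x: (B, H, W, C_in)
        y = self.dw(x.permute(0, 3, 1, 2)).permute(0, 2, 3, 1)
        y = self.mlp(self.norm(y))
        return x + y if self.skip else y

class HiNeRVBlock(nn.Module):
    """One block of Eq. (2): bilinear upsampling U_n, addition of the
    projected hierarchical encoding F_enc(gamma_n), then the layers F_n."""
    def __init__(self, c_in, c_out, scale, depth, enc_dim):
        super().__init__()
        self.scale = scale
        self.norm = nn.LayerNorm(c_in)         # applied after interpolation
        self.f_enc = nn.Linear(enc_dim, c_in)
        self.layers = nn.ModuleList(
            [ConvNeXtLayer(c_in if d == 0 else c_out, c_out) for d in range(depth)])

    def forward(self, x, enc):   # x: (B, C_in, M, M); enc: (B, M*S, M*S, enc_dim)
        x = F.interpolate(x, scale_factor=self.scale, mode='bilinear',
                          align_corners=False)
        x = x.permute(0, 2, 3, 1)              # to channel-last
        x = self.norm(x) + self.f_enc(enc)
        for layer in self.layers:
            x = layer(x)
        return x.permute(0, 3, 1, 2)
```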
### Head layer
The final output patch \(Y\) is computed by applying a linear layer with sigmoid activation, denoted by \(F_{head}\), on the output of the \(N\)-th HiNeRV block,
\[Y=F_{head}(X_{N}) \tag{3}\]
### Upsampling with the hierarchical encoding
Existing NeRV-based approaches [10; 30; 5; 26; 9] use a sub-pixel convolutional layer [43] for feature map up-scaling. However, this has a high parameter complexity: \(K^{2}\times S^{2}\times C_{1}\times C_{2}\), where \(K\), \(S\), \(C_{1}\) and \(C_{2}\) represent the kernel size, the upsampling factor, the number of input and output channels, respectively. While neighboring features are highly correlated, the convolutional layer does not take this into account and learns a dense weight matrix to perform upsampling. This is inefficient especially when the model size is the concern for tasks like compression [10], because the low parameter efficiency limits the maximum depth and width of the networks, and thus the capacity.
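As a concrete example of this cost, a single \(3\times 3\) sub-pixel convolutional layer upsampling by \(S=2\) between two 128-channel layers already requires \(3^{2}\times 2^{2}\times 128\times 128=589{,}824\) parameters, whereas bilinear interpolation is parameter-free; the saved budget can instead be reinvested in network depth and width.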
Previous work [10] has shown that bilinear interpolation does not perform as well as convolutional layers. However, we observed that it is actually a better choice when the parameter count is fixed. By performing interpolation, we can utilize the saved parameter budget to build a network with higher capacity. Moreover, we introduce a novel encoding approach (hierarchical encoding) which further boosts the upsampling capability of bilinear interpolation. Performed locally, this encodes all output pixels in a hierarchical manner by stacking multiple upsampling layers. During upsampling, the upsampled feature maps are first produced through bilinear interpolation with an up-scaling factor \(S_{n}\). Then, for all frame-based pixel coordinates \((v_{frame},u_{frame})\) in the upsampled feature maps, we compute the corresponding local pixel coordinates by \(v_{local}=v_{frame}\mod S_{n}\) and \(u_{local}=u_{frame}\mod S_{n}\), and employ them to compute the encoding.
It is noted that the proposed encoding approach is similar to applying a convolutional layer over a constant feature map. To further enhance the capacity of this encoding, we model the feature grids as multi-temporal resolution grids [26], which can provide richer temporal information, similar to the base encoding. In the \(n\)-th HiNeRV block, there are \(L_{local}\) levels of grids and the \(l\)-level grid has a size of \(\lfloor\frac{T_{local}}{2^{l}}\rfloor\times S_{n}\times S_{n}\times(\lfloor\frac{C_{local}}{R^{n-1}}\rfloor\times 2^{l})\). The size of the local grids is scaled with the factor \(S_{n}\) and can be adjusted by the hyper-parameters \(T_{local}\) and \(C_{local}\). The number of channels of the grids is also scaled in proportion to the width of the HiNeRV block, i.e. by the reduction factor \(R\). To obtain the hierarchical encoding, we perform trilinear interpolation to extract encodings from all levels, then concatenate the encodings and apply a linear layer \(F_{enc}\) to match the encoding channels to the feature maps. To distinguish the grids for interpolating the hierarchical encoding from the one for the base encoding, i.e. the _temporal grids_, we refer to these grids as the _temporal local grids_, because the hierarchical encoding is interpolated from these grids by using the local pixel coordinates. In Section 4.3, we demonstrate that the hierarchical encoding contributes to the superior performance of HiNeRV. The process of upsampling with local encoding is shown in Figure 2 (bottom right).
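The tiling of the hierarchical encoding over the upsampled feature maps reduces to a modulo-indexing gather. A small sketch follows, assuming the per-frame \(S_{n}\times S_{n}\times C\) slice has already been obtained by trilinear interpolation from one level of the temporal local grids (multiple levels are concatenated exactly as for the base encoding):

```python
import torch

def tile_hierarchical_encoding(grid_slice, out_h, out_w):
    """grid_slice: (S, S, C) encoding for the current frame; every output
    pixel (v, u) receives grid_slice[v % S, u % S]."""
    S = grid_slice.shape[0]
    v = torch.arange(out_h) % S
    u = torch.arange(out_w) % S
    return grid_slice[v][:, u]              # (out_h, out_w, C)
```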
### Unifying frame-wise and patch-wise representations
Recent publications on INR for video can be classified into frame-wise [10; 30; 26; 9] or patch-wise representations [5]. Actually, in many of these networks, the initial feature maps can be easily computed either frame-wise or patch-wise, as the positional encoding depends only on the corresponding pixel coordinates. However, these two types of representations are not switchable because of boundary effects. In HiNeRV, we adopt a simple technique to unify frame-wise and patch-wise representations. When configuring HiNeRV as a patch-wise representation, we perform computation in overlapped patches, referring to the overlapped parts as padding; the number of padding pixels depends on various hyper-parameters (e.g. the kernel sizes and/or the number of bilinear interpolation/convolutional layers). Such overlapped patches have previously been used for tasks such as super-resolution [22], but have not been applied in NeRV-based methods. When performing encoding in patches and without proper padding, missing values for operations such as convolution can result in discontinuities at the patch boundaries. Moreover, networks trained in patches do not perform well when inferencing in frames due to boundary effects. In our implementation, we perform intermediate computation with padded patches and crop the non-overlapped parts as the output patches, while with the frame configuration, paddings are not required. By adding paddings, we ensure generation of the same output for both frame-wise and patch-wise configurations.
Although adding paddings introduces additional computational overheads, it does provide the following benefits: (i) it allows parallel computation within the same frame, and reduces the minimum memory requirement for performing forward and backward passes. This shares a design concept with conventional block-based video codecs, and can potentially benefit the compression of immersive video content with higher spatial resolutions (where performing frame-wise calculation may not be supported by the hardware); (ii) It improves the training efficiency when compared to a frame-wise representation, as we can randomly sample patches during training [22], which can better approximate the true gradient and speed up the convergence; (iii) It can also enhance the final reconstruction quality compared to a patch-wise representation without suffering boundary effects. By applying the above technique, HiNeRV can be flexibly configured as either frame-based or patch-based representation without retraining. Our ablation study verifies these benefits by comparing this approach to both the frame-based and patch-based variants (see Section 4.3).
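A minimal NumPy sketch of the overlapped-patch extraction used in the patch-wise configuration (the zero padding at the frame border and the helper name are our own assumptions):

```python
import numpy as np

def extract_padded_patches(frame, patch, pad):
    """Split an (H, W, C) frame into non-overlapping patch x patch tiles,
    each extended by `pad` overlapping pixels on every side; zero padding
    is applied at the frame border."""
    H, W, _ = frame.shape
    padded = np.pad(frame, ((pad, pad), (pad, pad), (0, 0)))
    return [padded[y:y + patch + 2 * pad, x:x + patch + 2 * pad]
            for y in range(0, H, patch) for x in range(0, W, patch)]
```

After the forward pass, only the central `patch` \(\times\) `patch` region of each output is kept, which is what guarantees identical outputs for the frame-wise and patch-wise configurations.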
### The model compression pipeline
To further enhance the video compression performance, we refined the original model compression pipeline in NeRV [10], which has been used in a series of related work [30; 5; 26; 9]. In [10], model training is followed by weight pruning with fine-tuning to lower model sizes. Post-training quantization is then utilized to reduce the precision of each weight, while entropy coding is further employed for lossless compression. In this work, we made two primary changes to this pipeline: (i) applying an adaptive weighting to the parameters in different layers for pruning, and (ii) using quantization-aware training with QuantNoise [14] to minimize the quantization error.
**Model Pruning** in NeRV [10; 30; 5; 26; 9] is typically performed globally with respect to the magnitude of individual weights, with fine-tuning applied after pruning [18]. While various non-magnitude based pruning methods exist (e.g. OBD [25]), here we developed a simple, modified magnitude-based method for network pruning. Intuitively, we assume that wider layers have more redundancy within their parameters; hence pruning these layers tends to have less impact than pruning the shallower ones. To alleviate the negative impact of pruning, we weight each neuron using both its L1 norm and the size of the corresponding layer. Specifically, for a layer with \(P\) parameters, \(\theta_{p},0<p\leq P\), we compute a score, \(\frac{|\theta_{p}|}{P^{\lambda}}\), which reflects the neuron importance, where \(\lambda\) is a hyper-parameter (0.5). The pruning is then performed as usual. By applying this weighting scheme, layers with fewer parameters, such as depth-wise convolutional layers and output layers, have less chance to be pruned, while layers in the early stage of the network are more likely to be pruned.
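A sketch of this weighted global pruning (PyTorch-style; the helper name and the strict-inequality tie-breaking at the threshold are our own choices):

```python
import torch

def adaptive_prune_masks(params, prune_ratio, lam=0.5):
    """Global magnitude pruning with the layer-size weighting above: each
    weight theta in a layer with P parameters is scored |theta| / P**lam, and
    the globally lowest-scoring fraction `prune_ratio` is removed.
    `params` is a dict mapping layer names to weight tensors."""
    scores = torch.cat([(w.detach().abs() / w.numel() ** lam).flatten()
                        for w in params.values()])
    k = max(int(prune_ratio * scores.numel()), 1)
    threshold = torch.kthvalue(scores, k).values
    return {name: (w.detach().abs() / w.numel() ** lam) > threshold
            for name, w in params.items()}
```

For example, `adaptive_prune_masks(dict(model.named_parameters()), 0.15)` would correspond to the 15% pruning ratio used in Section 4.2.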
**Weight quantization** plays an important role in model compression, significantly reducing the final model size [10]. While the results in [10] have shown that the quantization error is not significant when reducing the weight bit depth to 8 bits, further increasing the quantization level is still meaningful for compression tasks. Unlike other related works [10; 30; 5; 26; 9] that adopted 8 bit quantization, we found that an improved rate-distortion trade-off can be achieved by using 6 bit quantization if a quantization-aware training methodology is applied. In particular, we perform a short fine-tuning with QuantNoise [14] after weight pruning, which can effectively reduce the quantization error.
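A sketch of such a QuantNoise-style forward pass (our own simplification of [14]; the symmetric per-tensor scale is an assumption):

```python
import torch

def quant_noise_weights(w, bits=6, noise_ratio=0.1):
    """Quantize a random subset of the weights to `bits` bits during training,
    with a straight-through gradient; at inference all weights are quantized."""
    qmax = 2 ** (bits - 1) - 1
    scale = w.detach().abs().max() / qmax        # assumes w is not all-zero
    w_q = torch.clamp(torch.round(w / scale), -qmax, qmax) * scale
    mask = (torch.rand_like(w) < noise_ratio).float()
    return w + (mask * (w_q - w)).detach()       # straight-through estimator
```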
## 4 Experiments
### Video representation
To evaluate the effectiveness of the proposed model, we benchmarked HiNeRV against five related works: NeRV [10], E-NeRV [30], PS-NeRV [5], FFNeRV [26] and HNeRV [9] on the Bunny [1] (\(1280\times 720\) with 132 frames) and the UVG [37] (7 videos at \(1920\times 1080\) with a total of 3900 frames) datasets. For each video, we trained all networks at multiple scales, and kept their number of parameters similar at each scale. Three scales were set up to target the S/M/L scales in NeRV [10] for the UVG dataset, while two different scales XXS/XS together with scale S were configured for the Bunny dataset. We report the decoding speed in frames per second (FPS), measured on an A100 GPU. The number of parameters corresponding to each scale is reported in Tables 1 and 2.
For all models tested, we set the number of training epochs to 300 and batch size (in video frames) to 1. We used the same optimizer as in [10], and employed the same learning objectives for all NeRV-based methods as in the original literature [10; 30; 5; 26; 9]. For HiNeRV, we empirically found that it is marginally better to adopt a larger learning rate of \(2e-3\) with global norm clipping which is commonly used for Transformer-based networks [49]. We used the learning rate of \(5e-4\) which is a common choice for the other networks as in their original literature [10; 30; 5; 26]. We also adopted a combination of \(\ell 1\) loss and MS-SSIM loss (with a small window size of \(5\times 5\) rather than \(11\times 11\)) for HiNeRV, as we observed that the MS-SSIM loss with a small window size leads to a better performance. For HiNeRV and PS-NeRV [5], we randomly sample patches instead of frames during training, but we scale the number of patches in each batch to keep the same effective
\begin{table}
\begin{tabular}{l c c c c c c c c c c c} \hline \hline Model & Size & MACs & FPS & PSNR & Size & MACs & FPS & PSNR & Size & MACs & FPS & PSNR \\ \hline NeRV & 0.83M & 25G & 308.5 & 26.82 & 1.64M & 57G & 229.1 & 29.61 & 3.20M & 101G & 202.3 & 32.56 \\ E-NeRV & 0.88M & 26G & 254.3 & 29.03 & 1.65M & 101G & 175.8 & 31.75 & 3.31M & 104G & 174.8 & 36.69 \\ PS-NeRV & 0.90M & 29G & 22.81 & 28.47 & 1.68M & 238G & 96.1 & 30.31 & 3.35M & 240G & 96.0 & 34.78 \\ FFNeRV & 0.91M & 4G & 161.7 & 29.68 & 1.66M & 36G & 134.0 & 32.08 & 3.19M & 18G & 149.7 & 34.83 \\ HNeRV & 0.82M & 23G & 317.4 & 31.08 & 1.66M & 48G & 251.6 & 33.68 & 3.28M & 94G & 192.5 & 36.95 \\ HiNeRV (ours) & 0.77M & 23G & 132.1 & **36.37** & 1.59M & 47G & 103.9 & **38.94** & 3.25M & 96G & 76.68 & **41.14** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Video representation results on the Bunny dataset [1] (for XXS/XS/S scales).
\begin{table}
\begin{tabular}{l c c c c c c c c c c} \hline \hline Model & Size & MACs & FPS & Beauty & Bosph. & Honey. & Jockey & Ready. & Shake. & Yacht. & Avg. \\ \hline NeRV & 3.31M & 227G & 90.0 & 32.83 & 32.20 & 38.15 & 30.30 & 23.62 & 33.24 & 26.43 & 30.97 \\ E-NeRV & 3.29M & 230G & 75.9 & 31.33 & 33.38 & 38.87 & 30.61 & 24.53 & 34.26 & 26.87 & 31.75 \\ PS-NeRV & 3.24M & 538G & 42.6 & 32.94 & 32.32 & 38.39 & 30.38 & 23.61 & 33.26 & 26.33 & 31.13 \\ FFNeRV & 3.40M & 24G & 68.9 & 33.56 & 35.02 & 38.95 & 31.57 & 25.91 & 34.41 & 28.99 & 32.63 \\ HNeRV & 3.26M & 175G & 93.4 & 33.56 & 35.03 & 39.28 & 31.58 & 25.45 & 34.89 & 28.98 & 32.68 \\ HiNeRV (ours) & 3.19M & 181G & 35.5 & **34.08** & **38.68** & **39.71** & **36.10** & **31.53** & **35.85** & **30.95** & **35.27** \\ \hline NeRV & 6.53M & 228G & 90.1 & 33.67 & 34.83 & 39.00 & 33.34 & 26.03 & 34.39 & 28.23 & 32.78 \\ E-NeRV & 6.54M & 245G & 74.6 & 33.97 & 35.83 & 39.75 & 33.56 & 26.94 & 35.57 & 28.79 & 33.49 \\ PS-NeRV & 6.57M & 564G & 42.0 & 33.77 & 34.84 & 39.02 & 33.34 & 26.09 & 35.01 & 28.43 & 32.93 \\ FFNeRV & 6.44M & 172G & 51.8 & 33.95 & 36.62 & 39.58 & 33.57 & 27.38 & 35.91 & 30.50 & 33.93 \\ HNeRV & 6.40M & 349G & 68.5 & 33.99 & 36.45 & 39.56 & 33.56 & 27.38 & 35.93 & 30.48 & 33.91 \\ HiNeRV (ours) & 6.49M & 368G & 29.1 & **34.33** & **40.37** & **39.81** & **37.93** & **34.54** & **37.04** & **32.94** & **36.71** \\ \hline NeRV & 13.01M & 230G & 89.8 & 34.15 & 36.96 & 39.55 & 35.80 & 28.68 & 35.90 & 30.39 & 34.49 \\ E-NeRV & 13.02M & 285G & 68.1 & 34.25 & 37.61 & 39.74 & 35.45 & 29.17 & 36.97 & 30.76 & 34.85 \\ PS-NeRV & 13.07M & 608G & 41.4 & 34.50 & 37.28 & 39.58 & 35.34 & 28.56 & 36.51 & 30.28 & 34.61 \\ FFNeRV & 12.66M & 87G & 63.3 & 34.28 & 38.48 & 39.74 & 36.72 & 30.75 & 37.08 & 32.36 & 35.63 \\ HNeRV & 12.87M & 701G & 52.7 & 34.30 & 37.96 & 39.73 & 35.47 & 29.67 & 37.16 & 32.31 & 35.23 \\ HiNeRV (ours) & 12.82M & 718G & 19.9 & **34.66** & **41.83** & **39.95** & **39.01** & **37.32** & **38.19** & **35.20** & **38.02** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Video representation results with the UVG dataset [37] (for S/M/L scales). Results are in PSNR.
batch size. It is noted that the original configuration of HNeRV [9] has an input/output size of \(1280\times 640\)/\(1920\times 960\) with strides (5, 4, 4, 2, 2)/(5, 4, 4, 3, 2), and we pad the frames of HNeRV in order to fit the \(1280\times 720\)/\(1920\times 1080\) videos. This was found to work better than changing the strides in HNeRV.
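As a concrete illustration of the \(\ell 1\) plus MS-SSIM objective adopted for HiNeRV above, a sketch follows; the mixing weight `alpha` is illustrative (the paper only specifies the \(5\times 5\) window), and `pytorch_msssim` is a third-party package assumed here:

```python
import torch
from pytorch_msssim import ms_ssim   # third-party package (an assumption here)

def hinerv_loss(pred, target, alpha=0.7, win_size=5):
    """Combined l1 + MS-SSIM objective with a small MS-SSIM window."""
    l1 = (pred - target).abs().mean()
    msssim = ms_ssim(pred, target, data_range=1.0, win_size=win_size)
    return alpha * l1 + (1.0 - alpha) * (1.0 - msssim)
```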
It can be observed (Table 1 and 2) that our proposed HiNeRV outperforms all benchmarked models at each scale on both Bunny [1] and UVG [37] datasets. We also note that HiNeRV performs better than all the other methods on all test sequences (with various spatial and temporal characteristics) in the UVG database, exhibiting better reconstruction quality in terms of PSNR. In particular, for some video sequences where existing INRs performed poorly, HiNeRV offers significant improvement (>7.9dB over NeRV on ReadySetGo).
Figure 1 (right) shows the performance of HiNeRV, NeRV and HNeRV (at scale S) in terms of the reconstruction quality with various epochs of training. We observe that our model with 37 epochs achieves similar reconstruction quality to HNeRV with 300 epochs on both datasets.
### Video compression
To evaluate video compression performance, we compared HiNeRV with two INR-based models: NeRV [10] and HNeRV [9]; with two conventional codecs: HEVC/H.265 HM 18.0 (Main Profile with _Random Access_) [40; 41] and x265 (_veryslow_ preset with B frames) [2]; and with two state-of-the-art learning-based codecs: DCVC [27] and VCT [36]. These were all compared using two test databases: UVG [37] and MCL-JCV [50]. Unlike previous work [10; 5], which concatenates the videos and uses a single network to compress multiple videos, we train all the models for each video separately. For the training of NeRV, HNeRV and HiNeRV, we use the configurations described in Section 4.1, but with pruning and quantization as described in Section 3.6. In particular, we prune these three models to remove 15% of their weights and fine-tune the models for another 60 epochs. These models are further optimized with QuantNoise [14] with 10% noise ratio for 30 epochs. Here we use the same learning rate scheduling for fine-tuning, but employ 10% of the original learning rate in the optimization with QuantNoise. To obtain a reasonable rate estimation, we perform arithmetic entropy coding [35] and combine all essential information including the pruning masks and the quantization parameters into bitstreams.
Figure 3 reports the overall rate quality performance of a selection (for better visibility) of tested methods on the UVG [37] and the MCL-JCV [50] datasets. Table 3 summarizes the average Bjontegaard Delta (BD) rate results for both databases. All results show that HiNeRV offers competitive coding efficiency compared to most conventional codecs and learning-based methods. This represents a significant improvement over existing NeRV-based approaches (this is also confirmed by the visual comparison with HNeRV in Figure 1). In particular, it is observed that HiNeRV outperforms x265 (_veryslow_) [2], DCVC [27] and VCT [36] based on PSNR. As far as we are aware, this is the first INR-based codec which can achieve such performance. We also observe that HiNeRV offers better performance compared to H.265 HM (_Random Access_) based on MS-SSIM. It should be noted that the results of each learning-based codec reported here are based on two model checkpoints, one for
Figure 3: Video compression results on the UVG [37] and the MCL-JCV datasets [50].
\begin{table}
\begin{tabular}{l l r r r r r r} \hline \hline Dataset & Metric & x265 (_veryslow_) & HM (_RA_) & DCVC & VCT & NeRV & HNeRV \\ \hline \multirow{2}{*}{UVG} & PSNR & -38.66\% & 7.54\% & -43.44\% & -34.28\% & -74.12\% & -72.29\% \\ & MS-SSIM & -62.70\% & -41.41\% & -34.50\% & -23.69\% & -73.76\% & -83.86\% \\ \hline \multirow{2}{*}{ MCL-JCV} & PSNR & -23.39\% & 31.09\% & -24.59\% & -17.03\% & -80.19\% & -84.81\% \\ & MS-SSIM & -44.12\% & -2.65\% & -17.32\% & 12.10\% & -82.28\% & -96.28\% \\ \hline \hline \end{tabular}
\end{table}
Table 3: BD-Rate (Measured in PSNR/MS-SSIM) results on the UVG [37] and MCL-JCV [50] datasets.
optimizing PSNR and the other for MS-SSIM, while all the results for HiNeRV are however based on the same checkpoint.
Despite the fact that HiNeRV has not been fully optimized end-to-end (entropy encoding and quantization are not optimized in the loop), it nonetheless outperforms many state-of-the-art end-to-end optimized learning-based approaches. This demonstrates the significant potential of utilizing INRs for video compression applications.
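For reference, the BD-rate figures in Table 3 follow the standard Bjontegaard procedure; a minimal sketch of that computation (our own helper, using the usual cubic fit of quality against log-rate) is:

```python
import numpy as np

def bd_rate(rates_ref, psnr_ref, rates_test, psnr_test):
    """Bjontegaard Delta rate in percent: fit quality vs. log-rate with a
    cubic polynomial per codec and compare the integrated log-rates over the
    overlapping quality range."""
    p_ref = np.polyfit(psnr_ref, np.log(rates_ref), 3)
    p_test = np.polyfit(psnr_test, np.log(rates_test), 3)
    lo = max(min(psnr_ref), min(psnr_test))
    hi = min(max(psnr_ref), max(psnr_test))
    int_ref = np.polyval(np.polyint(p_ref), hi) - np.polyval(np.polyint(p_ref), lo)
    int_test = np.polyval(np.polyint(p_test), hi) - np.polyval(np.polyint(p_test), lo)
    avg_diff = (int_test - int_ref) / (hi - lo)
    return (np.exp(avg_diff) - 1) * 100
```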
### Ablation study
To verify the contribution of various components in HiNeRV, we generated a number of variants of the original model, and evaluated them for video representation on the UVG dataset [37] (all the sequences were down-sampled to \(1280\times 720\) to reduce the amount of computation). For all experiments, we followed the settings in Section 4.1, and performed training targeting scale S (by adjusting the width of the network to keep similar model sizes). All results are shown in Table 4.
**Bilinear interpolation with hierarchical encoding.** The contribution of bilinear interpolation with hierarchical encoding was verified by comparing it with alternative upsampling layers: sub-pixel convolutional layers [43] with \(1\times 1\) (V1) and \(3\times 3\) (V2) kernel sizes.
**Upsampling encodings.** Four variants are trained to confirm the effectiveness of the upsampling encodings including (V3) w/o encoding; (V4) with the Fourier encoding [38]; (V5) using Fourier encoding with local coordinates (computed in the hierarchical encoding); (V6) using the grid-based encoding with local coordinates, i.e. the hierarchical encoding without the temporal dimension.
**ConvNeXt block.** The employed ConvNeXt block [32] has been compared with (V7) the MLP block in the Transformer [49]; (V8) a block containing two convolutional layers, where we use \(3\times 3\) and \(1\times 1\) kernel sizes to keep the receptive field consistent with the ConvNeXt block used in our paper.
**Unified representations.** V9 and V10 have been generated for frame- and patch-wise configurations.
The ablation study results are presented in Table 4, which shows that the full HiNeRV model outperforms all alternatives (V1-V10) on the UVG dataset in terms of the overall reconstruction quality. This confirms the contribution of each primary component of the design.
## 5 Conclusion
In this paper, a new neural representation model, HiNeRV, has been proposed for video compression that exhibits superior coding performance over many conventional and learning-based video codecs (including those based on INRs). The improvements demonstrated are associated with new innovations including bilinear interpolation based hierarchical encoding, a unified representation and a refined model compression pipeline.
Despite the fact that HiNeRV has not been fully optimized end-to-end (entropy encoding and quantization are not optimized in the loop), it nonetheless achieves comparable performance to state-of-the-art end-to-end optimized learning-based approaches, with significant improvement over existing NeRV-based algorithms. This demonstrates the great potential of utilizing INRs for video
\begin{table}
\begin{tabular}{l c c c c c c c c c} \hline \hline Model & Size & Beauty & Bosph. & Honey. & Jockey & Ready. & Shake. & Yacht. & Avg. \\ \hline HiNeRV & 3.17M & 35.67 & **39.37** & **41.61** & **36.94** & **31.98** & 36.74 & **31.57** & **36.27** \\ \hline (V1) w/ Sub-Conv1x1 & 3.16M & 35.28 & 36.63 & 41.58 & 34.64 & 29.12 & 36.31 & 29.91 & 34.78 \\ (V2) w/ Sub-Conv3x3 & 3.15M & 34.96 & 35.35 & 41.14 & 32.80 & 27.18 & 35.34 & 29.14 & 33.70 \\ \hline (V3) w/o encoding & 3.17M & 35.64 & 39.18 & 41.58 & 36.16 & 30.92 & 36.68 & 31.50 & 35.95 \\ (V4) w/ Fourier enc. & 3.17M & 35.62 & 39.07 & 41.59 & 36.00 & 30.91 & **36.81** & 31.47 & 35.92 \\ (V5) w/ Fourier (local) enc. & 3.17M & 35.59 & 38.99 & 41.54 & 35.77 & 30.61 & 36.57 & 31.30 & 35.77 \\ (V6) w/ Grid (local) enc. & 3.19M & 35.65 & 39.26 & 41.58 & 36.17 & 30.93 & 36.72 & 31.55 & 35.98 \\ \hline (V7) w/ MLP & 3.19M & 35.10 & 37.17 & 41.35 & 34.77 & 29.10 & 35.58 & 29.76 & 34.69 \\ (V8) w/ Conv3x3 & 3.17M & 35.35 & 37.86 & 41.37 & 35.13 & 29.70 & 36.10 & 30.31 & 35.12 \\ \hline (V9) w/ Frame-wise & 3.17M & **35.68** & 39.22 & 41.54 & 36.69 & 31.49 & 36.54 & 31.54 & 36.10 \\ (V10) w/ Patch-wise & 3.17M & 35.46 & 38.30 & 41.55 & 35.04 & 30.06 & 36.51 & 30.77 & 35.38 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Ablation studies of HiNeRV with the UVG dataset [37]. Results are in PSNR.
compression applications. For example, this is the first INR-based video codec which can outperform HEVC HM Random Access mode based on MS-SSIM.
Future work should focus on incorporating entropy coding and quantization into the training loop to achieve full end-to-end optimization.
## Acknowledgment
This work was jointly funded by UK EPSRC (iCASE Awards), British Telecommunications PLC, and the UKRI MyWorld Strength in Places Programme. We also thank the support by the Advanced Computing Research Centre, University of Bristol, for providing the computational facilities.
|
2303.03255 | On the solid angle of a convex set | Here we analyze three dimensional analogues of the classical Crofton's
formula for planar compact convex sets. In this formula a fundamental role is
played by the visual angle of the convex set from an exterior point. A
generalization of the visual angle to convex sets in Euclidean space is the
visual solid angle. This solid angle, being a spherically convex set in the
unit sphere, has length, area and other geometric quantities to be considered.
The main goal of this note is to express invariant quantities of the original
convex set depending on volume, surface area and mean curvature integral by
means of integrals of functions related to the solid angle. | J. Bruna, J. Cufí, E. Gallego, A. Reventós | 2023-03-06T16:18:09Z | http://arxiv.org/abs/2303.03255v2 | # On the solid angle of convex sets
###### Abstract.
Here we analyze three-dimensional analogues of the classical Crofton's formula for planar compact convex sets. In this formula a fundamental role is played by the visual angle of the convex set from an exterior point. A generalization of the visual angle to convex sets in Euclidean space is the visual solid angle. This solid angle, being a spherically convex set in the unit sphere, has length, area and other geometric quantities to be considered. The main goal of this note is to express invariant quantities of the original convex set depending on volume, surface area and mean curvature integral by means of integrals of functions related to the solid angle.
Key words and phrases: Invariant measures, convex set, dihedral angle, solid angle, constant width. 2020 Mathematics Subject Classification: Primary 52A15, Secondary 53C65. The authors were partially supported by grants 2021SGR01015 (Generalitat de Catalunya) and PID2021-125625NB-I00 (Ministerio de Ciencia e Innovación).
Introduction
Let \(K\subset\mathbb{E}^{3}\) be a compact convex set.
Theorem 1.3 has as a consequence:
**Theorem 1.4**.: _For all compact convex sets \(K\),_
\[\int_{P\notin K}|\Omega(P)|^{2}\,dP\geq 4\pi^{2}V, \tag{1.6}\]
_and equality holds if and only if \(K\) is a ball._
Part of our analysis will consist in understanding the set functions \(\alpha,\beta\) in terms of metric properties of \(\Omega\). For the set function \(\alpha\) a satisfactory description is easily obtained, namely
\[\alpha(\Omega)=\pi|\Omega|-\langle c(\Omega),c(\Omega^{*})\rangle, \tag{1.7}\]
where \(c(\Omega)=\int_{\Omega}u\,du\) and similarly \(c(\Omega^{*})\) are (unweighted) centroids. Again, the analogue of this expression for an arc \(I\) in \(S^{1}\) (\(I^{*}\) being in this case the concentric arc with length \(\pi-\omega\)) equals \(\omega-\sin\omega\) up to constants. It is possible to express everything just in terms of \(\Omega\). The explicit computation of \(\alpha(\Omega)\) is possible just for spherical caps and other simple cases.
For the set function \(\beta\) the description is not so neat. In fact we show that the difference \(\beta(\Omega)-\frac{\pi^{2}}{2}\alpha(\Omega)\) can be expressed in terms of the _dihedral visual angles_\(\mathcal{D}(\Omega,u)\) of \(\Omega\) from points \(u\in S^{2}\) not in \(\Omega\), the analogue in spherical geometry of the visual angle. Note that \(\mathcal{D}(\Omega,u)\) is the angle between two planes through the origin and \(u\) tangent to \(\Omega\). From this it can be seen that a linear combination of equations (1.3) and (1.4) is the classical Crofton-Herglotz formula, so Theorem 1.1 and Theorem 1.2 can be seen as equivalent statements (see section 2.3).
Regarding theorem 1.3, it would be interesting to understand the left-hand side of (1.5) in terms of the geometry of \(K\).
## 2. Proofs of theorems
### Proof of theorems 1.1, 1.2, 1.3 and 1.4
All of them are obtained by mimicking the integral geometry proof of the plane Crofton formula (1.1). We denote by \(E,G\) affine planes and lines in space, respectively, and by \(dE,dG\) the corresponding canonical invariant measures as used for instance in [5].
From the classical Crofton's formulas for intrinsic volumes, mean curvature integral \(M\), area of the boundary \(F\) and volume \(V\) (see [5]) we have, for a given compact convex set \(K\subset\mathbb{E}^{3}\),
\[\int_{G\cap K\neq\emptyset}dG=\frac{\pi}{2}F,\quad\int L(K\cap G)dG=2\pi V, \tag{2.1}\]
and
\[\int_{E\cap K\neq\emptyset}dE=M,\int L(K\cap E)dE=\frac{\pi^{2}}{2}F,\int A(K \cap E)dE=2\pi V. \tag{2.2}\]
We shall consider pairs and triples of linear varieties provided with the product measure. It will be useful to express these measures using the parametrizations in the next lemma, whose proof can be deduced from section 12 of [5]:
**Lemma 2.1**.: _The following holds: \(a)\)_
\[dG\ dE=|\langle u,v\rangle|dG_{P}\ dE_{P}\ dP \tag{2.3}\]
_where \(P=G\cap E\), \(v\) is a unit normal direction of \(E\), \(u\) is a unit direction of \(G\), \(dP\) is the Lebesgue measure on \(\mathbb{E}^{3}\), \(dG_{P}\) is the measure of lines in \(\mathbb{E}^{3}\) through \(P\) and \(dE_{P}\) the measure of planes in \(\mathbb{E}^{3}\) through \(P\). \(b)\)_
\[dE^{1}\ dE^{2}\ dE^{3}=|\det(v_{1},v_{2},v_{3})|dE^{1}_{P}\ dE^{2}_{P}\ dE^{3}_{P}\ dP, \tag{2.4}\]
_where \(v_{i}\) are the unit normal directions to \(E^{i}\) and \(P=E^{1}\cap E^{2}\cap E^{3}\). \(c)\) For pairs of lines \(G^{1},G^{2}\) intersecting at a point \(P\in\mathbb{E}^{3}\) we have_
\[dG^{1}_{P}\,dG^{2}_{P}\,dP=dG^{1}_{E}\,dG^{2}_{E}\,dE \tag{2.5}\]
_where \(dG_{E}\) is the measure of lines within \(E\). In all statements, we can identify \(dG_{P},dE_{P}\) with \(\frac{1}{2}du,\frac{1}{2}dv\) respectively._
We can now proceed to prove the announced theorems.
_Proof of Theorem 1.1_. We consider pairs \((E,G)\) of planes and lines both meeting \(K\). From (2.1), (2.2) and (2.3),
\[\frac{\pi}{2}MF=\int\limits_{G\cap K\neq\emptyset,E\cap K\neq\emptyset}dG\,dE=\int\limits_{\mathbb{E}^{3}}\int\limits_{G_{P}\cap K\neq\emptyset,E_{P}\cap K\neq\emptyset}|\langle u,v\rangle|dG_{P}\ dE_{P}\ dP.\]
If \(P\in K\), there is no restriction on \(G_{P},E_{P}\). Since they are doubly parametrized by \(u,v\) respectively,
\[\int|\langle u,v\rangle|dG_{P}\ dE_{P}=\frac{1}{4}\int_{u,v\in S^{2}}|\langle u,v\rangle|du\,dv.\]
Obviously the \(v\) integral does not depend on \(u\), so the above equals
\[\pi\int|\langle u,v\rangle|\,dv.\]
Choosing \(u=(0,0,1)\) and computing in spherical coordinates one obtains the value \(2\pi^{2}\).
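Explicitly, taking \(u=(0,0,1)\) and writing \(v=(\sin\varphi\cos\theta,\sin\varphi\sin\theta,\cos\varphi)\), the computation referred to is

\[\pi\int_{S^{2}}|\langle u,v\rangle|\,dv=\pi\int_{0}^{2\pi}\int_{0}^{\pi}|\cos\varphi|\sin\varphi\,d\varphi\,d\theta=\pi\cdot 2\pi\cdot 1=2\pi^{2},\]

since \(\int_{0}^{\pi}|\cos\varphi|\sin\varphi\,d\varphi=2\int_{0}^{\pi/2}\cos\varphi\sin\varphi\,d\varphi=1\).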
If \(P\notin K\) then \(G_{P}\) is parametrized by \(u\in\Omega(P)\) and \(E_{P}\) is doubly parametrized by \(v\in\tilde{\Omega}(P)\) whence
\[\int\limits_{G_{P}\cap K\neq\emptyset,E_{P}\cap K\neq\emptyset}|\langle u,v\rangle|dG_{P}\ dE_{P}=\frac{1}{2}\int_{u\in\Omega(P),v\in\tilde{\Omega}(P)}|\langle u,v\rangle|\,du\,dv=\alpha(\Omega(P)),\]
thus proving (1.3). \(\square\)
_Proof of Theorem 1.2_. Here we use triples of planes meeting \(K\) and proceed analogously to the above proof. Using (2.2) and (2.4) it follows
\[M^{3}=\int_{\mathbb{E}^{3}}\int\limits_{E^{i}_{P}\cap K\neq\emptyset}|\det(v_{1},v_{2},v_{3})|dE^{1}_{P}\ dE^{2}_{P}\ dE^{3}_{P}\ dP.\]
Again, if \(P\) is within \(K\), there is no restriction on the \(E^{i}_{P}\) and
\[\int|\det(v_{1},v_{2},v_{3})|dE^{1}_{P}\ dE^{2}_{P}\ dE^{3}_{P}=\frac{1}{8}\int_ {v_{i}\in S^{2}}|\det(v_{1},v_{2},v_{3})|\,dv_{1}\,dv_{2}\,dv_{3}.\]
The integral in \(v_{1},v_{2}\) is independent of \(v_{3}\) so the above integral equals
\[\frac{\pi}{2}\int|\det(v_{1},v_{2},v_{3})|\,dv_{1}\,dv_{2}.\]
Choosing \(v_{3}=(0,0,1)\) and computing in spherical coordinates we get the value \(\pi^{4}\).
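One way to verify this value: with \(v_{3}=e_{3}\) fixed, write \(\det(v_{1},v_{2},e_{3})=\langle v_{1}\times v_{2},e_{3}\rangle\). If \(\theta\) denotes the angle between \(v_{1}\) and \(v_{2}\), then \(|v_{1}\times v_{2}|=\sin\theta\), and by rotational invariance the direction of \(v_{1}\times v_{2}\) is uniformly distributed on \(S^{2}\), independently of its norm. Since the mean of \(|\cos|\) over \(S^{2}\) is \(1/2\) and the mean of \(\sin\theta\) with respect to the density \(\frac{1}{2}\sin\theta\,d\theta\) is \(\pi/4\),

\[\int_{(S^{2})^{2}}|\det(v_{1},v_{2},e_{3})|\,dv_{1}\,dv_{2}=(4\pi)^{2}\cdot\frac{1}{2}\cdot\frac{\pi}{4}=2\pi^{3},\qquad\frac{\pi}{2}\cdot 2\pi^{3}=\pi^{4}.\]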
If \(P\notin K\) then \(E^{i}_{P}\) are doubly parametrized by \(v_{i}\in\tilde{\Omega}(P)\) so that
\[\int\limits_{E^{i}_{P}\cap K\neq\emptyset}|\det(v_{1},v_{2},v_{3})|dE^{1}_{P}\,dE^{2}_{P}\,dE^{3}_{P}=\beta(\Omega(P)),\]
and (1.4) is proved.
_Proof of Theorem 1.3_. We use now pairs of intersecting lines, both meeting \(K\). The measure of this set of lines is
\[\int_{P\in\mathbb{E}^{3},G^{i}_{P}\cap K\neq\emptyset}dG^{1}_{P}\,dG^{2}_{P}\, dP=\int_{\mathbb{E}^{3}}\bigg{(}\int_{G^{i}_{P}\cap K\neq\emptyset}dG^{1}_{P} \,dG^{2}_{P}\bigg{)}\,dP.\]
The contribution of \(K\) in the \(dP\) integral is \((2\pi)^{2}V\), while that of \(K^{c}\) is \(\int_{P\notin K}|\Omega(P)|^{2}\,dP\). On the other hand, by (2.5), this equals
\[\int\bigg{(}\int_{G_{1},G_{2}\subset E,G_{i}\cap K\neq\emptyset}dG^{1}_{E}\,dG ^{2}_{E}\bigg{)}dE.\]
Since \(\int_{G\subset E,G\cap K\neq\emptyset}\,dG=L(K\cap E)\) (the Cauchy-Crofton formula in the plane \(E\)), we are done.
We point out that this argument is equivalent to integration on \(E\) of the planar Crofton formula (1.1) for \(E\cap K\) within \(E\).
_Proof of Theorem 1.4_. Using the isoperimetric inequality in each plane \(E\), the left-hand side of (1.5) is at least
\[4\pi\int_{E}A(K\cap E)\,dE,\]
which by the last equality in (2.2) equals \(8\pi^{2}V\), thus proving (1.6). If equality holds, then \(L(K\cap E)^{2}=4\pi A(K\cap E)\) for all \(E\), so all sections \(K\cap E\) are discs, and this easily implies that \(K\) is a ball.
### On the set function \(\alpha\)
To prove the relation (1.7) just notice that
\[\int_{v\in\tilde{\Omega}}|\langle u,v\rangle|\,dv=\int_{v\in S^{2}}|\langle u,v\rangle|\,dv-2\int_{v\in\Omega^{*}}|\langle u,v\rangle|\,dv.\]
The first integral does not depend on \(u\) and equals \(2\pi\), while in the second one \(\langle u,v\rangle\) is positive. Altogether,
\[\alpha(\Omega)=\pi\int_{u\in\Omega}du-\int_{u\in\Omega,v\in\Omega^{*}}\langle u,v\rangle\,du\,dv=\pi|\Omega|-\langle c(\Omega),c(\Omega^{*})\rangle.\]
When \(\Omega\) is a cap in \(S^{2}\) with spherical radius \(\omega\) then
\[\alpha(\Omega)=2\pi^{2}(1-\cos\omega)-\pi^{2}\cos^{2}\omega\sin^{2}\omega.\]
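This can be checked directly from (1.7): for the cap of spherical radius \(\omega\) centered at \(e_{3}\), the dual \(\Omega^{*}\) is the concentric cap of radius \(\pi/2-\omega\), one has \(|\Omega|=2\pi(1-\cos\omega)\), and by symmetry

\[c(\Omega)=\Big{(}2\pi\int_{0}^{\omega}\cos\varphi\sin\varphi\,d\varphi\Big{)}e_{3}=\pi\sin^{2}\omega\,e_{3},\qquad c(\Omega^{*})=\pi\sin^{2}\big{(}\tfrac{\pi}{2}-\omega\big{)}e_{3}=\pi\cos^{2}\omega\,e_{3},\]

so that \(\langle c(\Omega),c(\Omega^{*})\rangle=\pi^{2}\sin^{2}\omega\cos^{2}\omega\).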
In order to express this function in terms of \(\Omega\) we find a relation between the centroids of \(\Omega\) and \(\Omega^{*}\) using a parametrization of the boundary of \(\Omega\). We consider the orientation in \(\Omega\) given by the unit outward normal to \(S^{2}\) and let \(\gamma(t)\), \(0\leq t\leq\ell\) be the arc-length parametrization of \(\partial\Omega\) with the induced orientation, so \(T=\gamma^{\prime}\) is the unit tangent.
**Proposition 2.2**.:
_a)_
\[c(\Omega)=\frac{1}{2}\int_{0}^{\ell}\gamma(t)\times\gamma^{\prime}(t)\ dt,\quad c (\Omega^{*})=\frac{1}{2}\int_{0}^{\ell}k_{g}(t)\gamma(t)\ dt,\]
_where_ \(k_{g}(t)\) _is the geodesic curvature of_ \(\gamma(t)\)_._
_b)_
\[c(\Omega)+c(\Omega^{*})=\frac{1}{2}\int_{0}^{\ell}\gamma^{\prime}(t)\times \gamma^{\prime\prime}(t)\ dt=\frac{1}{2}\int_{0}^{\ell}k(t)\vec{B}(t)\ dt.\]
_where_ \(k(t)\) _is the curvature of_ \(\gamma(t)\) _and_ \(\vec{B}(t)\) _its binormal._
Proof.: If \(u=(x,y,z)\), the first component of \(c(\Omega)\) is \(\int_{\Omega}x\,du\), the flow through \(\Omega\) of the vector field \(X=(1,0,0)\). Since \(X=\nabla\times Y\) with \(Y=\frac{1}{2}(0,-z,y)\), this component equals

\[\int_{\partial\Omega}\langle T,Y(\gamma(t))\rangle\,dt.\]

Now \(\langle T,Y\rangle\) equals one half of the first component of \(\gamma(t)\times\gamma^{\prime}(t)\). Arguing similarly for the other components, the first formula in a) is proved. Next, notice that \(\gamma^{*}(t)=\gamma(t)\times\gamma^{\prime}(t)\) parametrizes the dual curve, the boundary of \(\Omega^{*}\), whence one has as well
\[c(\Omega^{*})=\frac{1}{2}\int_{0}^{\ell}\gamma^{*}(t)\times(\gamma^{*})^{ \prime}(t)\ dt.\]
Now,
\[\gamma^{*}\times{\gamma^{*}}^{\prime}=(\gamma\times\gamma^{\prime})\times( \gamma\times\gamma^{\prime})^{\prime}=(\gamma\times\gamma^{\prime})\times( \gamma\times\gamma^{\prime\prime})=\det(\gamma,\gamma^{\prime},\gamma^{\prime \prime})\gamma=k_{g}\gamma\]
and a) is proved.
In order to prove b) we simplify the notation writing \(\sigma=\gamma\times\gamma^{\prime}+\gamma^{*}\times{\gamma^{*}}^{\prime}\). Denote by \(\vec{T},\vec{N},\vec{B}\) the Frenet frame of \(\gamma\). From the definitions it is easy to see that \(\langle\sigma,\vec{T}\rangle=0\). Also
\[\langle\sigma,\vec{N}\rangle=\frac{1}{k}\langle\gamma\times\gamma^{\prime}, \gamma^{\prime\prime}\rangle+\frac{1}{k}\langle\det(\gamma,\gamma^{\prime}, \gamma^{\prime\prime})\gamma,\gamma^{\prime\prime}\rangle=\frac{1}{k}\langle \gamma\times\gamma^{\prime},\gamma^{\prime\prime}\rangle-\frac{1}{k}\det( \gamma,\gamma^{\prime},\gamma^{\prime\prime})=0\]
because \(\langle\gamma,\gamma^{\prime}\rangle=0\) and so \(\langle\gamma,\gamma^{\prime\prime}\rangle=-1.\) Now we compute \(\langle\sigma,\vec{B}\rangle\),
\[\langle\sigma,\vec{B}\rangle = \langle\sigma,\vec{T}\times\vec{N}\rangle=\frac{1}{k}\langle \sigma,\gamma^{\prime}\times\gamma^{\prime\prime}\rangle=\frac{1}{k}\langle \gamma\times\gamma^{\prime}+\gamma^{*}\times{\gamma^{*}}^{\prime},\gamma^{ \prime}\times\gamma^{\prime\prime}\rangle=\] \[= \frac{1}{k}\langle\gamma\times\gamma^{\prime},\gamma^{\prime} \times\gamma^{\prime\prime}\rangle+\frac{1}{k}\langle\gamma^{*}\times{\gamma^ {*}}^{\prime},\gamma^{\prime}\times\gamma^{\prime\prime}\rangle=\] \[= -\frac{1}{k}\langle\gamma,\gamma^{\prime\prime}\rangle\langle \gamma^{\prime},\gamma^{\prime}\rangle+\frac{1}{k}\langle k_{g}\gamma,\gamma^{ \prime}\times\gamma^{\prime\prime}\rangle=\frac{1}{k}(1+k_{g}^{2}).\]
The curve \(\gamma\) being on the unit sphere we have that \(k^{2}=1+k_{g}^{2}\); therefore \(\langle\sigma,\vec{B}\rangle=k\) and we conclude that
\[\gamma\times\gamma^{\prime}+\gamma^{*}\times{\gamma^{*}}^{\prime}=k\vec{B}.\]
Since \(\gamma^{\prime}\times\gamma^{\prime\prime}=\vec{T}\times k\vec{N}=k\vec{B}\) the proposition is proved.
### On the set function \(\beta\)
Here we wish to obtain an alternative expression for (1.2) which will lead us to the Crofton-Herglotz formula.
Let \(u=v_{2}\times v_{3}/|v_{2}\times v_{3}|\). To specify a basis for \(u^{\perp}\) we write \(u\) in spherical coordinates, \(u=(\sin\varphi\cos\theta,\sin\varphi\sin\theta,\cos\varphi)\), and define
\[e_{1}=\frac{\partial}{\partial\varphi}=(\cos\varphi\cos\theta,\cos\varphi\sin \theta,-\sin\varphi),\qquad e_{2}=\frac{1}{\sin\varphi}\frac{\partial}{ \partial\theta}=(-\sin\theta,\cos\theta,0),\]
so that \(\{e_{1},e_{2},u\}\) is a positive orthonormal basis. We write \(v_{2},v_{3}\) in this basis
\[v_{2}=\cos\theta_{2}\cdot e_{1}+\sin\theta_{2}\cdot e_{2},\quad v_{3}=\cos \theta_{3}\cdot e_{1}+\sin\theta_{3}\cdot e_{2}.\]
Then \(v_{2},v_{3}\) are parametrized by \(u,\theta_{2},\theta_{3}\). Keeping in mind that the integral is in fact over the set of triplets of great circles meeting \(\Omega\), so \(\pm v_{2},\pm v_{3}\) count as one, we let \(0\leq\theta_{2},\theta_{3}\leq\pi\) and require that the angles \(\theta_{2},\theta_{3}\) be within the dihedral angle \(\mathcal{D}(\Omega,u)\) determined by \(\Omega\) and \(u\). Then \(u\in S^{2},\theta_{2},\theta_{3}\) is a double parametrization of the set of pairs of great circles meeting \(\Omega\). In this parametrization, it is immediate to check that (cf. [4, (34.1)])
\[dv_{2}\,dv_{3}=|\sin(\theta_{3}-\theta_{2})|\,d\theta_{2}\,d\theta_{3}\,du,\]
while
\[|\det(v_{1},v_{2},v_{3})|=|v_{2}\times v_{3}|\cdot|\langle v_{1},u\rangle|=| \sin(\theta_{3}-\theta_{2})|\cdot|\langle v_{1},u\rangle|.\]
Thus
\[\beta(\Omega)=\frac{1}{4}\int_{v_{1}\in\tilde{\Omega},\,u\in S^{2},\,\theta_{i}\in\mathcal{D}(\Omega,u)}\sin^{2}(\theta_{3}-\theta_{2})|\langle v_{1},u\rangle|\,dv_{1}\,d\theta_{2}\,d\theta_{3}\,du.\]
Now the integral with respect to \(\theta_{2},\theta_{3}\) is easily computed. Denoting as well by \(\mathcal{D}(\Omega,u)\) the measure of the dihedral angle we get
\[\beta(\Omega)=\frac{1}{8}\int_{v_{1}\in\tilde{\Omega},\,u\in S^{2}}(\mathcal{D}^{2}(\Omega,u)-\sin^{2}\mathcal{D}(\Omega,u))|\langle v_{1},u\rangle|dv_{1}\,du.\]
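Indeed, writing \(\sin^{2}x=\frac{1}{2}(1-\cos 2x)\) and \(\mathcal{D}=\mathcal{D}(\Omega,u)\),

\[\int_{0}^{\mathcal{D}}\int_{0}^{\mathcal{D}}\sin^{2}(\theta_{3}-\theta_{2})\,d\theta_{2}\,d\theta_{3}=\frac{\mathcal{D}^{2}}{2}-\frac{1}{2}\Big{|}\int_{0}^{\mathcal{D}}e^{2i\theta}\,d\theta\Big{|}^{2}=\frac{1}{2}\big{(}\mathcal{D}^{2}-\sin^{2}\mathcal{D}\big{)},\]

which accounts for the factor \(\frac{1}{8}\) above.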
For \(\pm u\in\Omega\) one has \(\mathcal{D}(\Omega,u)=\pi\) whence the contribution of this part equals \(\pi^{2}\alpha(\Omega)/2\). Thus
\[\beta(\Omega)=\frac{\pi^{2}}{2}\alpha(\Omega)+\gamma(\Omega),\]
with
\[\gamma(\Omega)=\frac{1}{8}\int_{v\in\tilde{\Omega},\,\pm u\not\in\Omega}(\mathcal{D}^{2}(\Omega,u)-\sin^{2}\mathcal{D}(\Omega,u))|\langle v,u\rangle|dv\,du.\]
Thus theorems 1.1 and 1.2 imply
\[M^{3}-\frac{1}{4}\pi^{3}MF=\int_{P\notin K}\gamma(\Omega(P))\,dP.\]
We now insert the definition of \(\gamma(\Omega(P))\) and use (2.3). If \(u\) is the unit direction of \(G\), then \(\mathcal{D}(\Omega(P),u)\) is the dihedral angle \(\mathcal{D}(K,G)\) of \(K\) as seen from \(G\), so the right-hand side above equals
\[\frac{1}{2}\int_{E\cap K\neq\emptyset,\,G\cap K=\emptyset}(\mathcal{D}^{2}(K,G)-\sin^{2}\mathcal{D}(K,G))\,dE\,dG=\frac{1}{2}\int_{E\cap K\neq\emptyset}dE\,\bigg{(}\int_{G\cap K=\emptyset}(\mathcal{D}^{2}(K,G)-\sin^{2}\mathcal{D}(K,G))\,dG\bigg{)}.\]
Using (2.2) we obtain the classical Crofton-Herglotz formula [1, p. 75]
\[\int_{G\cap K=\emptyset}(\mathcal{D}^{2}(K,G)-\sin^{2}\mathcal{D}(K,G))\,dG=2M ^{2}-\frac{\pi^{3}F}{2}.\]
This shows that in the presence of Crofton-Herglotz formula, theorems 1.1 and 1.2 can be seen as equivalent statements.
## 3. Some inequalities for convex sets of constant width
In this section we will deal with compact convex sets \(K\) of constant width. For each of these sets we have the relation \(R+r=a\), where \(a\) is the width of \(K\), and \(r,R\) are respectively the inradius and the circumradius of \(K\). Thus, denoting \(c=r/R\), we have
\[r=\frac{ac}{1+c},\quad R=\frac{a}{1+c}.\]
From Jung's theorem ([3, 3.4.2]) it follows that \(c\geq\sqrt{8/3}-1=0.63...\)
**Theorem 3.1**.: _Let \(K\) be a compact convex set of constant width \(a\) and \(c=r/R\) the quotient between the inradius and the circumradius of \(K\). Then_
\[\int L(K\cap E)^{2}dE\leq 8\pi^{3}a^{3}\left(\frac{1}{(1+c)^{2}}-\frac{1}{12} \right)\,, \tag{3.1}\]
_which is an equality for spheres._
Proof.: First we observe that denoting by \(p(u)\), \(u\in S^{2}\), the support function of \(K\) and \(\eta(u)=p(u)-a/2\) one has
\[\eta(u)^{2}=p(u)^{2}+a^{2}/4-ap(u).\]
Hence
\[\int_{S^{2}}\eta(u)^{2}\,du=\int_{S^{2}}p(u)^{2}\,du+\pi a^{2}-2\pi a^{2}\geq 0,\]
and so
\[\int_{S^{2}}p(u)^{2}\,du\geq\pi a^{2}. \tag{3.2}\]
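The cross term in the previous display uses the constant width relation \(p(u)+p(-u)=a\), which gives

\[a\int_{S^{2}}p(u)\,du=\frac{a}{2}\int_{S^{2}}\big{(}p(u)+p(-u)\big{)}\,du=\frac{a}{2}\cdot 4\pi\cdot a=2\pi a^{2}.\]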
We have
\[\int L(K\cap E)^{2}dE=\frac{1}{2}\int_{S^{2}}\int_{0}^{a}L(K\cap E)^{2}\,dt\,du\leq\frac{1}{2}\int_{S^{2}}\int_{0}^{a}L(S_{R}\cap E)^{2}\,dt\,du\leq\] \[\leq\frac{1}{2}\int_{S^{2}}\int_{0}^{a}\left(2\pi\sqrt{R^{2}-(p(u)-t)^{2}}\right)^{2}dt\,du=\int_{S^{2}}2\pi^{2}\left(R^{2}a-\frac{a^{3}}{3}+a^{2}p(u)-ap(u)^{2}\right)\,du\]
and by (3.2)
\[\int L(K\cap E)^{2}dE \leq 8\pi^{3}\left(R^{2}a-\frac{a^{3}}{12}\right)=8\pi^{3}a^{3}\left( \frac{1}{(1+c)^{2}}-\frac{1}{12}\right).\]
We note that by Jung's inequality \(c\geq\sqrt{8/3}-1\), the above result implies
\[\int L(K\cap E)^{2}dE\leq\frac{7}{3}\pi^{3}a^{3}.\]
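Indeed, since the bound in (3.1) is decreasing in \(c\), evaluating it at \(c=\sqrt{8/3}-1\), where \((1+c)^{2}=8/3\), gives

\[8\pi^{3}a^{3}\left(\frac{3}{8}-\frac{1}{12}\right)=8\pi^{3}a^{3}\cdot\frac{7}{24}=\frac{7}{3}\pi^{3}a^{3}.\]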
**Proposition 3.2**.: _Let \(K\) be a compact convex set of constant width \(a\) with \(c=r/R\) the quotient between the inradius and the circumradius of \(K\). Then_
\[4\pi^{3}a^{3}\bigg{(}\frac{8}{3}\frac{c^{3}}{(1+c)^{3}}-\frac{1}{6}\bigg{)}\leq \int_{P\notin K}|\Omega(P)|^{2}dP\leq 4\pi^{3}a^{3}\bigg{(}\frac{11-3c(3c^{2}+c-3)}{6 (1+c)^{3}}\bigg{)},\]
_with equalities for spheres; the lower bound is nonnegative for \(c>0.657\ldots\)._
Proof.: The right-hand side inequality comes from (3.1) and (1.5), substituting \(V\) by \(V_{r}\), where \(V_{r}\) is the volume of the insphere \(S_{r}\) of \(K\). The left-hand side comes from subtracting \(4\pi^{2}V\) in the easily checked relations
\[8\pi^{2}V_{r}=\int L(S_{r}\cap E)^{2}dE\leq\int L(K\cap E)^{2}dE\]
and using the inequality \(V\leq V_{a/2}\), where \(V_{a/2}\) is the volume of the sphere of radius \(a/2\) (see [3]).
_Remark 3.3_.: We note that in terms of the width only, we have
\[\int_{P\notin K}|\Omega(P)|^{2}dP\leq\frac{9}{2}\pi^{3}a^{3}(\sqrt{6}-2).\]
_Remark 3.4_.: One can ask if equality
\[\int L(K\cap E)^{2}dE=\pi MF-4\pi^{2}V\]
that holds for spheres is also true for compact convex sets of constant width. For this case, with the same kind of arguments used above, we are only able to prove
\[\frac{c^{3}-1}{(1+c)^{3}}\leq\frac{1}{16\pi^{3}a^{3}}\left(\int L(K\cap E)^{2} dE-(\pi MF-4\pi^{2}V)\right)\leq\frac{-23c^{3}+3c^{2}+3c+17}{24(1+c)^{3}}\]
with equalities for spheres (\(c=1\)).
|
2310.14611 | Predictive Monitoring against Pattern Regular Languages | In this paper, we focus on the problem of dynamically analysing concurrent
software against high-level temporal specifications. Existing techniques for
runtime monitoring against such specifications are primarily designed for
sequential software and remain inadequate in the presence of concurrency --
violations may be observed only in intricate thread interleavings, requiring
many re-runs of the underlying software. Towards this, we study the problem of
predictive runtime monitoring, inspired by the analogous problem of predictive
data race detection studied extensively recently. The predictive runtime
monitoring question asks, given an execution $\sigma$, if it can be soundly
reordered to expose violations of a specification.
In this paper, we focus on specifications that are given in regular
languages. Our notion of reorderings is trace equivalence, where an execution
is considered a reordering of another if it can be obtained from the latter by
successively commuting adjacent independent actions. We first show that the
predictive monitoring problem admits a super-linear lower bound of $\Omega(n^\alpha)$, where
$n$ is the number of events in the execution, and $\alpha$ is a parameter
describing the degree of commutativity. As a result, predictive runtime
monitoring even in this setting is unlikely to be efficiently solvable.
Towards this, we identify a sub-class of regular languages, called pattern
languages (and their extension generalized pattern languages). Pattern
languages can naturally express specific ordering of some number of (labelled)
events, and have been inspired by a popular empirical hypothesis, the `small bug
depth' hypothesis. More importantly, we show that for pattern (and generalized
pattern) languages, the predictive monitoring problem can be solved using a
constant-space streaming linear-time algorithm. | Zhendong Ang, Umang Mathur | 2023-10-23T06:41:55Z | http://arxiv.org/abs/2310.14611v2 | # Predictive Monitoring against Pattern Regular Languages
###### Abstract
While current bug detection techniques for concurrent software focus on unearthing low-level issues such as data races or deadlocks, they often fall short of discovering more intricate temporal behaviors that can arise even in the absence of such low-level issues. In this paper, we focus on the problem of dynamically analyzing concurrent software against high-level temporal specifications such as LTL. Existing techniques for runtime monitoring against such specifications are primarily designed for sequential software and remain inadequate in the presence of concurrency - violations may be observed only in intricate thread interleavings, requiring many re-runs of the underlying software in conjunction with the analysis. Towards this, we study the problem of _predictive runtime monitoring_, inspired by the analogous problem of _predictive data race detection_ studied extensively recently. The predictive runtime monitoring question asks, given an execution \(\sigma\), if it can be soundly _reordered_ to expose violations of a specification. In general, this problem may become easily intractable when either the specifications or the notion of reorderings used is complex.
In this paper, we focus on specifications that are given in regular languages. Our notion of reorderings is _trace equivalence_, where an execution is considered a reordering of another if it can be obtained from the latter by successively commuting adjacent independent actions. We first show that, even in this simplistic setting, the predictive monitoring problem admits a super-linear lower bound of \(\Omega(n^{\alpha})\), where \(n\) is the number of events in the execution, and \(\alpha\) is a parameter describing the degree of commutativity, and typically corresponds to the number of threads in the execution. As a result, predictive runtime monitoring even in this setting is unlikely to be efficiently solvable, unlike in the non-predictive setting where the problem can be checked using a deterministic finite automaton (and thus, a constant-space streaming linear-time algorithm).
Towards this, we identify a sub-class of specifications that we call _pattern languages_ (and their extension _generalized pattern languages_). Pattern languages can naturally express specific ordering of some number of (labeled) events, and have been inspired by popular empirical hypotheses underlying many concurrency bug detection approaches such as the `small bug depth' hypothesis. More importantly, we show that for pattern (and generalized pattern) languages, the predictive monitoring problem can be solved using a constant-space streaming linear-time algorithm. We implement and evaluate our algorithm PatternTrack on benchmarks from the literature and show that it is effective in monitoring large-scale applications.
Concurrent software, dynamic analysis, predictive monitoring, complexity
## 1. Introduction
Writing reliable concurrent programs remains a challenge to date. Subtle bugs, arising due to intricate choreography of threads, often evade rigorous testing but appear in deployment under intense workloads. The first line of defense against such bugs is tools that enable developers to find concurrency errors automatically. Towards this, both static and dynamic analysis approaches have been proposed. Static approaches (Blackshear et al., 2018; Engler and Ashcraft, 2003; Naik et al., 2006) are typically geared towards certifying program correctness and can overwhelm developers with excessive false positive reports (Sadowski et al., 2018). Dynamic analysis techniques, on the other hand, remain the preferred approach for the task of automated bug detection. Here one executes the underlying program, observes its behavior, and infers the presence or absence of bugs based on this observed behavior. Even though such techniques are implicitly incomplete, they remain popular in practice, thanks to the inherent desirable properties such as low overhead and soundness of bug reports. Unsurprisingly, such techniques enjoy more widespread adoption (Serebryany and Iskhodzhanov, 2009).
Traditional dynamic analysis approaches for detecting concurrency bugs often cater to implicit specifications such as data race freedom, deadlock freedom or atomicity of blocks of code. However, such techniques do not attempt to expose undesirable behaviors due to a faulty order of interactions of threads, which are nevertheless symptomatic of serious failures. In practice, developers often rely on specific properties such as class invariants, or temporal behaviors such as `file f cannot be closed in between the creation of f and an access to f', to reason about the overall safety of their software. Nevertheless, the validity of such invariants often relies on complex synchronization being put in place, a task that even the most expert developers struggle to get right. Tools such as data race detectors are not helpful either; they fail to expose such high-level problems.
Consider, for example, the simplistic Java program \(P\) in Figure 1(a). This is a simplified code snippet from the GitHub project antlworks1. The class DBPlayer contains two fields, count and inputs, and two methods, play and reset, both of which write to both fields. In order to keep count and inputs in a consistent state, executions of the methods play and reset should behave atomically. Observe that this is a high-level data race, rather than a data race in the classical sense (Artho et al., 2003). This type of violation is also studied in (Vaziri et al., 2006).
Footnote 1: [https://github.com/antlr/antlworks](https://github.com/antlr/antlworks)
The problematic behavior in the above example can, in fact, be expressed in a temporal specification formalism like LTL, or as a regular language \(L_{\text{fail}}\) that expresses the order in which the calls to addCall, clearCall, reset.write(count), and play.write(count) are made. Such specification languages have been extensively studied in the context of runtime verification and efficient algorithms and tools have been developed to monitor such specifications. Such techniques, however, are not well-suited for monitoring concurrent programs. Thanks to non-determinism due to thread scheduling, even the most well-engineered LTL-monitoring technique may require many re-executions of the program under test to eventually observe a violation of a desirable property. Consider again the program \(P\) from Figure 1(a), and the execution \(\sigma_{\text{safe}}\) generated when monitoring \(P\) (Figure 1(b)). Since \(\sigma_{\text{safe}}\notin L_{\text{fail}}\), the observed execution fails to expose the violation encoded as \(L_{\text{fail}}\), even though the very similar execution \(\sigma_{\text{fail}}\) (Figure 1(c)) of the same program \(P\) can very well expose the problematic behavior. This highlights the lack of robustness in traditional monitoring approaches that simply check whether an execution observed during dynamic analysis witnesses a specification.
To tackle this challenge, we borrow wisdom from _predictive_ approaches (Huang et al., 2015, 2014; Kalhauge and Palsberg, 2018; Kini et al., 2017; Mathur et al., 2021; Pavlogiannis, 2019; Said et al., 2011; Smaragdakis et al., 2012), that are capable of exposing low-level bugs such as data races even starting from executions that do not explicitly observe the concurrency bug under
question. Towards this, we consider the analogous _predictive runtime monitoring_ question -- given a specification, encoded as a language \(L\), and given an execution \(\sigma\) of some concurrent program \(P\), can \(\sigma\) be _reordered_, in a sound manner, to an execution \(\sigma^{\prime}\) so that \(\sigma^{\prime}\in L\)? Observe that an efficient solution to the predictive runtime monitoring problem has the potential of enhancing coverage of traditional runtime verification approaches that only focus on the observed execution, when used for concurrent software.
Even in the context of the simplest specification -- data races -- the predictive question is intractable, in general [14]. Nevertheless, tractable and linear-time algorithms for predictive data race detection have been proposed recently [15, 14]. GPredict [13] also uses predictive analysis to detect high-level concurrency properties. However, it is SMT-based and not scalable in practice. In this work, we aim to develop efficient algorithms for predictive monitoring against richer specifications. In the context of data races, the key insight underlying efficient algorithms is to restrict the search space of reorderings, and is also central to our work presented here.
We consider reorderings described by _Mazurkiewicz trace equivalence_[11]. Here, one fixes an _independence relation_, consisting of pairs of actions that can be commuted when present next to each other in any context. With such a commutativity specification, two executions are deemed equivalent if they can be reached from each other by successive commutations of independent actions. Indeed, the most popular approaches for race detection -- those based on the
Figure 1: Example Java program P and its executions. Interleaving play() and reset() can leave the values of count and inputs in an inconsistent state.
_happens-before_ partial order [Flanagan and Freund, 2009; Itzkovitz et al., 1999] -- essentially employ this reasoning principle and, as a result, can be implemented using a fast linear-time algorithm.
In this paper, we study the problem of predictive trace monitoring against a regular language \(L\) -- given an execution \(\sigma\), is \(\sigma\) Mazurkiewicz trace equivalent to an execution that belongs to \(L\)? The problem has previously been studied in a theoretical setting [Bertoni et al., 1989], where the authors proposed an algorithm that runs in time \(O(n^{\alpha})\) for executions with \(n\) events; \(\alpha\) is a parameter given by the independence relation, and is typically equal to the number of threads in the execution. Such an algorithm is unlikely to be practical for large-scale software applications where executions typically contain millions of events. Unfortunately, as we show in our paper, in general, this problem cannot be solved using a faster algorithm -- we show a matching conditional lower bound of \(\Omega(n^{\alpha})\) using techniques from fine-grained complexity [Williams, 2018].
To this end, we identify a class of specifications that we call _pattern_ regular languages for which predictive monitoring can be performed efficiently. This class of pattern languages has been inspired by systematic concurrency bug detection approaches that often rely on empirical hypotheses such as the _small bug depth hypothesis_ -- "_Empirically, though, many bugs in programs depend on the precise ordering of a small number of events \(\ldots\) there is some constant \(d\) and a subset \(\ldots\) of events such that some ordering of these \(d\) events already exposes the bug no matter how all other events are ordered._" [Chistikov et al., 2016]. Pattern languages are the natural class of specifications under such a hypothesis -- a language \(L\) is a pattern language if it is of the form \(\mathsf{Patt}_{a_{1},\ldots,a_{d}}=\Sigma^{*}a_{1}\Sigma^{*}\ldots\Sigma^{*}a _{d}\Sigma^{*}\), where \(\Sigma\) is the alphabet labeling events, and \(a_{1},\ldots,a_{d}\in\Sigma\) are \(d\) symbols (not necessarily distinct from each other). We also propose an extension, the class of _generalized_ pattern languages which are unions of pattern languages. We expect that this class of specifications, previously deployed in domains such as pattern mining [Agrawal and Srikant, 1995] or root cause analysis [Murali et al., 2021], will be useful in advancing the state of the art in concurrency testing, given the success of past approaches based on similar empirical hypotheses including context bounding [Burckhardt et al., 2010; Emmi et al., 2011; Musuvathi and Qadeer, 2007], small bug depth [Burckhardt et al., 2010; Chistikov et al., 2016], delay bounding [Emmi et al., 2011], coarse interleaving hypothesis [Kasikci et al., 2017] for testing Java-like multi-threaded software programs, event-driven and asynchronous programs, distributed systems, and more recently for testing and model checking of weak memory behaviors [Abdulla et al., 2019; Gao et al., 2023].
The main result of this work is that the class of (generalized) pattern languages admits a constant-space linear-time algorithm for the predictive monitoring problem under trace equivalence. This algorithm relies on several interesting insights specific to pattern languages that enable our efficient monitoring algorithm, and is of independent interest.
We show that, in fact, we can use _vector clocks_ to further develop a fast and scalable predictive monitoring algorithm PatternTrack. We implement PatternTrack in Java and evaluate its performance over a set of benchmark Java applications derived from prior works. Our comparison of PatternTrack against the classical algorithm due to [Bertoni et al., 1989] reveals the effectiveness of both a specialized class of specifications and a linear-time algorithm for predictive monitoring.
The rest of the paper is organized as follows. In Section 2, we discuss background on modeling events, executions and questions in runtime monitoring. In Section 3, we recall trace equivalence and present our hardness result for predictive trace monitoring against regular languages (Theorem 3.2). In Section 4, we formally define the class of pattern and generalized pattern languages, and provide language-theoretic characterizations of these languages. In Section 5, we present our main result -- a constant-space streaming algorithm for predictive trace monitoring against generalized pattern regular languages, and a more practical algorithm PatternTrack that uses vector clocks (Section 5) for the same task. We present our experimental evaluation in Section 6 and conclude in Section 8.
## 2. Preliminaries
In this section, we present relevant background and notations useful for the rest of the paper.
**Events and Executions.** Our primary subject of study is concurrent programs and their executions with a focus on developing monitoring algorithms for them. In this regard, executions can be modeled as sequences of events. For this, we will consider an _event alphabet_, or simply, alphabet \(\Sigma\), which will intuitively represent a set of relevant events observed during program execution. As an example, \(\Sigma\) can include information about instructions corresponding to method invocations/returns, or even load or store of memory locations. For the purpose of modeling concurrent programs, the symbols in our alphabet \(\Sigma\) also contain information about thread identifiers, belonging to a fixed set \(\mathcal{T}\). An event is a unique occurrence of a symbol from \(\Sigma\). Formally, an event is a tuple \(e=\langle id,a\rangle\), where \(id\) is the unique identifier of \(e\), while \(a\in\Sigma\) is the label of \(e\); we will denote the identifiers and labels of \(e\) using the notation \(\operatorname{lab}(e)=a\) and \(\operatorname{id}(e)=id\). In the context of concurrent executions, the labels of events will be of the form \(a=[t,op]\in\Sigma\) to denote information about the thread \(t\in\mathcal{T}\) that performs the corresponding event and the operation \(op\) performed in that event. We will often omit the identifier \(id\) of events when it is clear from the context. An execution can then be modeled as a sequence of events with labels from \(\Sigma\). We use \(\operatorname{Events}_{\sigma}\) to denote the set of events occurring in an execution \(\sigma\). We will use \(<^{\sigma}\) to denote the total order induced by \(\sigma\), i.e., \(e_{1}<^{\sigma}e_{2}\) iff the event \(e_{1}\) appears before \(e_{2}\) in the sequence \(\sigma\). The word constructed by concatenating the labels of \(\sigma\) will be denoted by \(\operatorname{lab}(\sigma)\in\Sigma^{*}\), and we will often conflate \(\sigma\) with \(\operatorname{lab}(\sigma)\) when the identifiers of events are not important. We will use \(|\sigma|\) to denote the number of events in \(\sigma\).
**Runtime Monitoring.** Runtime verification has emerged as a powerful technique for enhancing reliability of software and hardware systems by augmenting traditional software testing methods with more expressive specifications that can be checked or _monitored_ during the execution. One of the primary components in a traditional runtime monitoring workflow is the choice of specifications to be monitored at runtime. In general, many different classes of specifications have been considered in the literature and in practice, including temporal logic such as LTL [20] and variants [14] with quantitative [13] and timing aspects [15]. More expressive variants such as extensions with calls and returns [13], extended regular expressions or vanilla regular languages have also been considered. A large class of these specifications can be described by a language (or a set of executions) over a chosen alphabet \(\Sigma\) of interest. Examples include LTL, FOLTL, extended-regular expressions [21], etc., and the more general class of regular languages. For a language \(L\) over the alphabet \(\Sigma\), the runtime monitoring problem then asks -- given an execution \(\sigma\in\Sigma^{*}\) coming from a program \(P\), does \(\sigma\) satisfy the specification \(L\), i.e., \(\sigma\in L\)? When \(L\) represents an undesired (resp. desired) behavior, a positive (resp. negative) answer to the membership question \(\sigma\in L\) can then be flagged to developers or system engineers who can fix the erroneous behavior (or in some cases, refine the specification). In this paper, we study the monitoring problem in the context of concurrent programs, with a focus on developing more robust solutions to the monitoring problem. In Example 2.1, we motivate the _predictive_ monitoring problem and formally describe it subsequently.
**Example 2.1**.: Consider the simple Java program \(P\), shown in Figure 1(a), that implements a class DBPlayer with fields count and inputs, and member methods play (for adding numbers to inputs and logging the position into count) and reset (for clearing inputs and setting count to 0). The function main creates a DBPlayer object, and then calls play and reset concurrently in different threads. Suppose we are interested in observing the behaviors of the program that lead to an inconsistent state of the fields count and inputs. The events thus generated belong to the alphabet
\(\Sigma_{\mathrm{Ex}}=\{[t,op]\,|\,t\in\mathcal{T},op\in\mathcal{O}\}\), where \(\mathcal{T}=\{t_{\mathrm{main}},t_{1},t_{2}\}\) is the set of threads corresponding to the main function and the two spawned threads, while \(\mathcal{O}=\{\mathsf{fork}(t)\,|\,t\in\mathcal{T}\}\cup\{\mathsf{write}(\mathsf{o})\,|\,\mathsf{o}\in\{\mathsf{inputs},\mathsf{count}\}\}\cup\{\mathsf{acquire}(\ell),\mathsf{release}(\ell)\}\cup\{\mathsf{add}_{\mathsf{Call}},\mathsf{add}_{\mathsf{Return}},\mathsf{clear}_{\mathsf{Call}},\mathsf{clear}_{\mathsf{Return}}\}\). Observe that an execution \(\sigma\) of this program might leave the fields in an inconsistent state if it belongs to the language that constrains the order of the calls to add and clear and of the two writes to count:
\[L_{\mathrm{fail}}=\Sigma_{\mathrm{Ex}}^{*}\cdot[t_{2},\mathsf{add}_{\mathsf{ Call}}]\cdot\Sigma_{\mathrm{Ex}}^{*}\cdot[t_{1},\mathsf{clear}_{\mathsf{ Call}}]\cdot\Sigma_{\mathrm{Ex}}^{*}\cdot[t_{1},\mathsf{write}(\mathsf{count})]\cdot \Sigma_{\mathrm{Ex}}^{*}\cdot[t_{2},\mathsf{write}(\mathsf{count})]\cdot\Sigma_{ \mathrm{Ex}}^{*}\]
In Figure 0(b), we show an execution \(\sigma_{\mathrm{safe}}\) of \(P\), which is a sequence of 16 events \(e_{1},\ldots,e_{16}\), labeled with \(\Sigma_{\mathrm{Ex}}\). Observe that \(\sigma_{\mathrm{safe}}\notin L_{\mathrm{fail}}\) fails to witness the buggy behavior of \(P\), and would deem the task of runtime monitoring against \(L_{\mathrm{fail}}\) unsuccessful. On the other hand, the bug would be exposed successfully if the execution \(\sigma_{\mathrm{fail}}\) (Figure 0(c)) was observed when monitoring \(P\). More importantly, observe that, \(\sigma_{\mathrm{fail}}\) can, in fact, be obtained by _reordering_ the events of \(\sigma_{\mathrm{safe}}\), hinting at the possibility of enhancing the effectiveness of runtime monitoring via _prediction_! Notice that an execution belonging to \(L_{\mathrm{fail}}\) may not immediately leave an inconsistent state. \(\sigma_{\mathrm{fail}}\) here is actually benign, but it indicates another execution that \(e_{13}\) occurs before \(e_{5}\), which is invalid. We choose our \(L_{\mathrm{fail}}\) in this way that the call to add, clear are included instead of the write to inputs, because \([t_{2},\mathsf{clear}_{\mathsf{Call}}]\) and \([t_{2},\mathsf{add}_{\mathsf{Call}}]\) are independent so that we can reorder them soundly.
Example 2.1 illustrates the challenge of naively adapting the traditional runtime monitoring workflow for the case when the software under test exhibits concurrency -- even though the underlying software may violate a specification, exposing the precise execution that witnesses such a violation is like finding a needle in a haystack. This is because, even under the same input, not all thread interleavings may witness a violation of a temporal specification of interest, and the precise interleaving that indeed witnesses the violation may require careful orchestration from the thread scheduler. As a result, the bug may remain unexposed even after several attempts of executing the program. The emerging class of _predictive analysis_ techniques (Kini et al., 2017; Mathur et al., 2021; Pavlogiannis, 2019; Smaragdakis et al., 2012) attempts to address this problem in the context of concurrency bugs such as data races and deadlocks. Here, one observes a single execution of a program, and _infers_ additional executions (from the observed one) that are guaranteed to be generated by the same underlying program, and also witness a violation of a property of interest. Such techniques thus partially resolve the dependence on the often demonic non-determinism due to thread scheduler, and pro-actively increase coverage when testing concurrent programs.
**Correct Reorderings.** In order to formalize the predictive analysis framework, we require a notion of the set of _feasible_ or _correct_ reorderings of a given execution(Serbanuta et al., 2013). For a given execution \(\sigma\) over \(\Sigma\), the set \(\mathsf{CRorderings}(\sigma)\) of correct reorderings of \(\sigma\) represents the set of executions that will be generated by any program \(P\) that generates \(\sigma\). The formal notion of correct reorderings can often be obtained by formally modeling the programming language, together with the semantics of concurrent objects (Herlihy and Wing, 1990) such as threads, shared memory and synchronization objects such as locks(Serbanuta et al., 2013). In the case of multi-threaded programs, it is customary to include symbols corresponding to reads and writes to all shared objects and lock acquisition and release events in the alphabet \(\Sigma\). Prior works have developed several different (yet related) definitions to precisely capture this notion; the most prominent notion is due to (Smaragdakis et al., 2012), which we describe next.
Given a well-formed2 execution \(\sigma\in\Sigma^{*}\), we have that an execution \(\rho\in\mathsf{CReorderings}(\sigma)\) iff the following conditions hold -- (1) The set of events in \(\rho\) is a subset of those in \(\sigma\). (2) \(\rho\) is well-formed (this means, for example, that critical sections on the same lock do not overlap in \(\rho\)). (3) \(\rho\) preserves the _program-order_ of \(\sigma\), i.e., for every thread \(t\), the sequence of events \(\rho|_{t}\) obtained by projecting \(\rho\) to events of thread \(t\) is a prefix of the analogous sequence \(\sigma|_{t}\) obtained by projecting \(\sigma\) to \(t\). (4) \(\rho\) preserves the control flow of \(\sigma\), i.e., every read event \(e\) in \(\rho\) observes the same write event \(e^{\prime}\) as in \(\sigma\). Observe that, for programming languages such as Java or C/C++, \(\rho\in\mathsf{CReorderings}(\sigma)\) implies that indeed any control flow undertaken by \(\sigma\) can also be undertaken by \(\rho\), and thus, any program that generates \(\sigma\) can also generate \(\rho\), possibly under a different thread interleaving. In general, the more permissive the set \(\mathsf{CReorderings}(\sigma)\), the higher the computational complexity involved in reasoning with it. For example, for the precise definition due to [22], the race _prediction_ question -- given an execution \(\sigma\), check if there is a \(\rho\in\mathsf{CReorderings}(\sigma)\) that exhibits a data race -- is an \(\mathsf{NP}\)-hard problem [17]. On the other hand, for simpler (and less permissive) notions such as trace equivalence, which we recall in Section 3, the corresponding race prediction question becomes linear-time (and also constant-space) checkable!
**Predictive Monitoring.** We are now ready to formalize the predictive monitoring framework, by generalizing related notions such as predictive data race detection [23, 24] or predictive deadlock detection [23]. For a specification language \(L\), the predictive monitoring question asks, given an execution \(\sigma\in\Sigma^{*}\), is there an execution \(\rho\) such that \(\rho\in\mathsf{CReorderings}(\sigma)\cap L\), i.e., \(\rho\) is a correct reordering of \(\sigma\) that also witnesses the specification \(L\). In the context of Example 2.1, while the execution \(\sigma_{\text{safe}}\) (Figure 0(b)) is a negative instance of the monitoring problem against the language \(L_{\text{fail}}\), it is indeed a positive instance of _predictive_ monitoring problem against the same language, because the witnessing execution \(\sigma_{\text{fail}}\in L_{\text{fail}}\) is a _correct reordering_ of \(\sigma_{\text{safe}}\). In fact, notions such as predictive data race detection can be easily formulated in this framework as follows. Let us fix the set of threads to be \(\mathcal{T}\), and also a set of memory locations/objects \(\mathcal{X}\). Consider the alphabet of read and write events:
\[\Sigma_{\text{RW}}=\{[t,\mathsf{w}(x)],[t,\mathsf{r}(x)]\mid t\in\mathcal{T},x\in\mathcal{X}\}.\]
The following language over \(\Sigma_{\text{RW}}\) then represents the set of all executions with a data race.
\[L_{\text{race}}=\bigcup_{\begin{subarray}{c}t\neq t^{\prime}\in\mathcal{T}\\ x\in\mathcal{X}\end{subarray}}\Sigma^{*}\bigg{(}[t,\text{w}(x)][t^{\prime}, \text{w}(x)]+[t,\text{w}(x)][t^{\prime},r(x)]+[t,r(x)][t^{\prime},\text{w}(x)] \bigg{)}\Sigma^{*}\]
The race prediction question then asks to check -- given an execution \(\sigma\), is there an execution \(\rho\in\mathsf{CReorderings}(\sigma)\cap L_{\text{race}}\)? Observe that, \(L_{\text{race}}\) is a regular language over \(\Sigma_{\text{RW}}\). Likewise, the deadlock prediction question can also be formulated analogously using a regular language \(L_{\text{deadlock}}\) over an alphabet that also contains information about lock acquire and release events:
\[\Sigma_{\text{RWL}}=\Sigma_{\text{RW}}\cup\{[t,\text{acq}(\ell)],[t,\text{ rel}(\ell)]\mid t\in\mathcal{T},\ell\in\mathcal{L}\},\]
where \(\mathcal{L}\) is a fixed set of lock identifiers. We skip the precise definition of \(L_{\text{deadlock}}\) here.
## 3. Trace Languages and Predictive Monitoring
In this section, we will recall trace3 languages (Mazurkiewicz, 1987), and discuss their membership question, its connections to predictive monitoring and prior complexity-theoretic results. We will finally present our hardness result, which is the first contribution of this work.
Footnote 3: Readers must note that the use of the term _trace_ in this paper is specifically distinguished from its more contemporary use (to denote a specific log or sequence of events that happen during an execution). The usage of _trace_ in this paper is derived from the more traditional language theoretic notion of Mazurkiewicz _traces_ which denote (equivalence) _classes_ of strings over some alphabet, instead of a single string modeling a single execution.
**Mazurkiewicz Trace Equivalence.** Trace theory, introduced by Antoni Mazurkiewicz (Mazurkiewicz, 1987) is a simple yet systematic framework for reasoning about the computation of concurrent programs. The broad idea of trace theory is to characterize when two executions of a concurrent programs must be deemed equivalent, based on the notion of commutativity of dependent actions (or labels). Formally an _independence_ (or _concurrency_) relation over an alphabet of actions \(\Sigma\) is an irreflexive and symmetric relation \(\mathcal{I}\subseteq\Sigma\times\Sigma\) denoting all pairs of actions that can intuitively be deemed pairwise independent. Together, the pair \((\Sigma,\mathcal{I})\) constitute a _concurrent alphabet_. Then, the _trace equivalence_ induced by \(\mathcal{I}\), denoted \(\equiv_{\mathcal{I}}\) is the smallest equivalence class over \(\Sigma^{*}\) such that for every \((a,b)\in\mathcal{I}\) and for every two words \(w_{1},w_{2}\in\Sigma^{*}\), we have
\[w_{1}\cdot a\cdot b\cdot w_{2}\equiv_{\mathcal{I}}w_{1}\cdot b\cdot a\cdot w_ {2}.\]
For a string \(w\in\Sigma^{*}\), the trace equivalence class of \(w\) is \([w]_{\mathcal{I}}=\{w^{\prime}\mid w\equiv_{\mathcal{I}}w^{\prime}\}\).
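As an aside, trace equivalence of two concrete words can be decided without enumerating the equivalence class: by the classical projection lemma of trace theory, \(w\equiv_{\mathcal{I}}w^{\prime}\) iff \(w\) and \(w^{\prime}\) agree on their projections onto every pair of dependent letters (each letter being dependent on itself). A minimal Java sketch of this check, assuming the dependence relation is given explicitly as a set of unordered letter pairs:

```java
import java.util.*;

// Minimal sketch: deciding w ≡_I w' via the projection lemma. A pair of
// letters is dependent iff it is listed in 'dependent' (or the letters are equal).
final class TraceEquivalence {
    static boolean equivalent(String w1, String w2, Set<String> dependent) {
        Set<Character> alphabet = new TreeSet<>();
        for (char c : (w1 + w2).toCharArray()) alphabet.add(c);
        for (char a : alphabet) {
            for (char b : alphabet) {
                boolean dep = a == b || dependent.contains("" + a + b)
                                     || dependent.contains("" + b + a);
                if (dep && !project(w1, a, b).equals(project(w2, a, b))) {
                    return false; // some dependent pair is ordered differently
                }
            }
        }
        return true;
    }

    // Subsequence of w consisting of the occurrences of a and b only.
    private static String project(String w, char a, char b) {
        StringBuilder sb = new StringBuilder();
        for (char c : w.toCharArray()) if (c == a || c == b) sb.append(c);
        return sb.toString();
    }

    public static void main(String[] args) {
        Set<String> dep = Set.of("ab"); // only a and b are dependent
        System.out.println(equivalent("abc", "acb", dep)); // true: b and c commute
        System.out.println(equivalent("abc", "bac", dep)); // false: a and b do not
    }
}
```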
**Trace Partial Order.** An alternative characterization of trace equivalence is often given in terms of the partial order induced due to the concurrent alphabet. Formally, given a concurrent alphabet \((\Sigma,\mathcal{I})\) and a sequence of events \(\sigma\) labeled with symbols from \(\Sigma\), the partial order induced due to \(\sigma\), denoted \(\leq_{\mathcal{I}}^{\sigma}\) is the smallest partial order over \(\mathsf{Events}_{\sigma}\) such that for any two events \(e_{1},e_{2}\in\mathsf{Events}_{\sigma}\), if \(e_{1}\) appears before \(e_{2}\) (i.e. \(e_{1}<^{\sigma}e_{2}\)) in the sequence \(\sigma\) and \((\mathsf{lab}(e_{1}),\mathsf{lab}(e_{2}))\not\in\mathcal{I}\), then \(e_{1}\leq_{\mathcal{I}}^{\sigma}e_{2}\). One can then show that, the set of linearizations \(\mathsf{Lin}(\leq_{\mathcal{I}}^{\sigma})\) of this partial order precisely captures the set of words that are trace equivalent to the label of \(\sigma\):
Proposition 3.1.: Let \(\sigma\) be an execution over \(\Sigma\). We have \([\mathsf{lab}(\sigma)]_{\mathcal{I}}=\{\mathsf{lab}(\sigma^{\prime})\mid\sigma^{\prime}\in\mathsf{Lin}(\leq_{\mathcal{I}}^{\sigma})\}\).
As a consequence of Proposition 3.1, we lift the notion of trace equivalence from words over \(\Sigma\) to executions over \(\Sigma\) as follows. Given two executions \(\sigma\) and \(\sigma^{\prime}\) over the same set of events (i.e., \(\mathsf{Events}_{\sigma}=\mathsf{Events}_{\sigma^{\prime}}\)), we say that \(\sigma\equiv_{\mathcal{I}}\sigma^{\prime}\) if \(\sigma^{\prime}\in\mathsf{Lin}(\leq_{\mathcal{I}}^{\sigma})\). Likewise, we use \([\sigma]_{\mathcal{I}}\) to denote the set of executions \(\sigma^{\prime}\) for which \(\sigma\equiv_{\mathcal{I}}\sigma^{\prime}\).
The partial order view of trace equivalence has, in fact, been routinely exploited in program analysis for concurrency. Dynamic analysis techniques such as those designed for data race detection (Flanagan and Freund, 2009; Itzkovitz et al., 1999) construct a partial order, namely the _happens-before_ partial order, which essentially characterizes the Mazurkiewicz trace equivalence of an appropriately defined concurrency alphabet. Likewise, optimization techniques employed in bounded model checking such as dynamic partial order reduction (Flanagan and Godefroid, 2005) are rooted in Mazurkiewicz trace theory in a similar precise sense.
**Mazurkiewicz Equivalence v/s Correct Reorderings.** Mazurkiewicz trace equivalence provides a sound (but incomplete) characterization of the space of correct reorderings in the following sense. We first fix an independence relation over the alphabet \(\Sigma\) that soundly characterizes the commutativity induced by the underlying programming language. Consider, for example, the alphabet \(\Sigma_{\mathrm{RWL}}\) described previously in Equation (2) over some set of threads \(\mathcal{T}\), memory locations \(\mathcal{X}\) and locks \(\mathcal{L}\). Let \(\mathcal{C}_{\text{RW}}=\{(a_{1}(x),a_{2}(x))\mid\mathsf{w}\in\{a_{1},a_{2}\}\text{ and }x\in\mathcal{X}\}\) be the set of conflicting memory operations (i.e., pairs of accesses to the same location, at least one of which is a write), and let \(\mathcal{C}_{\mathsf{L}}=\{(a_{1}(\ell),a_{2}(\ell))\,|\,a_{1},a_{2}\in\{\mathsf{acq},\mathsf{rel}\}\text{ and }\ell\in\mathcal{L}\}\) be the set of conflicting lock operations. Now consider the independence relation defined as follows:
\[\mathcal{I}_{\text{RWL}}=\{([t_{1},o_{1}],[t_{2},o_{2}])\,|\,t_{1}\neq t_{2}\text { and }(o_{1},o_{2})\notin\mathcal{C}_{\text{RW}}\cup\mathcal{C}_{\mathsf{L}}\}\]
The above definition of independence relation has been carefully crafted to ensure that the resulting trace equivalence satisfies two properties. First, the relative order between a read event \(e_{r}\) and a conflicting write event \(e_{w}\) (i.e., \((\operatorname{lab}(e_{r}),\operatorname{lab}(e_{w}))\in\mathcal{C}_{\text{RW}}\)) does not get flipped, ensuring that for any two Mazurkiewicz equivalent executions \(\sigma\) and \(\sigma^{\prime}\), the control flow taken by \(\sigma\) will also be taken by \(\sigma^{\prime}\). Furthermore, the relative order of conflicting lock operations does not change, ensuring that if \(\sigma\) was well-formed (i.e., critical sections on the same lock do not overlap in \(\sigma\)), then so is \(\sigma^{\prime}\). This gives us the following.
Proposition 3.2.: Let \(\sigma\) be an execution over \(\Sigma_{\text{RWL}}\). We have \(\mathsf{Lin}(\leq_{\mathcal{I}_{\text{RWL}}}^{\sigma})\subseteq\mathsf{CReorderings}(\sigma)\).
We remark that data race detection techniques like FastTrack (Flanagan and Freund, 2009) are thus implicitly predictive and _sound_ because they reason about trace equivalence induced by the concurrent alphabet \((\Sigma_{\text{RWL}},\mathcal{I}_{\text{RWL}})\).
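For concreteness, \(\mathcal{I}_{\text{RWL}}\) can be rendered directly as a predicate over event labels; the following minimal Java sketch uses our own label representation and is not taken from any existing race detector:

```java
// Minimal sketch of the independence check for Sigma_RWL; the Label
// representation below is ours. op is one of "R", "W" (memory accesses
// on a location) or "ACQ", "REL" (operations on a lock).
record Label(int thread, String op, String target) {}

final class IndependenceRWL {
    // Conflicting memory operations: same location, at least one write.
    private static boolean conflictRW(Label a, Label b) {
        return isMem(a) && isMem(b) && a.target().equals(b.target())
               && (a.op().equals("W") || b.op().equals("W"));
    }

    // Conflicting lock operations: acquire/release on the same lock.
    private static boolean conflictL(Label a, Label b) {
        return isLock(a) && isLock(b) && a.target().equals(b.target());
    }

    private static boolean isMem(Label l)  { return l.op().equals("R") || l.op().equals("W"); }
    private static boolean isLock(Label l) { return l.op().equals("ACQ") || l.op().equals("REL"); }

    /** (a, b) ∈ I_RWL iff the threads differ and the labels do not conflict. */
    static boolean independent(Label a, Label b) {
        return a.thread() != b.thread() && !conflictRW(a, b) && !conflictL(a, b);
    }
}
```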
Example 3.1.: Consider again the example program \(P\) (Figure 1(a)) from Example 2.1. Let \(\operatorname{Dep}_{\text{Ex}}\) be the set of dependent instructions, namely those pairs \((op_{1},op_{2})\) such that there is an object \(\mathsf{o}\in\{\mathsf{count},\mathsf{inputs}\}\) such that \((op_{1},op_{2})\in\{(\mathsf{write}(\mathsf{o}),\mathsf{write}(\mathsf{o}))\}\). Using this, we can define the independence relation as \(\mathcal{I}_{\text{Ex}}=\{([t_{1},op_{1}],[t_{2},op_{2}])\,|\,t_{1}\neq t_{2}\text{ and }(op_{1},op_{2})\notin\operatorname{Dep}_{\text{Ex}}\}\). Observe that any two consecutive instructions which are independent according to \(\mathcal{I}_{\text{Ex}}\) can be commuted without affecting the control flow of the program. Thus, equivalence induced by \(\mathcal{I}_{\text{Ex}}\) soundly captures correct reorderings. Finally, observe that the two executions from Figure 1 are also deemed equivalent according to the independence relation defined here: \(\sigma_{\text{safe}}\equiv_{\mathcal{I}_{\text{Ex}}}\sigma_{\text{fail}}\).
### Predictive Monitoring for Trace Equivalence
The framework of Mazurkiewicz traces is well-equipped to study the predictive monitoring problem defined henceforth in the context of trace equivalence.
Definition 1 (Predictive Trace Monitoring).: Fix a concurrent alphabet \((\Sigma,\mathcal{I})\) and a language \(L\subseteq\Sigma^{*}\). Given an execution \(\sigma\) over \(\Sigma\) as input, the predictive monitoring problem asks to check if there is an execution \(\sigma^{\prime}\) such that \(\sigma^{\prime}\equiv_{\mathcal{I}}\sigma\) and \(\operatorname{lab}(\sigma^{\prime})\in L\).
We remark that the above problem can be equivalently formulated in terms of words (instead of executions) -- given an execution \(\sigma\) specified as the corresponding word \(w=\operatorname{lab}(\sigma)\), check if \([w]_{\mathcal{I}}\cap L\neq\emptyset\). As an example, for the alphabet \(\Sigma_{\text{Ex}}\) from Example 2.1 and the independence relation \(\mathcal{I}_{\text{Ex}}\) defined in Example 3.1, the predictive monitoring question would return YES for the input \(\sigma_{\text{safe}}\) because \(\sigma_{\text{fail}}\in[\sigma_{\text{safe}}]_{\mathcal{I}_{\text{Ex}}}\cap L _{\text{fail}}\).
**Predictive Trace Monitoring against Regular Languages.** Bertoni, Mauri and Sabadini (Bertoni et al., 1989) studied the problem of predictive trace monitoring against the class of regular (and context-free) languages and proposed an algorithm whose time complexity is given in terms of a parameter of the concurrency alphabet, which is defined as follows. The _width_ \(\mathsf{width}(\Sigma,\mathcal{I})\) of the concurrency alphabet \((\Sigma,\mathcal{I})\) is the size of the largest clique of the undirected graph whose vertices are elements of \(\Sigma\) and edges are elements of \(\mathcal{I}\). We next recall the result of Bertoni et al. relevant to this exposition, and provide a high-level overview of their algorithm subsequently.
Theorem 3.1 (Theorem 6.2 in (Bertoni et al., 1989)).: The predictive monitoring problem against regular languages can be solved using an algorithm that uses \(O(n^{\alpha})\) time and space for input executions of size \(n\), where \(\alpha=\mathsf{width}(\Sigma,\mathcal{I})\) is the width of the concurrency alphabet.
_Ideals_. The algorithm due to Bertoni et al. relies on the observation that the set of _prefixes_ of an equivalence class of a given execution can be defined using an _ideal_. An ideal \(\mathsf{X}\) of \(\sigma\) is a subset \(\mathsf{X}\subseteq\mathsf{Events}_{\sigma}\) such that for every \(e,e^{\prime}\in\mathsf{Events}_{\sigma}\) such that \(e\leq_{\mathcal{I}}^{\sigma}e^{\prime}\), if \(e^{\prime}\in\mathsf{X}\), we also have \(e\in\mathsf{X}\). An event \(e\in\mathsf{X}\) is said to be _maximal_ in \(\mathsf{X}\) if for every \(f\in\mathsf{Events}_{\sigma}\) different from \(e\) such that \(e\leq_{\mathcal{I}}^{\sigma}f\), we have \(f\notin\mathsf{X}\). We use \(\mathsf{max}(\mathsf{X})\) to denote the (unique) set of maximal events of \(\mathsf{X}\). We will use \(\mathsf{Lin}(\leq_{\mathcal{I}}^{\sigma},\mathsf{X})\) to denote the set of linearizations of the set \(\mathsf{X}\) consistent with \(\leq_{\mathcal{I}}^{\sigma}\), i.e., each \(\rho\in\mathsf{Lin}(\leq_{\mathcal{I}}^{\sigma},\mathsf{X})\) is such that \(\mathsf{Events}_{\rho}=\mathsf{X}\) and for every \(e_{1}\leq_{\mathcal{I}}^{\sigma}e_{2}\), if \(e_{1},e_{2}\in\mathsf{X}\), then \(e_{1}<^{\rho}e_{2}\). Given two ideals \(\mathsf{X}\) and \(\mathsf{Y}\) of \(\sigma\), we say that \(\mathsf{X}\) is an immediate predecessor of \(\mathsf{Y}\) if there is an event \(e\in\mathsf{Y}\) such that \(\mathsf{Y}=\mathsf{X}\uplus\{e\}\) (i.e., \(e\notin\mathsf{X}\), and \(e\in\mathsf{max}(\mathsf{Y})\)). We use \(\mathsf{pre}(\mathsf{X})\) to denote the set of all immediate predecessors of \(\mathsf{X}\). The principal ideal generated by an event \(e\in\mathsf{Events}_{\sigma}\), denoted \(\mathsf{PIdeal}(e)\), is the smallest ideal of \(\sigma\) that contains \(e\). There is a one-to-one correspondence between an ideal \(\mathsf{X}\) and the set of its maximal elements. In particular, an ideal \(\mathsf{X}\) can be uniquely represented as \(\mathsf{X}=\bigcup\limits_{e\in\mathsf{max}(\mathsf{X})}\mathsf{PIdeal}(e)\). The number of maximal elements of an ideal of \(\sigma\) is bounded by \(\alpha\). As a result, the number of ideals of \(\sigma\) is bounded by \(O(|\sigma|^{\alpha})\).
Let us now recall the algorithm due to Bertoni et al. Let \(\alpha=\mathsf{width}(\Sigma,\mathcal{I})\) be the width of the concurrency alphabet. Let \(\mathcal{A}_{L}=(Q,Q_{0},\delta,F)\) be the non-deterministic finite automaton corresponding to the regular language \(L\) against which we perform predictive trace monitoring; the size of \(\mathcal{A}_{L}\) is assumed to be \(O(1)\). Let us also fix the input execution \(\sigma\) and let \(w=\mathsf{lab}(\sigma)\). The algorithm computes, for every ideal \(\mathsf{X}\) of \(\sigma\), the set of states \(S_{\mathsf{X}}\subseteq Q\) that can be reached by the automaton \(\mathcal{A}_{L}\), from some initial state, on the label sequence of some linearization of \(\mathsf{X}\), i.e., \(S_{\mathsf{X}}=\{q\,|\,\exists q_{0}\in Q_{0},\rho\in\mathsf{Lin}(\leq_{\mathcal{I}}^{\sigma},\mathsf{X})\text{ such that }q\in\delta^{*}(q_{0},\mathsf{lab}(\rho))\}\). Observe that \(S_{\mathsf{X}}\subseteq Q\) has constant size for any ideal \(\mathsf{X}\). This information can be defined inductively using immediate predecessor ideals: \(S_{\mathsf{X}}=Q_{0}\) if \(\mathsf{X}=\varnothing\), and otherwise, \(S_{\mathsf{X}}=\{q^{\prime}\,|\,\exists\mathsf{X}^{\prime},q,f\text{ such that }\mathsf{X}^{\prime}\in\mathsf{pre}(\mathsf{X}),q\in S_{\mathsf{X}^{\prime}},\{f\}=\mathsf{X}\setminus\mathsf{X}^{\prime}\text{ and }(q,\mathsf{lab}(f),q^{\prime})\in\delta\}\). The computation of the sets \(S_{\mathsf{X}}\) can be performed incrementally in order of the size of the ideals, starting from ideals of the smallest size. The final check for determining whether there is an equivalent execution belonging to \(L\) simply corresponds to checking if \(S_{\mathsf{X}_{\sigma}}\cap F\neq\varnothing\), where \(\mathsf{X}_{\sigma}=\mathsf{Events}_{\sigma}\) is the largest ideal that includes all events. The time spent for each ideal \(\mathsf{X}\) is \(O(1)\), and the total number of ideals is \(O(|\sigma|^{\alpha})\), giving us the desired bound of Theorem 3.1.
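To make the ideal-based dynamic programming concrete, the following is a minimal Python sketch under a toy encoding of ours (the execution is a list of labels, the independence relation is a set of unordered label pairs, and the NFA is a set of transition triples); it illustrates the scheme above, not the authors' implementation.

```python
def predictive_monitor_regular(word, indep, initial, delta, accepting):
    """Bertoni-style DP over ideals; O(n^alpha) ideals for width alpha."""
    n = len(word)
    dep = lambda a, b: a == b or ((a, b) not in indep and (b, a) not in indep)
    # pred[i]: earlier positions whose events must precede event i
    pred = [[j for j in range(i) if dep(word[j], word[i])] for i in range(n)]

    states = {frozenset(): frozenset(initial)}   # S_X for each discovered ideal X
    frontier = [frozenset()]
    while frontier:                              # process ideals layer by layer
        nxt = []
        for X in frontier:
            for e in range(n):
                # X + {e} is an ideal iff every predecessor of e is already in X
                if e not in X and all(j in X for j in pred[e]):
                    Y = X | {e}
                    reached = frozenset(q2 for (q1, a, q2) in delta
                                        if a == word[e] and q1 in states[X])
                    if Y in states:
                        states[Y] |= reached     # union over all predecessor ideals
                    else:
                        states[Y] = reached
                        nxt.append(Y)
        frontier = nxt
    return bool(states[frozenset(range(n))] & accepting)

# For (a b c)^* with all letters pairwise independent, the input "bca" is
# equivalent to "abc" and hence accepted predictively.
delta = {(0, 'a', 1), (1, 'b', 2), (2, 'c', 0)}
indep = {('a', 'b'), ('a', 'c'), ('b', 'c')}
print(predictive_monitor_regular("bca", indep, {0}, delta, {0}))  # True
```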
### Hardness of Predictive Trace Monitoring
The algorithm in (Bertoni et al., 1989) runs in time (and space) \(O(n^{\alpha})\) for executions of size \(n\), where \(\alpha\) is the width of the concurrency alphabet, in the case when the target language is regular. Even though this is polynomial time (assuming \(\alpha\) is constant), it can become prohibitive when deployed for predictive monitoring of executions coming from large-scale concurrent software systems, which routinely generate executions with millions of events. In sharp contrast to this is the case of non-predictive monitoring, which boils down to checking membership in a regular language. The non-predictive monitoring problem for regular languages thus admits a linear-time, one-pass streaming algorithm, the holy grail of runtime verification. The question we ask here is -- for the case of predictive trace monitoring, is there a more efficient algorithm than the \(O(n^{\alpha})\) algorithm proposed by Bertoni et al. (Bertoni et al., 1989)? Is the exponential dependence on \(\alpha\) necessary?
At first glance, predictive trace monitoring, in general, does not admit a constant-space (automata-theoretic) algorithm. Consider, for example, the alphabet \(\Sigma=\{\mathsf{a}_{1},\mathsf{a}_{2},\mathsf{a}_{3}\}\) and the independence relation \(\mathcal{I}=\{(\mathsf{a}_{i},\mathsf{a}_{j})\,|\,i\neq j\in\{1,2,3\}\}\), i.e., all pairs of distinct letters are deemed independent by \(\mathcal{I}\). Now consider the language \(L=(\mathsf{a}_{1}\mathsf{a}_{2}\mathsf{a}_{3})^{*}\). Checking if \([w]_{\mathcal{I}}\cap L\neq\varnothing\) amounts to checking if \(w\in L^{\prime}=\{u\,|\,\text{the number of occurrences of }\mathsf{a}_{1},\mathsf{a}_{2}\text{ and }\mathsf{a}_{3}\text{ in }u\text{ is equal}\}\). Since \(L^{\prime}\) is not a regular language (not even context-free), the predictive trace monitoring problem does not admit a constant-space linear-time algorithm for \(L\). In this work, we establish an even tighter (conditional) lower bound
(Theorem 3.2). Our hardness result is, in fact, stronger in that it applies even for the case of _star-free_ regular languages [14, 15]. Further, our result also establishes that the \(O(n^{\alpha})\) algorithm due to Bertoni et al. is in fact optimal.
**Complexity Theoretic Conjectures.** One of the most widely believed complexity theoretic conjectures is the Strong Exponential Time Hypothesis (SETH), which states that for every \(\epsilon>0\), there is a \(k\) such that no (deterministic or randomized) algorithm determines the satisfiability of a \(k\)-SAT formula over \(n\) propositions in time \(O(2^{(1-\epsilon)n})\). In the proof of our result Theorem 3.2, we will show a reduction from the orthogonal vectors problem \(k\)-OV. An instance of \(k\)-OV is \(k\) sets \(\{A_{1},\ldots,A_{k}\}\), each of which is a set of \(n\) Boolean vectors over \(d\) dimensions, i.e., \(A_{i}\subseteq\{0,1\}^{d}\) and \(|A_{i}|=n\). The \(k\)-OV problem asks to check if there are \(k\) orthogonal vectors, i.e., if there are vectors \(a_{1}\in A_{1}\), \(a_{2}\in A_{2},\ldots,a_{k}\in A_{k}\) such that the norm of their pointwise product vector is \(0\), i.e., \(\sum_{j=1}^{d}\prod_{i=1}^{k}a_{i}[j]=0\). Under SETH, there is no \(O(n^{k-\epsilon}\cdot\mathsf{poly}(d))\) algorithm for \(k\)-OV (no matter what \(\epsilon>0\) we pick) [14].
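For concreteness, a brute-force \(k\)-OV check is a few lines of Python (a sketch of ours; it enumerates all \(O(n^{k})\) tuples, precisely the behavior that, under SETH, cannot be beaten by a polynomial factor in \(n\)):

```python
from itertools import product

def k_ov(sets):
    """sets = [A_1, ..., A_k]; each A_i is a list of 0/1 vectors of dimension d."""
    d = len(sets[0][0])
    # a tuple is orthogonal iff in every dimension some chosen vector is 0
    return any(all(any(v[j] == 0 for v in choice) for j in range(d))
               for choice in product(*sets))      # O(n^k) candidate tuples

# (1,0) and (0,1) are orthogonal, so this 2-OV instance is positive:
print(k_ov([[(1, 0), (1, 1)], [(0, 1), (1, 1)]]))  # True
```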
Theorem 3.2: Assume SETH holds. For each \(\epsilon>0\), there is no algorithm that solves the predictive trace monitoring problem against star-free regular languages (over a concurrent alphabet with width \(\alpha\)) in time \(O(n^{\alpha-\epsilon})\) for input words of length \(n\).
Here we present the proof of Theorem 3.2. The proof is via a _fine-grained_ reduction [14] from the \(k\)-OV problem.
Proof of Theorem 3.2.: We show a fine-grained reduction from \(k\)-OV to the problem of predictive trace monitoring against a star-free language. We fix \(k,d\in\mathbb{N}\). We also fix the alphabet \(\Sigma\), independence relation \(\mathcal{I}\) and the star-free language \(L\) against which we check predictive trace monitoring, as follows. First, the alphabet \(\Sigma\) is partitioned into \(k\) disjoint alphabets, \(\Sigma=\bigcup_{i=1}^{k}\Sigma_{i}\), where
\[\Sigma_{i}=\{\mathsf{a}_{i,j}\,|\,1\leq j\leq d\}\cup\{\#_{i}\}\]
That is, the alphabet consists of one symbol \(\mathsf{a}_{i,j}\) for every pair \((i,j)\) with \(1\leq i\leq k\) and \(1\leq j\leq d\), and additionally one symbol \(\#_{i}\) for each \(1\leq i\leq k\). The independence relation is such that all symbols across partitions are deemed independent, while those within the same partition are deemed dependent. That is,
\[\mathcal{I}=\bigcup_{i\neq j}\Sigma_{i}\times\Sigma_{j}\]
Thus the width of the concurrency alphabet is the number of partitions, i.e., \(\alpha=k\). The language \(L\) is given by the following regular expression:
\[r=\Sigma^{*}\cdot\{\mathtt{a}_{1,1},\mathtt{a}_{2,1},\ldots,\mathtt{a}_{k,1} \}^{+}\cdots\{\mathtt{a}_{1,d},\mathtt{a}_{2,d},\ldots,\mathtt{a}_{k,d}\}^{+} \cdot\Sigma^{*}\]
Observe that \(|\Sigma|=k(d+1)\), \(|\mathcal{I}|=k(k-1)(d+1)^{2}/2\) and \(|r|=O(kd)\). Also observe that \(L\) is a star-free language because \(\Sigma^{*}=\varnothing^{c}\) and, for a subset \(\Delta\subseteq\Sigma\), we have \(\Delta^{+}=\Delta\cdot\Delta^{*}\) and further \(\Delta^{*}=(\cup_{a\in\Sigma\setminus\Delta}\Sigma^{*}a\Sigma^{*})^{c}\), where \(\cdot^{c}\) denotes complementation.
**Reduction.** Given an instance \(\mathcal{A}=\{A_{1},A_{2},\ldots,A_{k}\}\) of \(k\)-OV with \(|A_{i}|=n\) and \(A_{i}\subseteq\{0,1\}^{d}\) for every \(i\), we construct an execution \(\sigma\) such that \(\mathcal{A}\) is a positive instance of \(k\)-OV iff \([\mathtt{lab}(\sigma)]_{\mathcal{I}}\cap L\neq\varnothing\). Our construction ensures that \(|\sigma|\leq k(d+1)n\). For each \(i\), let us denote the vectors of \(A_{i}\) using \(\{v_{1}^{(i)},v_{2}^{(i)},\ldots,v_{n}^{(i)}\}\). The execution \(\sigma\) is then a concatenation, obtained by successively appending smaller sub-executions, one for each set \(A_{i}\):
\[\sigma=\sigma_{1}\circ\sigma_{2}\circ\ldots\circ\sigma_{k}.\]
Further, for each \(1\leq i\leq k\), the sub-execution \(\sigma_{i}\) is a concatenation of smaller sub-executions, one for each vector in \(A_{i}\):

\[\sigma_{i}=\sigma_{i,1}\circ\#_{i}\circ\sigma_{i,2}\circ\#_{i}\cdots\#_{i}\circ\sigma_{i,n}\circ\#_{i}\]
Further, for each \(1\leq i\leq k\) and \(1\leq l\leq n\), the trace \(\sigma_{i,l}\) encodes the vector \(v_{l}^{(i)}\):
\[\sigma_{i,l}=b_{i,1,l}\circ b_{i,2,l}\circ\cdots\circ b_{i,d,l}\]
where
\[b_{i,j,l}=\begin{cases}\mathtt{a}_{i,j}&\text{if }v_{l}^{(i)}[j]=0\\ \varepsilon&\text{otherwise}\end{cases}\]
Figure 2 illustrates our construction. The execution \(\sigma^{\prime}\) in Figure 2(c) is equivalent to \(\sigma\) and also belongs to \(L\); symbols marked in red highlight membership in \(L\) and correspond to the vectors whose pointwise product is the zero vector (also marked in red in Figure 2(a)).
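The construction is mechanical; below is a small Python sketch of ours that builds \(\sigma\) (as a list of labels) and the independence relation from a \(k\)-OV instance. The label encodings ("a", i, j) and ("#", i) are our own choices, not fixed by the paper.

```python
def build_execution(sets):
    """sets = [A_1, ..., A_k]; returns sigma as a list of labels, |sigma| <= k(d+1)n."""
    sigma = []
    for i, A in enumerate(sets, start=1):
        for v in A:                                  # sub-execution sigma_{i,l}
            for j, bit in enumerate(v, start=1):
                if bit == 0:
                    sigma.append(("a", i, j))        # b_{i,j,l} = a_{i,j} iff v[j] = 0
            sigma.append(("#", i))                   # separator #_i after each vector
    return sigma

def independence(k, d):
    """All pairs of labels from different partitions Sigma_i, Sigma_j (i != j)."""
    part = lambda i: [("a", i, j) for j in range(1, d + 1)] + [("#", i)]
    return {(x, y) for i in range(1, k + 1) for jj in range(1, k + 1) if i != jj
            for x in part(i) for y in part(jj)}
```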
**Correctness.** We now argue for the correctness of our reduction.
\((\Rightarrow)\) Consider \(v_{t_{1}}^{(1)}\in A_{1},v_{t_{2}}^{(2)}\in A_{2},\ldots,v_{t_{k}}^{(k)}\in A_{k}\) such that \(v_{t_{1}}^{(1)}\cdot v_{t_{2}}^{(2)}\cdot\ldots\cdot v_{t_{k}}^{(k)}=0\). This means that for every \(j\in\{1,\ldots,d\}\), there is at least one \(i\in\{1,\ldots,k\}\) such that \(v_{t_{i}}^{(i)}[j]=0\). The witness execution \(\sigma^{\prime}\) can now be described as follows:
\[\sigma^{\prime}=\sigma^{\prime}_{\text{pre}}\circ\sigma^{\prime}_{\text{mid}} \circ\sigma^{\prime}_{\text{post}}.\]
The prefix \(\sigma^{\prime}_{\text{pre}}\) (resp. the suffix \(\sigma^{\prime}_{\text{post}}\)) is obtained by successively concatenating the first \(t_{i}-1\) (resp. last \(n-t_{i}\)) sub-executions of each \(\sigma_{i}\):
\[\sigma^{\prime}_{\text{pre}} = \sigma_{1}[1\ldots t_{1}-1]\circ\ldots\circ\sigma_{k}[1\ldots t_{ k}-1]\] \[\sigma^{\prime}_{\text{post}} = \sigma_{1}[t_{1}+1\ldots n]\circ\ldots\circ\sigma_{k}[t_{k}+1 \ldots n]\]
where \(\sigma_{i}[p\ldots q]=\sigma_{i,p}\circ\sigma_{i,p+1}\cdots\circ\sigma_{i,q}\). The sub-execution \(\sigma^{\prime}_{\text{mid}}\) is obtained by successive concatenation of sub-executions corresponding to the \(j^{\text{th}}\) components of each vector \(v_{t_{i}}^{(i)}\):
\[\sigma^{\prime}_{\text{mid}}=\sigma^{\prime}_{\text{mid},1}\circ\sigma^{\prime}_{\text{mid},2}\circ\ldots\circ\sigma^{\prime}_{\text{mid},d}\circ\#_{1}\circ\#_{2}\cdots\#_{k}\]
where for each \(1\leq j\leq d\),
\[\sigma^{\prime}_{\text{mid},j}=b_{1,j,t_{1}}\circ b_{2,j,t_{2}}\circ\ldots\circ b _{k,j,t_{k}}\]
First, observe that \(\sigma^{\prime}\) does not flip the order of any two events that are dependent. Thus, \(\sigma^{\prime}\equiv_{\mathcal{I}}\sigma\). Second, \(\sigma^{\prime}_{\text{mid}}\) has a prefix (namely the one that does not include the residual \(\#_{i}\) symbols) that matches the regular expression \(\{\mathtt{a}_{1,1},\ldots,\mathtt{a}_{k,1}\}^{+}\ldots\{\mathtt{a}_{1,d},\ldots,\mathtt{a}_{k,d}\}^{+}\). This is because for every \(j\in\{1,\ldots,d\}\), there is at least one \(i\in\{1,\ldots,k\}\) such that \(v^{(i)}_{t_{i}}[j]=0\). Thus, \(\sigma^{\prime}\in L\).
\((\Leftarrow)\) Consider an equivalent execution \(\sigma^{\prime}\equiv_{\mathcal{I}}\sigma\) such that \(\sigma^{\prime}\in L\). Clearly, \(\sigma^{\prime}\) must contain a substring \(\sigma^{\prime}_{\text{mid}}\) which belongs to the language \(\{\mathtt{a}_{1,1},\ldots,\mathtt{a}_{k,1}\}^{+}\ldots\{\mathtt{a}_{1,d},\ldots,\mathtt{a}_{k,d}\}^{+}\). Let \(\sigma^{\prime}_{\text{mid},i}\) be the subsequence of \(\sigma^{\prime}_{\text{mid}}\) obtained by projecting it to \(\Sigma_{i}\). We remark that for each such \(i\), there must be a common sub-execution \(\sigma_{i,t_{i}}\) of \(\sigma\) such that the events of \(\sigma^{\prime}_{\text{mid},i}\) belong to \(\sigma_{i,t_{i}}\) (recall that \(\sigma_{i,l}\) corresponds to the vector \(v^{(i)}_{l}\in A_{i}\)). This is because any two sub-executions \(\sigma_{i,\beta}\) and \(\sigma_{i,\beta^{\prime}}\) are separated by the separator \(\#_{i}\), which is dependent with each symbol in \(\sigma_{i,\beta}\) and \(\sigma_{i,\beta^{\prime}}\). We can thus choose indices \(t_{1},t_{2},\ldots,t_{k}\) such that the events of \(\sigma^{\prime}_{\text{mid},i}\) belong to the unique sub-execution \(\sigma_{i,t_{i}}\); if for some \(i\), \(\sigma^{\prime}_{\text{mid}}\) contains no symbol in \(\Sigma_{i}\), then \(t_{i}\) is chosen arbitrarily. We now argue that the vectors \(v^{(1)}_{t_{1}}\in A_{1},\ldots,v^{(k)}_{t_{k}}\in A_{k}\) are orthogonal. This follows easily since, for each dimension \(j\in\{1,\ldots,d\}\), \(\sigma^{\prime}_{\text{mid}}\) contains at least one symbol \(\mathtt{a}_{i,j}\) for some \(i\in\{1,\ldots,k\}\), which implies that \(v^{(i)}_{t_{i}}[j]=0\).
**Time Complexity.** Recall that \(|\Sigma|=k(d+1)\), \(|\mathcal{I}|=k(k-1)(d+1)^{2}/2\), \(\alpha=k\), \(|r|=\mathsf{poly}(d)\), and \(|\sigma|\leq kn(d+1)\). The time taken to construct \(\sigma\) is also \(O(n\cdot k\cdot(d+1))\). If predictive trace monitoring against \(L\) can be decided in time \(|\sigma|^{\alpha-\epsilon}\cdot\mathsf{poly}(|\Sigma|\cdot|r|)\leq(kn(d+1))^{k-\epsilon}\cdot\mathsf{poly}(d)=n^{k-\epsilon}\cdot\mathsf{poly}(d)\), then \(k\)-OV with set size \(n\) can also be solved in time \(n^{k-\epsilon}\cdot\mathsf{poly}(d)\), which refutes SETH (since SETH implies the absence of such an algorithm).
## 4. Pattern and Generalized Pattern Languages
In this section, we describe the class of _generalized pattern languages_, which are the central object of our study. Later, in Section 4.1, we show that for this class of languages, the predictive trace monitoring problem becomes highly tractable, in that it admits a constant-space streaming algorithm that works in linear time.
Definition 2 (Pattern Languages).: Let \(\Sigma\) be an alphabet. A language \(L\) is said to be a pattern language of dimension \(d\in\mathbb{N}\) over \(\Sigma\), if it is of the form
\[L=\Sigma^{*}a_{1}\Sigma^{*}\ldots\Sigma^{*}a_{d}\Sigma^{*},\]
where \(a_{1},a_{2},\ldots,a_{d}\in\Sigma\); we use \(\mathsf{Patt}_{a_{1},\ldots,a_{d}}\) to denote the above pattern language. The class of all pattern languages will be denoted by \(\mathsf{PATT}\).
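Non-predictive membership in a pattern language is just a subsequence test; for intuition, here is a two-line Python sketch of ours:

```python
def in_pattern(word, pattern):
    """w in Patt_{a_1,...,a_d} iff a_1...a_d occurs as a subsequence of w."""
    it = iter(word)
    return all(a in it for a in pattern)   # 'a in it' consumes the iterator up to the match

print(in_pattern("xaybz", "ab"))  # True
print(in_pattern("xbyaz", "ab"))  # False
```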
The above definition of pattern languages has been inspired by their potential target application in dynamic analysis for detecting bugs in concurrent and distributed applications. Over the years, a large class of concurrency bug detection techniques have been proposed that enhance the effectiveness of otherwise exhaustive testing by leveraging the hypothesis that "_many bugs in programs depend on the precise ordering of a small number of events..._" (Chistikov et al., 2016). The additional knowledge of 'small bug depth' can then be leveraged to simplify testing, in that only a small number of behaviors need to be analyzed; this has been key to tackling the interleaving explosion problem encountered during testing of concurrent software (Burckhardt et al., 2010; Musuvathi and Qadeer, 2007). In our setting, where the events are labeled from an alphabet (denoting, say, instructions, function entry and exit points, synchronization operations or
even load and store operations), the natural analog of this hypothesis asks to determine if there is some small \(d\) and labels \(a_{1},\ldots,a_{d}\) so that some \(d\) events \(e_{1},e_{2},\ldots,e_{d}\) (not necessarily observed in this order) with these labels, \(\operatorname{lab}(e_{1})=a_{1},\ldots,\operatorname{lab}(e_{d})=a_{d}\), can be organized in the precise order where \(e_{i}\) is followed by \(e_{i+1}\) for each \(i<d\). This is precisely the predictive monitoring question against the pattern language \(\operatorname{\mathtt{Patt}}_{a_{1},\ldots,a_{d}}=\Sigma^{*}a_{1}\Sigma^{*}\ldots\Sigma^{*}a_{d}\Sigma^{*}\). As a final remark, observe that the language \(L_{\text{fail}}\) from Example 2.1 is indeed a pattern language.
In the following, we define a more general class of languages, thereby enhancing the class of specifications we consider.
Definition 3 (Generalized Pattern Languages).: Let \(\Sigma\) be an alphabet. A language \(L\subseteq\Sigma^{*}\) is said to be a generalized pattern language if it is a finite union \(L=L_{1}\cup L_{2}\cdots\cup L_{m}\), where for each \(L_{i}\) (\(1\leq i\leq m\)), we have (a) \(L_{i}=\varnothing\), or (b) \(L_{i}=\{\varepsilon\}\), or (c) \(L_{i}\in\operatorname{\mathtt{PATT}}\) is a pattern language.
### Properties of Pattern and Generalized Pattern Languages
In this section, we study language theoretic properties of our proposed class of languages.
**Topological Characterization.** Pattern languages and generalized pattern languages are clearly regular languages. Here, we provide a finer characterization -- they are also star-free languages. Recall that the class of star-free languages over \(\Sigma\) are those that can be constructed by using \(\varnothing\), \(\{\varepsilon\}\), \(\{a\}\) (where \(a\in\Sigma\) is some symbol), and inductively composing them using concatenation, union, intersection and complementation. We remark that Theorem 3.2 shows that the complete class of star-free regular languages is not amenable to efficient predictive trace monitoring.
Proposition 4.1.: Every generalized pattern language is a star-free regular language.
Let us now examine the closure properties of our proposed class of languages.
**Closure Under Union.** Let us first consider closure under union for \(\operatorname{\mathtt{PATT}}\). Consider the two patterns \(\operatorname{\mathtt{Patt}}_{a}\) and \(\operatorname{\mathtt{Patt}}_{b}\), where \(a\neq b\in\Sigma\). We can show by contradiction that there is no pattern \(\operatorname{\mathtt{Patt}}_{c_{1},\ldots,c_{k}}\) such that \(\operatorname{\mathtt{Patt}}_{c_{1},\ldots,c_{k}}=\operatorname{\mathtt{Patt}}_{a}\cup\operatorname{\mathtt{Patt}}_{b}\). Suppose on the contrary that this holds. Then at least one of \(a,b\) must appear among \(c_{1},\ldots,c_{k}\), as otherwise the word \(c_{1}\cdots c_{k}\) belongs to \(\operatorname{\mathtt{Patt}}_{c_{1},\ldots,c_{k}}\) but to neither \(\operatorname{\mathtt{Patt}}_{a}\) nor \(\operatorname{\mathtt{Patt}}_{b}\). Suppose, without loss of generality, that \(b\in\{c_{1},\ldots,c_{k}\}\). Then the single-letter word \(a\) belongs to \(\operatorname{\mathtt{Patt}}_{a}\), but \(a\notin\operatorname{\mathtt{Patt}}_{c_{1},\ldots,c_{k}}\) since every string in \(\operatorname{\mathtt{Patt}}_{c_{1},\ldots,c_{k}}\) must contain \(b\), thereby giving us a contradiction. On the other hand, the class of generalized pattern languages is clearly closed under finite union.
**Closure Under Intersection.** Consider two pattern languages \(L_{1}=\operatorname{\mathtt{Patt}}_{a_{1},\ldots,a_{d}}\) and \(L_{2}=\operatorname{\mathtt{Patt}}_{b_{1},\ldots,b_{l}}\). Then, a word \(w\in L_{1}\cap L_{2}\) must have \(a_{1},\ldots,a_{d}\) as a subsequence as well as \(b_{1},\ldots,b_{l}\) as a subsequence. Let us use the notation \(u\odot v\) to denote the set of minimal words that contain both \(u\) and \(v\) as subsequences. Let us use the notation \(\sqsubseteq\) (\(\sqsubset\)) to denote the (strict) subsequence relation between two words.
\[u\odot v=\{w\in\Sigma^{*}\,|\,u\sqsubseteq w,\,v\sqsubseteq w,\,\forall w^{ \prime}\sqsubset w,u\not\sqsubset w^{\prime}\text{ or }v\not\sqsubset w^{\prime}\}\]
Then, we remark that \(L_{1}\cap L_{2}=\bigcup_{u\in(a_{1}\cdots a_{d})\odot(b_{1}\cdots b_{l})}\operatorname{\mathtt{Patt}}_{u}\). The class of generalized pattern languages is closed under intersection, since intersection distributes over union.
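The merge operation \(u\odot v\) can be computed explicitly; below is a Python sketch of ours that generates candidate superwords by interleaving (optionally fusing equal letters) and then keeps only the subsequence-minimal ones. A single-deletion check suffices for minimality, since any strict subsequence is reachable by deleting one letter at a time.

```python
def is_subseq(u, w):
    it = iter(w)
    return all(a in it for a in u)

def merges(u, v):
    """Candidate superwords of u and v: every minimal superword is such a merge."""
    if not u: return {v}
    if not v: return {u}
    out = {u[0] + w for w in merges(u[1:], v)} | {v[0] + w for w in merges(u, v[1:])}
    if u[0] == v[0]:
        out |= {u[0] + w for w in merges(u[1:], v[1:])}  # fuse the common letter
    return out

def shuffle_merge(u, v):
    """u ⊙ v: the subsequence-minimal superwords."""
    def minimal(w):
        return not any(is_subseq(u, w[:i] + w[i+1:]) and is_subseq(v, w[:i] + w[i+1:])
                       for i in range(len(w)))
    return {w for w in merges(u, v) if minimal(w)}

print(shuffle_merge("ab", "ba"))  # {'aba', 'bab'}
```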
**Closure under Complementation.** The class of pattern languages is not closed under complementation. Consider the pattern language \(L=\operatorname{\mathtt{Patt}}_{a}\). Observe that \(L^{c}=(\Sigma\setminus\{a\})^{*}\), which cannot be a pattern language. Assume on the contrary that there is a pattern language \(\operatorname{\mathtt{Patt}}_{b_{1},\ldots,b_{l}}=L^{c}\); observe that \(a\cdot b_{1}\cdot\ldots\cdot b_{l}\in\operatorname{\mathtt{Patt}}_{b_{1},\ldots,b_{l}}\) while \(a\cdot b_{1}\cdot\ldots\cdot b_{l}\notin(\Sigma\setminus\{a\})^{*}\), giving us a contradiction right away. In fact, the language \(L^{c}\) here is different from \(\{\varepsilon\}\) and \(\varnothing\), and is also not a finite union of pattern languages (each of which would otherwise contain a string containing \(a\)). Thus, even the class of generalized pattern languages is not closed under complementation.
**Closure Under Concatenation.** Consider two pattern languages \(L_{1}=\operatorname{\mathtt{Patt}}_{a_{1},\ldots,a_{d}}\) and \(L_{2}=\operatorname{\mathtt{Patt}}_{b_{1},\ldots,b_{l}}\). Observe that the language \(L=\operatorname{\mathtt{Patt}}_{a_{1},\ldots,a_{d},b_{1},\ldots,b_{l}}\) is indeed the concatenation of \(L_{1}\) and \(L_{2}\), i.e., \(L=L_{1}\circ L_{2}\), since the two adjacent \(\Sigma^{*}\) blocks in the middle collapse into one. As a result, the class \(\mathsf{PATT}\) is closed under concatenation. It is easy to see that the class of generalized pattern languages is also closed under concatenation.
**Closure Under Kleene Star.** Pattern languages are certainly not closed under Kleene star because \(\varepsilon\notin\mathsf{Patt}_{a}\), but \(\varepsilon\in(\mathsf{Patt}_{a})^{*}\). However, the class of generalized pattern languages is closed under the Kleene star operation, because of the observation that for a pattern language \(L\in\mathsf{PATT}\), we have \(L^{*}=L\cup\{\varepsilon\}\) which can be proved inductively.
To summarize, we have the following closure properties. The formal proofs are presented in Appendix A.
**Theorem 4.1** (Closure properties of Pattern Languages).: The class of pattern languages is closed under finite concatenation, but not under union, intersection, complement and Kleene star.
**Theorem 4.2** (Closure properties of Generalized Pattern Languages).: The class of generalized pattern languages is closed under finite union, finite intersection, finite concatenation, and Kleene star but not under complement.
## 5. Predictive Monitoring against Generalized Pattern Languages
In Section 3.2, we established that the problem of predictive trace monitoring against regular languages does not admit an algorithm that runs faster than \(O(n^{\alpha})\), assuming \(\mathsf{SETH}\) holds. In this section, we show that the case of pattern languages and generalized pattern languages (defined in Section 4) admits a more efficient -- constant-space, linear-time, streaming -- algorithm.
### Overview of the algorithm
Recall that a generalized pattern language is a union of pattern languages, \(\{\varepsilon\}\) or \(\varnothing\), and predictive monitoring against the latter two can trivially be performed in constant space. We instead show a constant-space algorithm for predictive monitoring against pattern languages, which would essentially imply a constant-space algorithm for the larger class of generalized pattern languages. Our algorithm (for pattern languages) is based on several crucial insights: here we briefly discuss them and give details in subsequent sections. In order to give intuitions behind these insights, we will define some subproblems (Problem 5.1, Problem 5.2). We fix the concurrent alphabet \((\Sigma,I)\) and the pattern language \(\mathsf{Patt}_{a_{1},\ldots,a_{d}}\) of dimension \(d\). We also fix the input execution \(\sigma\) over \((\Sigma,I)\). Let us now define some useful notations.
Let \(\tau=\langle e_{1},\ldots,e_{m}\rangle\) be a tuple of events of \(\sigma\) such that \(e_{1}<^{\sigma}\ldots<^{\sigma}e_{m}\). Let \(b_{1},\ldots,b_{m}\) be a sequence of \(m\) labels from \(\Sigma\). \(\tau\) is said to be a _shuffle of_\(b_{1},\ldots,b_{m}\) if \(\langle\text{lab}(e_{1}),\ldots,\text{lab}(e_{m})\rangle\) is a permutation of \(\langle b_{1},\ldots,b_{m}\rangle\). \(\tau\) is said to be a _partial candidate_ tuple with respect to our pattern \(\mathsf{Patt}_{a_{1},\ldots,a_{d}}\), if it is a shuffle of some subsequence of \(a_{1},\ldots,a_{d}\). \(\tau\) is said to be a _(complete) candidate_ tuple with respect to our pattern \(\mathsf{Patt}_{a_{1},\ldots,a_{d}}\), if it is a shuffle of \(a_{1},\ldots,a_{d}\).
**Definition 4** (Partial and Complete Admissible Tuples).: Let \(\tau\) be a tuple of events that is a partial candidate tuple. \(\tau\) is said to be admissible with respect to \(\mathsf{Patt}_{a_{1},\ldots,a_{d}}\) if there is an execution \(\sigma^{\prime}\equiv_{\mathcal{I}}\sigma\) and a permutation \(\tau^{\prime}=\langle e_{i_{1}},\ldots,e_{i_{k}}\rangle\) of \(\tau\), such that \(\langle\text{lab}(e_{i_{1}}),\ldots,\text{lab}(e_{i_{k}})\rangle\) is a subsequence of \(\langle a_{1},\ldots,a_{d}\rangle\) and \(\tau^{\prime}\) is a subsequence of \(\sigma^{\prime}\). As a special case, \(\tau\) is a complete admissible tuple if it is a _complete_ candidate and also, admissible.
We remark that the admissibility of a tuple \(\tau\), if true, must be witnessed by a special permutation of \(\tau\). Assume that \(\tau\) is a partial candidate tuple, which is a shuffle of \(a_{j_{1}},\ldots,a_{j_{m}}\), a subsequence of \(a_{1},\ldots,a_{d}\). Then, there is a _special_ permutation \(\tau^{\dagger}=\langle e_{i_{1}^{\dagger}},\ldots,e_{i_{m}^{\dagger}}\rangle\) of \(\tau\), such that \(\langle\text{lab}(e_{i_{1}^{\dagger}}),\ldots,\text{lab}(e_{i_{m}^{\dagger}})\rangle=\langle a_{j_{1}},\ldots,a_{j_{m}}\rangle\); more importantly, if \(\tau\) is admissible, then for every \(\sigma^{\prime}\equiv_{\mathcal{I}}\sigma\) that witnesses the admissibility of \(\tau\), \(\tau^{\dagger}\) is a subsequence of \(\sigma^{\prime}\). Observe that this special permutation can be obtained by sorting \(\tau\) according to \(\langle a_{j_{1}},\ldots,a_{j_{m}}\rangle\) while additionally ensuring that events with the same label do not get flipped. Henceforth, we will use the notation \(\mathsf{sort}(\tau,\langle a_{j_{1}},\ldots,a_{j_{m}}\rangle)\) to denote this special permutation.
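For concreteness, a small Python sketch of this stable sort (the encoding of events as pairs and the helper name are ours):

```python
def sort_tuple(tau, target):
    """tau: list of (event_index, label) in execution order; target: label list.
    Returns the special permutation: labels read as target, and same-label
    events keep their execution order (stability)."""
    pool, out = list(tau), []
    for a in target:
        i = next(i for i, (_, lab) in enumerate(pool) if lab == a)
        out.append(pool.pop(i))       # earliest remaining event labelled a
    return out

# Events 3 and 7 are labelled 'b','a'; the target order ('a','b') flips them:
print(sort_tuple([(3, 'b'), (7, 'a')], ['a', 'b']))  # [(7, 'a'), (3, 'b')]
```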
The first subproblem we consider pertains to checking if a given sequence of events is admissible.
**Problem 5.1**.: Given an execution \(\sigma\) and a candidate tuple \(\tau\) of events from \(\sigma\), check whether \(\tau\) is admissible with respect to \(\mathsf{Patt}_{a_{1},\ldots,a_{d}}\).
A naive approach to solving this problem would enumerate all \(O(|\sigma|!)\) linearizations and check membership in \(\mathsf{Patt}_{a_{1},\ldots,a_{d}}\) for each of them, in total time \(O(|\sigma|\cdot|\sigma|!)\), which is prohibitive. A different and better approach instead relies on the partial order view of Mazurkiewicz traces. Here, one considers the graph \(G_{\sigma}=(V_{\sigma},E_{\sigma})\) corresponding to the partial order \(\leq_{\mathcal{I}}^{\sigma}\), where \(V_{\sigma}=\mathsf{Events}_{\sigma}\) and \(E_{\sigma}\) captures the immediate edges of \(\leq_{\mathcal{I}}^{\sigma}\). One obtains a graph \(G_{\sigma}^{\prime}\) by adding the additional edges \(\{(e_{i_{1}^{\dagger}},e_{i_{2}^{\dagger}}),\ldots,(e_{i_{d-1}^{\dagger}},e_{i_{d}^{\dagger}})\}\) derived from the special permutation \(\langle e_{i_{1}^{\dagger}},\ldots,e_{i_{d}^{\dagger}}\rangle=\mathsf{sort}(\tau,\langle a_{1},\ldots,a_{d}\rangle)\). This reduces the admissibility of \(\tau\) to the acyclicity of \(G_{\sigma}^{\prime}\), which can be checked in time \(O(|\sigma|)\) since the degree of each vertex is constant. However, the space utilization of this graph-theoretic algorithm is also linear. In Section 5.2, we will show that Problem 5.1 can be solved using a streaming constant-space algorithm that uses _after sets_ (Definition 6). After sets succinctly capture the causality induced by the corresponding partial order and can be used to check the admissibility of a candidate tuple in a streaming fashion.
The second subproblem asks whether there exists an admissible tuple in the given execution.
**Problem 5.2**.: Given an execution \(\sigma\), check whether there is a candidate tuple \(\tau\) of events from \(\sigma\), such that \(\tau\) is admissible with respect to \(\mathsf{Patt}_{a_{1},\ldots,a_{d}}\).
In other words, Problem 5.2 asks whether there is a \(\sigma^{\prime}\equiv_{\mathcal{I}}\sigma\) such that \(\mathrm{lab}(\sigma^{\prime})\in\mathsf{Patt}_{a_{1},\ldots,a_{d}}\). How do we solve Problem 5.2? Again, a naive approach would enumerate all \(O(|\sigma|^{d})\) candidate tuples and check their admissibility by repeatedly invoking our algorithm for Problem 5.1. This is again prohibitive, in terms of both its time and space usage. On the other hand, in Section 5.3, we design a linear-time streaming algorithm for Problem 5.2 whose overall space usage is constant. The high-level insights behind this algorithm are as follows. First, our algorithm runs in a streaming fashion, and tracks not just complete candidate tuples, but also _partial_ tuples which can potentially be extended to complete candidate tuples. Second, our algorithm is _eager_, in that it determines whether a partial tuple is admissible, i.e., if it can be extended to a complete admissible tuple, and proactively discards all other partial tuples. The problem of checking if a partial tuple is admissible can, in fact, be solved in constant space using an algorithm similar to Algorithm 1 that solves Problem 5.1. Nevertheless, even the number of partially admissible tuples is still large (\(O(|\sigma|^{d})\)). Our third and most important insight, formalized in Lemma 5.3, tackles this problem -- we only need to track constantly many partially admissible tuples, those that are maximum in some precise sense.
Equipped with the above insights, the high-level description of our algorithm for predictive monitoring against pattern languages is the following. The algorithm processes one event at a time and tracks the maximum partially admissible tuple of each kind. When processing a new event of the given execution, it checks whether any existing partial tuple can be extended to another (partial or complete) admissible tuple. If at any point a complete admissible tuple can be constructed, the algorithm terminates and declares success. Otherwise, there is no complete admissible tuple in the entire execution.
In Section 5.2, we discuss the details of how we solve Problem 5.1 in constant space. In Section 5.3, we present the details of our overall algorithm for predictive trace monitoring against pattern regular languages (Problem 5.2).
### Checking Admissibility of Candidate Tuples
Our first observation towards solving Problem 5.1 is that in order to check the admissibility of a candidate tuple \(\tau\), it suffices to examine only pairs of events that appear in \(\tau\), which, thankfully, is a local property of \(\tau\). We formalize this using the notion of a locally admissible tuple. Intuitively, \(\tau\) is locally admissible when for every pair of events \((e,e^{\prime})\) in \(\tau\), if the order in which \(e\) and \(e^{\prime}\) appear in \(\tau\) is different from their order in the target \(\tau^{\dagger}=\mathsf{sort}(\tau,\langle a_{1},\ldots,a_{d}\rangle)\), then they are incomparable according to \(\leq_{\mathcal{I}}^{\sigma}\). This observation can be generalized to all partial candidate tuples. Thus we give the following definition.
**Definition 5** (Locally Admissible Tuple).: Let \(\tau=\langle e_{1},\ldots,e_{m}\rangle\) be a partial candidate tuple of an execution \(\sigma\) with respect to \(\mathsf{Patt}_{a_{1},\ldots,a_{d}}\), which is a shuffle of \(a_{j_{1}},\ldots,a_{j_{m}}\), a subsequence of \(a_{1},\ldots,a_{d}\). Let \(\langle e_{i_{1}^{\dagger}},\ldots,e_{i_{m}^{\dagger}}\rangle=\mathsf{sort}(\tau,\langle a_{j_{1}},\ldots,a_{j_{m}}\rangle)\). We say that \(\tau\) is partially locally admissible with respect to \(\mathsf{Patt}_{a_{1},\ldots,a_{d}}\) if for all \(i_{r}^{\dagger}\neq i_{s}^{\dagger}\), we have \(s<r\Rightarrow e_{i_{r}^{\dagger}}\not\leq_{\mathcal{I}}^{\sigma}e_{i_{s}^{\dagger}}\). As a special case, \(\tau\) is completely locally admissible if, in addition, it is a complete candidate.
Let us observe a simple intuitive implication of Definition 5. Consider again the directed acyclic graph \(G_{\sigma}\) of the induced partial order \(\leq_{\mathcal{I}}^{\sigma}\). The local admissibility of a tuple \(\tau=\langle e_{1},\ldots,e_{m}\rangle\) essentially means that for every two vertices \(e_{i_{r}^{\dagger}},e_{i_{s}^{\dagger}}\) that need to be flipped, the addition of the edge \((e_{i_{r}^{\dagger}},e_{i_{s}^{\dagger}})\) alone does not introduce any cycle in this graph.
In the following, we establish a clear connection between admissibility and local admissibility.
**Lemma 5.1**.: Let \(\tau=\langle e_{1},\ldots,e_{m}\rangle\) be a partial candidate tuple of an execution \(\sigma\) with respect to \(\mathsf{Patt}_{a_{1},\ldots,a_{d}}\), which is a shuffle of \(a_{j_{1}},\ldots,a_{j_{m}}\), a subsequence of \(a_{1},\ldots,a_{d}\). We have,
\(\tau\) is admissible \(\iff\)\(\tau\) is locally admissible
The above result is interesting, yet subtle. The intuitive implication of this lemma is that if each flipped edge \((e_{i_{r}^{\dagger}},e_{i_{s}^{\dagger}})\) individually does not introduce any cycle in \(G_{\sigma}\), then, in fact, the simultaneous addition of all such edges also does not introduce a cycle.
While the criterion of local admissibility and its equivalence with the admissibility are important ingredients towards our solution for Problem 5.1, it is not immediately amenable to a space-efficient algorithm. In particular, we cannot afford to construct a linear-sized graph and check the reachability in order to decide local admissibility. Fortunately, our next ingredient (Definition 6) effectively tackles this challenge.
**Definition 6**.: For an execution \(\sigma\) and an event \(e\in\mathsf{Events}_{\sigma}\), the after set of \(e\) in a prefix \(\rho\) of \(\sigma\) is defined as \(\mathsf{After}_{\rho}(e)=\{\operatorname{lab}(f)\mid f\in\mathsf{Events}_{ \rho}\text{ and }e\leq_{\mathcal{I}}^{\sigma}f\}\).
Observe that, for any event \(e\) and prefix \(\rho\), \(\mathsf{After}_{\rho}(e)\subseteq\Sigma\), thus this set is of constant size. A direct consequence of Definition 6 is that after sets are sufficient to check causality between two events.
**Proposition 5.1**.: Let \(\sigma\) be an execution and \(\rho\) be a prefix of \(\sigma\) that ends at an event \(f\). Let \(e\) be an event such that \(e<^{\rho}f\). We have, \(\operatorname{lab}(f)\in\mathsf{After}_{\rho}(e)\)\(\iff\)\(e\leq_{\mathcal{I}}^{\sigma}f\).
Our algorithm maintains after sets of the events participating in the candidate tuple. Observe that the after set of an event \(e\) is parameterized on a prefix \(\rho\) of \(\sigma\). The next crucial observation, formally stated in Lemma 5.2, is that this set can be updated incrementally.
**Lemma 5.2**.: Let \(\sigma\) be an execution and \(\rho,\rho\circ f\) be prefixes of \(\sigma\) for some event \(f\). Let \(e\) be an event in \(\rho\). We have
\[\mathsf{After}_{\rho\circ f}(e)=\begin{cases}\mathsf{After}_{\rho}(e)\cup \{\operatorname{lab}(f)\}&\text{ if }\exists a\in\mathsf{After}_{\rho}(e)\text{ s.t. }(a,\operatorname{lab}(f))\notin\mathcal{I},\\ \mathsf{After}_{\rho}(e)&\text{ otherwise}.\end{cases}\]
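For concreteness, here is a minimal Python sketch of this incremental update, assuming labels are plain strings and the independence relation is given as a set of unordered pairs (the encoding and helper names are ours):

```python
def dependent(a, b, indep):
    return a == b or ((a, b) not in indep and (b, a) not in indep)

def update_after_sets(after, lab_f, indep):
    """Lemma 5.2: extend every tracked after set when a new event f is observed."""
    for A in after.values():
        if any(dependent(a, lab_f, indep) for a in A):
            A.add(lab_f)

def track(after, e, lab_e):
    """Start tracking e: right after e executes, After(e) = {lab(e)}."""
    after[e] = {lab_e}
```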
Equipped with Lemma 5.1, Proposition 5.1, and Lemma 5.2, we can now design the constant-space streaming algorithm for Problem 5.1, presented in Algorithm 1. The algorithm first constructs the target tuple \(\tau^{\dagger}\), which is uniquely determined from \(\tau\). It then scans the execution, processing one event at a time in the order of the given execution. At each event, it updates the after sets of all the events occurring in \(\tau\) (line 4). Then, when we see an event from \(\tau\), we also proactively check for violations of local admissibility (line 7).
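The following Python sketch puts Lemma 5.1, Proposition 5.1 and Lemma 5.2 together into a streaming admissibility check in the spirit of Algorithm 1; the encoding (labels as strings, events as word indices) and the helper names are ours.

```python
def dependent(a, b, indep):
    return a == b or ((a, b) not in indep and (b, a) not in indep)

def target_rank(word, tau, target):
    """Rank of each tau-event in the special permutation sort(tau, target)."""
    pool, rank = list(tau), {}
    for r, a in enumerate(target):
        e = next(e for e in pool if word[e] == a)   # earliest remaining match
        pool.remove(e)
        rank[e] = r
    return rank

def admissible(word, indep, tau, target):
    """tau: increasing list of event indices forming a candidate tuple."""
    rank = target_rank(word, tau, target)
    after = {}                                      # after sets of tau-events
    for f, lab_f in enumerate(word):
        for A in after.values():                    # Lemma 5.2 update
            if any(dependent(a, lab_f, indep) for a in A):
                A.add(lab_f)
        if f in rank:
            # f must not be causally after any tau-event it precedes in the target
            if any(rank[f] < rank[e] and lab_f in after[e] for e in after):
                return False                        # Prop. 5.1: e <= f, but f must come first
            after[f] = {lab_f}
    return True

# With a,b independent, events 0 ('b') and 1 ('a') can be flipped to match ('a','b'):
print(admissible(['b', 'a'], {('a', 'b')}, [0, 1], ['a', 'b']))  # True
print(admissible(['b', 'a'], set(), [0, 1], ['a', 'b']))         # False
```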
### Checking Existence of Admissible Tuples
Recall that the naive algorithm, which enumerates all \(O(n^{d})\) \(d\)-tuples, consumes \(O(n^{d})\) time and space, even though we have a constant-space algorithm to check their admissibility. Our algorithm for solving Problem 5.2 instead runs in a streaming fashion and uses constant space. To do so, the algorithm tracks not only complete candidates but also partial ones. It incrementally extends those partial candidates as it processes new events. A distinct and crucial feature of our algorithm is that it eagerly discards those partial candidates that cannot be extended to a complete admissible tuple. In other words, it tracks only partially admissible tuples. Observe that Algorithm 1 can be easily adjusted to check partial admissibility.
Notice that, compared with the number of partial candidates, the number of partially admissible tuples is not significantly smaller; it is still \(O(n^{d})\). However, tracking only partially admissible tuples (instead of all partial candidates) allows us to bound the number of tuples we track. In particular, for a set of partially admissible tuples with the same label sequence, we can afford to track only the maximum element. We observe that the number of possible label sequences is the number of permutations of subsequences of \(a_{1},\ldots,a_{d}\), which is bounded by \(\sum_{m=0}^{d}\binom{d}{m}\,m!\), a constant. This is also the number of partially admissible tuples we track at any point while processing an execution. The next lemma (Lemma 5.3) shows that the maximum element exists and is unique.
Before we formally show the existence of the maximum element, let us fix some helpful notation. Let \(\tau_{1}=\langle e^{1}_{1},\ldots,e^{1}_{m}\rangle,\tau_{2}=\langle e^{2}_{1},\ldots,e^{2}_{m}\rangle\) be two sequences of events in the execution \(\sigma\), such that \(\operatorname{lab}(\tau_{1})=\operatorname{lab}(\tau_{2})\). We say that \(\tau_{1}\trianglelefteq\tau_{2}\) if for all \(1\leq i\leq m\), \(e^{1}_{i}<^{\sigma}e^{2}_{i}\) or \(e^{1}_{i}=e^{2}_{i}\). Also, we use \(\tau_{1}\triangledown\tau_{2}\) to denote the tuple \(\langle e_{1},\ldots,e_{m}\rangle\), where for \(1\leq i\leq m\), \(e_{i}=e^{2}_{i}\) if \(e^{1}_{i}<^{\sigma}e^{2}_{i}\), and \(e_{i}=e^{1}_{i}\) otherwise.
**Lemma 5.3** (Existence of the Maximum Element).: Let \(\sigma\) be an execution and \(\rho\) be a prefix of \(\sigma\). Let \(\langle b_{1},\ldots,b_{m}\rangle\) be a permutation of a subsequence of \(\langle a_{1},\ldots,a_{d}\rangle\). Let \(\partial\text{AdTpls}(\rho,\langle b_{1},\ldots,b_{m}\rangle)\) be the set of all partially admissible tuples \(\tau\) in \(\rho\), such that \(\operatorname{lab}(\tau)=\langle b_{1},\ldots,b_{m}\rangle\). Then the set \(\partial\text{AdTpls}(\rho,\langle b_{1},\ldots,b_{m}\rangle)\) has a unique maximum tuple \(\tau_{0}\) with respect to \(\trianglelefteq\). That is, for every \(\tau\in\partial\text{AdTpls}(\rho,\langle b_{1},\ldots,b_{m}\rangle)\), we have \(\tau\trianglelefteq\tau_{0}\).
The above lemma states the existence of a unique maximum element. Observe first that existence implies uniqueness: consider \(\tau_{1}\) and \(\tau_{2}\) that are both maximum; then \(\tau_{1}\trianglelefteq\tau_{2}\) and \(\tau_{2}\trianglelefteq\tau_{1}\), which, by definition, implies that \(\tau_{1}=\tau_{2}\). The proof of existence relies on two observations. First, the set \(\partial\text{AdTpls}(\rho,\langle b_{1},\ldots,b_{m}\rangle)\) is closed under \(\triangledown\), which is formalized in Lemma 5.4. Second, by definition, \(\tau=\tau_{1}\triangledown\tau_{2}\) implies that \(\tau_{1}\trianglelefteq\tau\) and \(\tau_{2}\trianglelefteq\tau\). Combining the above two observations, we conclude that \(\triangledown_{\tau\in\partial\text{AdTpls}(\rho,\langle b_{1},\ldots,b_{m}\rangle)}\tau\) is in \(\partial\text{AdTpls}(\rho,\langle b_{1},\ldots,b_{m}\rangle)\) and is also the unique maximum element.
**Lemma 5.4** (Closure Property under Join).: Let \(\sigma\) be an execution and \(\rho\) be a prefix of \(\sigma\). Let \(\langle b_{1},\ldots,b_{m}\rangle\) be a permutation of a subsequence of \(\langle a_{1},\ldots,a_{d}\rangle\). Let \(\partial\text{AdTpls}(\rho,\langle b_{1},\ldots,b_{m}\rangle)\) be the set of all partially admissible tuples \(\tau\) in \(\rho\), such that \(\text{lab}(\tau)=\langle b_{1},\ldots,b_{m}\rangle\). For all \(\tau_{1},\tau_{2}\in\partial\text{AdTpls}(\rho,\langle b_{1},\ldots,b_{m}\rangle)\), we have \(\tau_{1}\triangledown\tau_{2}\in\partial\text{AdTpls}(\rho,\langle b_{1},\ldots,b_{m}\rangle)\).
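Both \(\trianglelefteq\) and \(\triangledown\) are componentwise operations; a tiny sketch of ours over tuples of event indices (indices increase with execution order, so \(<^{\sigma}\) is just \(<\) on indices):

```python
def leq(t1, t2):
    """t1 ⊴ t2: every component of t1 is at or before the matching one of t2."""
    return all(a <= b for a, b in zip(t1, t2))

def join(t1, t2):
    """t1 ▽ t2: the componentwise-later tuple."""
    return tuple(max(a, b) for a, b in zip(t1, t2))
```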
Our final ingredient is the observation that for any \(\rho\), a prefix of the execution \(\sigma\), the maximum partially admissible tuple of each kind in \(\rho\) can be computed incrementally using the maximum ones computed in the immediate prefix of \(\rho\). In the following Lemma 5.5, we formally establish this observation. From now on, we use \(\max(S)\) to denote the maximum element of the set \(S\) of partially admissible tuples.
**Lemma 5.5**.: Let \(\sigma\) be an execution. Let \(\rho^{\prime}=\rho\circ f\) be a prefix of \(\sigma\), where \(\rho\) is also a prefix of \(\sigma\) and \(f\) is an event. Let \(\langle b_{1},\ldots,b_{m}\rangle\) be a permutation of a subsequence of \(\langle a_{1},\ldots,a_{d}\rangle\). We have

\[\max(\partial\text{AdTpls}(\rho\circ f,\langle b_{1},\ldots,b_{m}\rangle))=\begin{cases}\max(\partial\text{AdTpls}(\rho,\langle b_{1},\ldots,b_{m-1}\rangle))\circ f&\text{if }\mathrm{lab}(f)=b_{m}\text{ and }\max(\partial\text{AdTpls}(\rho,\langle b_{1},\ldots,b_{m-1}\rangle))\circ f\text{ is partially admissible,}\\ \max(\partial\text{AdTpls}(\rho,\langle b_{1},\ldots,b_{m}\rangle))&\text{otherwise.}\end{cases}\]
The above lemma ensures the feasibility of our streaming algorithm. Recall that we only keep track of the maximum element of each kind for a processed prefix. When processing a new event, the algorithm can easily compute the new maximum partially admissible tuples for the longer prefix from the old ones it tracks.
Equipped with Lemma 5.3 and Lemma 5.5, we now present a constant-space streaming algorithm for Problem 5.2 in Algorithm 2, where we omit some details about how to maintain after sets and check partial local admissibility. A complete algorithm which combines Algorithm 1 and Algorithm 2 will be discussed in Section 5.4. We use a map \(M\) to track maximum partially admissible tuples with their labels as keys, which is initialized as empty (line 1). The algorithm processes the events in the execution one after another. When processing an event whose label is in \(\langle a_{1},\ldots,a_{d}\rangle\), we update the maximum partially admissible tuples we track if needed (line 3), as stated in Lemma 5.5. If a complete admissible tuple, which is of length \(d\), is constructed (line 8), the algorithm returns "YES" to claim the existence of admissible tuples. On the other hand, when the whole execution has been processed and no complete admissible tuple has been constructed (line 10), it returns "NO" to refute the existence.
### Algorithm for Predictive Monitoring against Pattern Languages
We now present in Algorithm 3 the one-pass streaming and constant-space algorithm for solving predictive monitoring against pattern languages, where we merge Algorithm 1 into Algorithm 2. In this section, we first introduce the data structure used in the algorithm, and then illustrate how the algorithm works. In particular, we will elaborate on how to check partial admissibility using Algorithm 1 and the data structure, which was omitted in Section 5.3.
```
Input: \(\sigma\in\Sigma^{*}\)
Output: YES if there exists a \(\sigma^{\prime}\equiv_{\mathcal{I}}\sigma\) such that \(\mathsf{lab}(\sigma^{\prime})\in\mathsf{Patt}_{a_{1},\ldots,a_{d}}\); NO otherwise

1   Let (\(M\): permutations of subsequences of \(a_{1},\ldots,a_{d}\) \(\rightarrow\) sequences of after sets) be an empty map;
2   for \(f\in\sigma\) do
3       for each after set \(\mathbb{A}_{e}\) tracked in \(M\) do                      // update after sets
4           if \(\exists a\in\mathbb{A}_{e}\) s.t. \((a,\mathsf{lab}(f))\notin\mathcal{I}\) then \(\mathbb{A}_{e}\leftarrow\mathbb{A}_{e}\cup\{\mathsf{lab}(f)\}\);
5       if \(\mathsf{lab}(f)\in\langle a_{1},\ldots,a_{d}\rangle\) then
6           for each permutation \(\pi\) of a subsequence of \(a_{1},\ldots,a_{d}\) ending with \(\mathsf{lab}(f)\) do
7               Let \(\pi^{\prime}=\pi[:-1]\);
8               if \(M\) contains \(\pi^{\prime}\) then
9                   Let \(\mathbb{A}_{f}=\{\mathsf{lab}(f)\}\);
10                  Let \(\mathsf{PartCandTuple}=M(\pi^{\prime})\circ\mathbb{A}_{f}\);
11                  Let \(\mathsf{PartCandTuple}^{\dagger}=\mathsf{sort}(\mathsf{PartCandTuple},\pi)\);   // check partial admissibility
12                  if \(\forall\mathbb{A}_{e}\in M(\pi^{\prime})\): \(\mathbb{A}_{f}\) occurs after \(\mathbb{A}_{e}\) in \(\mathsf{PartCandTuple}^{\dagger}\) or \(\mathsf{lab}(f)\notin\mathbb{A}_{e}\) then
13                      \(M(\pi)\leftarrow M(\pi^{\prime})\circ\mathbb{A}_{f}\);
14      if \(M\) contains a key of length \(d\) then
15          return YES;
16  return NO;
```
**Algorithm 3** Constant-Space Algorithm for Predictive Monitoring against Pattern Languages
The data structure is similar to what is used in Algorithm 2. First, recall that our algorithm tracks the maximum partially admissible tuple for each kind of label sequence. So, for any processed execution \(\rho\), we use a map to store key-value pairs \((\langle b_{1},\ldots,b_{m}\rangle,\max(\partial\text{AdTpls}(\rho,\langle b_{1},\ldots,b_{m}\rangle)))\), where \(\langle b_{1},\ldots,b_{m}\rangle\) is a permutation of a subsequence of \(\langle a_{1},\ldots,a_{d}\rangle\). The difference is that we keep track of the after set of each event in the maximum partially admissible tuples, instead of the events themselves. It is sufficient to do so because the only information we need to check admissibility is the label and the after set of an event. Notice that the label information is implicitly stored in the key of the map.
In Algorithm 3, we first initialize the map as empty (line 1). What comes next is processing the events of the execution in a streaming fashion. When processing an event \(f\), we first update all after sets we track upon seeing \(f\) (lines 3-4). If \(\text{lab}(f)\) participates in \(\langle a_{1},\ldots,a_{d}\rangle\), we try to update the partially admissible tuples based on Lemma 5.5 (lines 5-15). Event \(f\) might extend a maximum partially admissible tuple whose label sequence ends with \(\text{lab}(f)\). From line 8 to line 13, we check whether a partially admissible tuple extended with \(f\) is still admissible. The algorithm first constructs the after set for \(f\), which contains only \(\text{lab}(f)\). Then it computes the target \(\text{PartCandTuple}^{\dagger}\) uniquely determined from the partial candidate tuple \(\text{PartCandTuple}\), which is the concatenation of a previously computed maximum partially admissible tuple and \(f\) (Lemma 5.5). It finally uses Algorithm 1 to determine the partial admissibility of \(\text{PartCandTuple}\). This can be done because we have maintained the label and the after set of each event.
Finally, similarly to Algorithm 2, when it witnesses a complete admissible tuple of length \(d\), the algorithm concludes the existence of a \(\sigma^{\prime}\equiv_{\mathcal{I}}\sigma\) such that \(\text{lab}(\sigma^{\prime})\in\mathsf{Patt}_{a_{1},\ldots,a_{d}}\), and answers "YES" to the predictive monitoring problem (line 14). Otherwise, after the whole execution has been processed, it refutes the existence (line 16).
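To tie Lemma 5.2, Lemma 5.5 and Algorithm 3 together, here is a compact end-to-end Python sketch of the streaming monitor, under simplifying assumptions of ours: labels are strings, the independence relation is a set of unordered pairs, and the pattern labels \(a_{1},\ldots,a_{d}\) are pairwise distinct (so the target rank of a tracked event is just the rank of its label). It mirrors the structure of Algorithm 3, without the vector-clock optimization discussed below.

```python
from itertools import permutations, combinations

def dependent(a, b, indep):
    return a == b or ((a, b) not in indep and (b, a) not in indep)

def monitor(word, indep, target):
    """target: tuple of pairwise-distinct labels a_1..a_d."""
    d = len(target)
    pos = {a: i for i, a in enumerate(target)}     # rank of each label in the pattern
    keys = {p for m in range(d + 1)
              for c in combinations(target, m)
              for p in permutations(c)}            # permutations of subsequences
    M = {(): []}                                   # sigma-order labels -> after sets
    for lab_f in word:
        for sets in M.values():                    # Lemma 5.2: update all after sets
            for A in sets:
                if any(dependent(a, lab_f, indep) for a in A):
                    A.add(lab_f)
        if lab_f in pos:
            old = dict(M)                          # extend from pre-f maxima (Lemma 5.5)
            for pi_prev, sets in old.items():
                pi = pi_prev + (lab_f,)
                if pi not in keys:
                    continue
                # local admissibility of the extension: f may not be causally
                # after any tracked event that must follow it in the pattern
                if all(not (pos[lab_f] < pos[pi_prev[p]] and lab_f in sets[p])
                       for p in range(len(pi_prev))):
                    M[pi] = sets + [{lab_f}]
                    if len(pi) == d:
                        return True                # complete admissible tuple found
    return False

# With a and b independent, "ba" can be reordered to match Patt_{a,b}:
print(monitor(["b", "a"], {("a", "b")}, ("a", "b")))  # True
print(monitor(["b", "a"], set(), ("a", "b")))         # False
```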
Fix \((\Sigma,I)\) and the pattern language \(L\). The following two theorems state the correctness and the complexity of our Algorithm 3, where we assume that \(d,|\Sigma|\in O(1)\).
**Theorem 5.1** (Correctness).: For any given execution \(\sigma\), Algorithm 3 declares "YES" iff there is a \(\sigma^{\prime}\equiv_{I}\sigma\) such that \(\sigma^{\prime}\in L\).
**Theorem 5.2** (Complexity).: Let \(\sigma\) be an execution. Then, Algorithm 3 runs in time \(O(|\sigma|)\) and uses constant space.
For a generalized pattern language \(L\), we have the following corollary from Theorem 4.2 and Theorem 5.2.
**Corollary 5.1**.: The problem of predictive trace monitoring against generalized pattern languages can be solved in linear time and constant space.
### Vector Clock Algorithm
In Algorithm 3, we store the after set for each event we track. Although the size of an after set is constant and bounded by \(|\Sigma|\), it can become too large when \(|\Sigma|\) is large. To overcome this problem, in this section we introduce another tool, vector clocks, which can be used to efficiently check the happens-before relation in the context of concurrent executions. Our resulting vector clock algorithm will be referred to as PatternTrack.
We first provide a brief introduction to vector timestamps and vector clocks (Fidge, 1991; Mattern, 1989). Recall that in the context of concurrent executions, every event has a label of the form \([t,op]\in\Sigma\), where \(t\) belongs to a set of threads \(\mathcal{T}\). The vector timestamp of an event \(e\) records the number of events executed by each thread that happened before \(e\). Formally, a vector timestamp is a mapping \(\text{VC}:\mathcal{T}\rightarrow\mathbb{N}\) from threads to natural numbers. Vector clocks are variables taking values from the space of vector timestamps. Three common operations on vector clocks are _join_, _comparison_, and _update_. We use \(\text{VC}_{1}\sqcup\text{VC}_{2}\) to denote the join of two vector timestamps \(\text{VC}_{1}\) and \(\text{VC}_{2}\), which is the vector timestamp \(\lambda t,\max(\text{VC}_{1}(t),\text{VC}_{2}(t))\). The partial order \(\sqsubseteq\) is defined as \(\text{VC}_{1}\sqsubseteq\text{VC}_{2}\) iff \(\forall t,\text{VC}_{1}(t)\leq\text{VC}_{2}(t)\). The minimum timestamp \(\lambda t,0\) is denoted by \(\bot\). For \(c\in\mathbb{N}\), the result of the update \(\text{VC}[t\to c]\) is the new vector timestamp \(\lambda u,\) if \(u=t\) then \(c\) else \(\text{VC}(u)\). Before presenting
the algorithm for computing the vector timestamp for a given event, we introduce a lemma that ensures the efficient comparison of the happens-before relation using vector clocks.
Lemma 5.6.: Let \(\sigma\) be an execution. Let \(e,f\) be two events in \(\sigma\). \(\mathsf{VC}_{e}\sqsubseteq\mathsf{VC}_{f}\iff e\leq_{\mathcal{I}}^{\sigma}f\).
The aforementioned lemma guarantees that our algorithm can utilize vector clocks to perform the same task, as long as we can compute the vector timestamp of each event in a streaming fashion and in constant space. Algorithm 4 demonstrates how to compute vector timestamps by maintaining a vector clock for each label \([t,op]\in\Sigma\) and each thread \(t\in\mathcal{T}\).
In Algorithm 4, each vector clock holds the vector timestamp of the last event with that label, or of the last event executed by that thread, after processing a prefix \(\rho\) of \(\sigma\). Initially, all vector clocks, for every label and every thread, are set to \(\bot\). When processing an event \(f\) executed by thread \(t\), the entry corresponding to \(t\) is first incremented in \(\mathsf{VC}_{t}\) (line 5). Then \(\mathsf{VC}_{t}\) is updated by joining it with all vector clocks corresponding to labels that are dependent on \(\mathsf{lab}(f)\), since the last events with these labels happen before \(f\) (line 7). Finally, we update the clock for \(\mathsf{lab}(f)\) with \(\mathsf{VC}_{t}\). The vector timestamp of \(f\) is precisely \(\mathsf{VC}_{\mathsf{lab}(f)}\).
```
Input: \(\sigma\in\Sigma^{*}\)

1   Let \(\mathsf{VC}_{[t,op]}=\bot\) for every \([t,op]\in\Sigma\);
2   Let \(\mathsf{VC}_{t}=\bot\) for every \(t\in\mathcal{T}\);
3   for \(f\in\sigma\) do
4       Let \([t,op]=\mathsf{lab}(f)\);
5       \(\mathsf{VC}_{t}\leftarrow\mathsf{VC}_{t}[t\rightarrow(\mathsf{VC}_{t}(t)+1)]\);
6       for \([t^{\prime},op^{\prime}]\in\Sigma\) such that \(([t,op],[t^{\prime},op^{\prime}])\notin\mathcal{I}\) do
7           \(\mathsf{VC}_{t}\leftarrow\mathsf{VC}_{t}\sqcup\mathsf{VC}_{[t^{\prime},op^{\prime}]}\);
8       \(\mathsf{VC}_{[t,op]}\leftarrow\mathsf{VC}_{t}\);
9       \(\mathsf{VC}_{f}\leftarrow\mathsf{VC}_{[t,op]}\);
```
**Algorithm 4** Constant-Space Streaming Algorithm for Computing Vector Timestamps
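A minimal Python sketch of this timestamping scheme and of the Lemma 5.6 comparison, assuming labels are (thread, op) pairs and the independence relation is a set of unordered label pairs (the encoding is ours):

```python
def independent(a, b, indep):
    return a != b and ((a, b) in indep or (b, a) in indep)

def compute_timestamps(word, threads, alphabet, indep):
    """Vector timestamps in the spirit of Algorithm 4; word is a list of (thread, op) labels."""
    vc_label = {l: {u: 0 for u in threads} for l in alphabet}   # clock of last event per label
    vc_thread = {t: {u: 0 for u in threads} for t in threads}   # clock of last event per thread
    stamps = []
    for (t, op) in word:
        vc = vc_thread[t]
        vc[t] += 1                                  # line 5: local increment
        for l in alphabet:                          # lines 6-7: join clocks of dependent labels
            if not independent((t, op), l, indep):
                for u in threads:
                    vc[u] = max(vc[u], vc_label[l][u])
        vc_label[(t, op)] = dict(vc)                # line 8: clock of lab(f)
        stamps.append(dict(vc))                     # line 9: timestamp of f
    return stamps

def happens_before(vc_e, vc_f):
    """Lemma 5.6: e <= f iff VC_e is componentwise below VC_f."""
    return all(vc_e[u] <= vc_f[u] for u in vc_e)
```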
Therefore, obtaining a vector-clock version of Algorithm 3 is simple. We can replace the initialization (Algorithm 3, line 1) and maintenance (Algorithm 3, lines 3-4) parts with the initialization of all vector clocks and the computation of \(\mathsf{VC}_{\mathsf{lab}(f)}\) as in Algorithm 4. For the partial admissibility check, we first assign \(\mathsf{VC}_{f}\) as \(\mathsf{VC}_{\mathsf{lab}(f)}\) and compute \(\mathsf{PartCandTuple}\) as \(M(\pi^{\prime})\circ\mathsf{VC}_{f}\) (Algorithm 3, lines 9 and 10). Next, we modify the condition in Algorithm 3, line 12 as follows: "\(\forall\mathsf{VC}_{e}\in M(\pi^{\prime})\), \(\mathsf{VC}_{f}\) occurs after \(\mathsf{VC}_{e}\) in \(\mathsf{PartCandTuple}^{\dagger}\) or \(\mathsf{VC}_{e}\not\sqsubseteq\mathsf{VC}_{f}\)". Notice that we use the \(\sqsubseteq\) relation between vector timestamps to determine the happens-before relation.
Fix \((\Sigma,\mathcal{I})\) and the pattern language \(L\). In the following, we assume that every arithmetic operation takes \(O(1)\) time and \(d\), \(|\mathcal{T}|\), \(|\Sigma|\)\(\in O(1)\).
Theorem 5.3 (Correctness).: For any given execution \(\sigma\), \(\mathsf{PatternTrack}\) declares "YES" iff there is a \(\sigma^{\prime}\equiv_{\mathcal{I}}\sigma\) such that \(\sigma^{\prime}\in L\).
Theorem 5.4 (Complexity).: Let \(\sigma\) be an execution. Then, \(\mathsf{PatternTrack}\) runs in time \(O(|\sigma|)\) and uses constant space.
## 6. Implementation and Evaluation
In this section, we discuss the implementation and evaluation of our vector clock algorithm, which we call PatternTrack, and the algorithm due to (Bertoni et al., 1989), which we call Bertoni. The goal of our evaluation is twofold. First, we want to understand whether the framework of pattern languages is expressive enough to encode high-level temporal properties beyond traditional concurrency bugs, such as data races and atomicity violations. Further, we want to demonstrate whether the algorithm we propose can effectively expose those violations. Our second objective is to understand the performance improvement of our linear-time algorithm PatternTrack over the classical algorithm Bertoni and show the scalability of our algorithm. For this, we collect benchmarks from the Java Grande forum benchmark suite [Smith et al., 2001], the DaCapo benchmark suite [Blackburn et al., 2006], as well as GitHub repositories listed in [Legunsen et al., 2016].
### Experimental Setup
**Algorithm Implementation.** We implemented the vector clock version of Algorithm 3 and the classical algorithm from (Bertoni et al., 1989) in Java. Both algorithms have been implemented in a streaming fashion, in that they process each event as soon as it is observed. To ensure a fair comparison between the two algorithms, we give the same execution as input to both. For this, we first generate one or more execution logs (a total of 33 executions) using the instrumentation and logging facility provided by RoadRunner [Flanagan and Freund, 2010], and use the generated trace logs instead of performing monitoring at runtime. The logged events cover the whole program, which may span many classes during its execution. For native operations like reads and writes, the instrumentation resolves references precisely, while for calls to external APIs, extra manual annotations are used to print the addresses of the objects involved.
**Time Reporting.** Comparing the two algorithms in a streaming way and with the same criteria for termination ensures the fairness and correctness of the comparison. Both algorithms return "YES" as soon as they witness that a processed prefix can be reordered as a member of the pattern language. Thus, the length of execution processed by both algorithms is always the same, either the same prefix or the whole execution. This approach to the comparison ensures that any differences in performance are due to algorithmic differences rather than variations in the amount of data processed. In addition, we have imposed a 3-hour timeout (TO). Moreover, the maximum heap size of the Java Virtual Machine is set to 256GB, which puts a limit on the amount of memory that can be used during execution (OOM).
**Machine Configuration.** Our experiments were conducted on a 1996.250MHz 64-bit Linux machine with Java 19 and 256GB heap space.
### Bug Finding
In order to show the ability of our pattern language to encode concurrency properties and the ability of our algorithm to predict violations, we focus on five Java GitHub projects listed in [Legunsen et al., 2016]: logstash-logback-encoder, exp4j, jfreechart, zmq-appender, antlrworks, and examine five high-level properties on them: Class_Invariant, Buffer_ManipulateAfterClose, Collection_UnsafeIterator, Map_UnsafeIterator, Collection_UnsynchronizedAddAll.
Footnote 4: [https://github.com/logfellow/logstash-logback-encoder](https://github.com/logfellow/logstash-logback-encoder)
Footnote 5: [https://github.com/fassey/exp4j](https://github.com/fassey/exp4j)
Footnote 6: [https://github.com/jfree/jfreechart](https://github.com/jfree/jfreechart)
Footnote 7: [https://github.com/lusis/zmq-appender](https://github.com/lusis/zmq-appender)
Footnote 8: [https://github.com/antlr/antlrworks](https://github.com/antlr/antlrworks)
The first property ensures consistency between two fields of a class. The last four come from Java API specifications [Legunsen et al., 2016]. A violation of these specifications would result in runtime errors. In the following, we describe these properties briefly and present how to encode them in our pattern language.
**Class_Invariant.** In many class designs, two fields of a class are required to be in a consistent state after every method call. Consider two fields a, b and two methods f1, f2, both of which write to both a and b. The design specification of such a class requires that, in every execution, calls to f1 and f2 behave atomically; an interleaving of f1 and f2 might leave the two fields in an inconsistent state. We can encode one such violation as \(\mathsf{Patt}_{\mathsf{f1.write(a),\,f2.write(a),\,f2.write(b),\,f1.write(b)}}\).

**Buffer_ManipulateAfterClose.** This property asks whether there is an access (write) to a buffer that follows a close() call on the same buffer. We encode the violation of this property in our pattern language as \(\mathsf{Patt}_{\mathsf{buf.close(),\,buf.write()}}\).

**UnsafeIterator.** The property Collection_UnsafeIterator checks whether an execution modifies a collection while iterating over it. We encode the violation as \(\mathsf{Patt}_{\mathsf{iter.next(),\,c.add(),\,iter.next()}}\), where c is a collection and iter is one of its iterators. The property Map_UnsafeIterator is almost the same, except that iter is one of the iterators of c.entrySet().

**Collection_UnsynchronizedAddAll.** This property checks whether a call to addAll() interleaves with another modification of the same collection. We encode this violation as \(\mathsf{Patt}_{\mathsf{c.addAll()Enter,\,c.add(),\,c.addAll()Exit}}\), where c.addAll()Enter is the event of the method invocation and c.addAll()Exit is the event of the method return.
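A pattern \(\mathsf{Patt}_{a_{1},\ldots,a_{d}}\) accepts exactly the words that contain \(a_{1},\ldots,a_{d}\) as a subsequence, so membership of a fixed word reduces to a single linear scan; the predictive question, i.e., whether some correct reordering of the observed trace matches, is what our algorithm answers. The following sketch illustrates the encodings above as plain label sequences together with the naive membership scan (the event-label strings are our own illustrative representation, not the artifact's actual event format):

```python
# Illustrative encodings of the properties above as label sequences.
PATTERNS = {
    "Class_Invariant":
        ["f1.write(a)", "f2.write(a)", "f2.write(b)", "f1.write(b)"],
    "Buffer_ManipulateAfterClose":
        ["buf.close()", "buf.write()"],
    "Collection_UnsafeIterator":
        ["iter.next()", "c.add()", "iter.next()"],
    "Collection_UnsynchronizedAddAll":
        ["c.addAll()Enter", "c.add()", "c.addAll()Exit"],
}

def in_pattern_language(word, pattern):
    """True iff `pattern` occurs as a subsequence of `word` (one pass)."""
    it = iter(word)
    return all(label in it for label in pattern)  # `in` advances `it`

# This execution order witnesses the Class_Invariant violation:
trace = ["f1.write(a)", "f2.write(a)", "f2.write(b)", "f1.write(b)"]
assert in_pattern_language(trace, PATTERNS["Class_Invariant"])
```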
We run multi-threaded test cases on these projects and log the executions. Based on manual inspection, and since no runtime errors were thrown, we verified that the logged executions themselves do not exhibit any pattern violations. However, as shown in Table 1, our algorithm predicts violations
Table 1: Experimental results on violation prediction. Columns 1-3 present the benchmark name, the number of processed events (\(\mathcal{N}\)), and the number of threads (\(\mathcal{T}\)). Column 4 shows the violation that our algorithm predicts. Columns 5 and 6 report the processing time of PatternTrack and Bertoni.

| Program | \(\mathcal{N}\) | \(\mathcal{T}\) | Property Violation | PatternTrack | Bertoni |
| --- | --- | --- | --- | --- | --- |
| logstash-logback-encoder | 587 | 3 | Buffer_ManipulateAfterClose | 23ms | 64ms |
| jfreechart | 2753 | 3 | Collection_UnsafeIterator | 63ms | 683ms |
| zmq-appender | 10742 | 8 | Map_UnsafeIterator | 168ms | OOM |
| exp4j | 1420 | 3 | Collection_UnsynchronizedAddAll | 56ms | 171ms |
| antlrworks | 1027 | 3 | Class_Invariant | 76ms | 379ms |

```java
// Simplified from org.jfree.chart.JFreeChart (reconstructed from a
// garbled listing; bodies abbreviated as in the original figure).
class JFreeChart {
    public LegendTitle getLegend(int index) {
        int seen = 0;
        Iterator iterator = this.subtitles.iterator();
        while (iterator.hasNext()) {
            Title subtitle = (Title) iterator.next();
            // ... return the subtitle once the index-th legend is seen ...
        }
        return null;
    }

    public void addSubtitle(Title subtitle) {
        // ...
        this.subtitles.add(subtitle);
        // ...
    }
}
```

Figure 3: Simplified code of property violations: the jfreechart code above violates the Collection_UnsafeIterator property, and the logstash code (not recoverable from the source) violates the Buffer_ManipulateAfterClose property.
in these executions, indicating the ability of PatternTrack to find real-world bugs. This result also demonstrates the performance improvement of PatternTrack over Bertoni. In the test case zmq-appender, we observe an out-of-memory error for Bertoni. Recall that the space complexity of Bertoni is \(O(\mathcal{N}^{\mathcal{T}})\), which is large when the number of processed events and the number of threads are large; for zmq-appender, with \(\mathcal{N}=10742\) and \(\mathcal{T}=8\), this bound already exceeds \(10^{32}\). In the next part, we will present more experiments on performance comparison.
In the following, we provide some examples of code snippets. First, Figure 3(a) shows a real bug violating the property Collection_UnsafeIterator in JFreeChart. When two different threads call the methods getLegend and addSubtitle concurrently, a ConcurrentModificationException might be thrown. This is because the call to add() in addSubtitle might interleave between two next() calls. Next, another code snippet, from logstash, is shown in Figure 3(b), where the Buffer_ManipulateAfterClose property might be violated. Even though the write() method checks this.closed before writing to the buffer, it is still possible that setting this.closed in close() occurs between the check (line 10) and the write (lines 13 - 17). The author of this project has stated that this class is not thread-safe. The last example, from antlrworks, has been presented as the motivating example in Example 2.1.
To conclude, our pattern language framework is capable of encoding real-world high-level concurrency properties and our algorithm can predict violations effectively.
### Performance Evaluation
This section comprises three parts. In the first part, we explain the policy for selecting the patterns used in the performance evaluation. In the second part, we compare the performance of our algorithm with that of the algorithm in (Bertoni et al., 1989). In the last part, we investigate the impact of several related parameters on the performance of our algorithm. Specifically, we examine how the performance of our algorithm evolves with changes in the length of the execution and the number of threads in an execution.
**Pattern Generation.** In order to ensure that the comparison is sufficient and fair, we generated pattern languages of dimension \(d=5\) and \(d=3\) at random by sampling events in each execution log. The choice of \(d=5\) and \(d=3\) is in line with prior empirical observations that most concurrent bugs in real-world scenarios can be observed by the ordering of a small number of events (Burckhardt et al., 2010). In our experiment, a pattern is a sequence of program locations \(\langle l_{1},\ldots,l_{d}\rangle\) instead of labels. Each program location can be viewed as a set of program labels, so we evaluated the membership problem against a generalized pattern language \(L=\bigcup_{a^{1}\in l_{1},\ldots,a^{d}\in l_{d}}\texttt{Patt}_{a^{1},\ldots,a^{d}}\).
To generate a pattern, we randomly choose 5 or 3 events and log their program locations. We employed two policies for selecting events. The first policy we utilized is locality. We divided each execution into 100 parts of equal length and randomly selected one of those parts as the source of one pattern. Events chosen from widely separated parts of the execution are unlikely to participate in a bug-inducing pattern. The second policy is diversity. We choose events from as many different threads as possible, as this leads to more concurrency in the selected program locations.
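The following sketch summarises the two selection policies, assuming (for illustration only) that each logged event is a record with its trace index, thread identifier, and program location:

```python
import random

def sample_pattern(events, d=3, num_parts=100):
    """Sample d program locations from one execution log.

    `events` is a list of dicts with keys "index", "thread" and
    "location" (an assumed format). Locality: draw from one of
    `num_parts` equal chunks. Diversity: prefer unseen threads.
    """
    n = len(events)
    part = random.randrange(num_parts)                      # locality
    pool = events[part * n // num_parts:(part + 1) * n // num_parts]
    random.shuffle(pool)

    chosen, seen_threads = [], set()
    for e in pool:                                          # diversity pass
        if len(chosen) < d and e["thread"] not in seen_threads:
            chosen.append(e)
            seen_threads.add(e["thread"])
    for e in pool:                                          # fill up if short
        if len(chosen) == d:
            break
        if e not in chosen:
            chosen.append(e)

    chosen.sort(key=lambda e: e["index"])                   # keep trace order
    return [e["location"] for e in chosen]
```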
**Speedup over (Bertoni et al., 1989).** Theoretical analysis indicates that our algorithm runs in \(O(|\sigma|)\) time, while the algorithm in (Bertoni et al., 1989) runs in \(O(|\sigma|^{\mathcal{T}})\) time. As shown in Table 2 and Table 3, our algorithm PatternTrack exhibits significant performance advantages over Bertoni in practice. Additionally, Bertoni is not scalable when the length of the execution increases, as evidenced by the numerous "OOM" (Out of Memory) cases in the tables. This is because Bertoni needs to store all ideals of a partial order, which consumes \(O(|\sigma|^{\mathcal{T}})\) space. As a result, this limits its ability to process larger executions. In contrast, our constant-space algorithm PatternTrack is not burdened by this limitation and can handle executions with millions of events.
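The contrast in resource usage can be traced back to the underlying data structures. The following sketch shows generic vector clock machinery for streaming happens-before tracking over lock events (a simplified illustration, not a reproduction of Algorithm 3): every per-event update touches clocks of length \(\mathcal{T}\), so the running time scales linearly with the trace while the memory footprint is independent of the trace length.

```python
class VectorClock:
    """Fixed-size clock over T threads; each operation costs O(T)."""

    def __init__(self, num_threads):
        self.c = [0] * num_threads

    def increment(self, t):
        self.c[t] += 1

    def join(self, other):  # pointwise maximum of two clocks
        self.c = [max(a, b) for a, b in zip(self.c, other.c)]

    def copy(self):
        vc = VectorClock(len(self.c))
        vc.c = list(self.c)
        return vc

def stream_happens_before(events, num_threads, num_locks):
    """Stream (kind, thread, lock) events; memory is O(T * num_locks),
    independent of the execution length."""
    thread_clock = [VectorClock(num_threads) for _ in range(num_threads)]
    lock_clock = [VectorClock(num_threads) for _ in range(num_locks)]
    for kind, t, l in events:
        if kind == "acq":    # acquire is ordered after the last release
            thread_clock[t].join(lock_clock[l])
        elif kind == "rel":  # release publishes the thread's knowledge
            thread_clock[t].increment(t)
            lock_clock[l] = thread_clock[t].copy()
    return thread_clock
```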
**Performance w.r.t. execution length.** This evaluation aims to demonstrate that our algorithm runs in linear time. We selected two executions, from the benchmarks Raytracer and
Table 2: Experimental results on patterns of dimension 3. Columns 1-3 contain benchmark information, including name, number of events, and number of threads. Columns 4-13 report the first pattern match time or the full processing time (if there is no match) of PatternTrack and Bertoni on five different 3-dimension patterns. Note that the patterns in the same column differ across benchmarks; for clarity and ease of presentation, we name the five patterns Pattern 1-5. (Only the Mean row of this table is recoverable from the source: over a total of 168M events, PatternTrack averaged roughly 30-35s per pattern, while Bertoni averaged over two hours.)
Sparsematmult, based on their length and the time taken to witness the first pattern match. This selection criterion ensures that we can gather sufficient data to evaluate the performance of our algorithm across a range of execution lengths. To accomplish this, we recorded the time taken by our algorithm to process every 10 million events for Raytracer and every 20 million events for Sparsematmult. Figure 4(a) presents the results, which are in alignment with our theoretical result.
**Performance w.r.t. number of threads.** The number of threads is the final parameter of interest. As the time consumption of each vector clock operation is proportional to the number of threads, we expect our algorithm's time consumption to also be proportional to the number of threads. To test this hypothesis, we generated seven executions from the benchmark Raytracer with 5, 10, 15, 20, 25, 30, and 35 threads respectively. Raytracer allows us to configure the number of threads when generating executions. We then recorded the time taken by PatternTrack to process 1,000,000 events for each execution. The results, in Figure 4(b), show that the time consumption increases as the number of threads increases, which confirms our theoretical expectation.
Table 3: Experimental results on patterns of dimension 5. Columns 1-3 contain benchmark information, including name, number of events, and number of threads. Columns 4-13 report the first pattern match time or the full processing time (if there is no match) of PatternTrack and Bertoni on five different patterns. (Only the Mean row of this table is recoverable from the source: over a total of 168M events, PatternTrack averaged between roughly 35s and 2m8s per pattern, while Bertoni averaged around two and a half hours.)
## 7. Related Work
Runtime verification has emerged as a popular class of techniques instrumental in applying light-weight formal methods in industrial settings and is actively studied. Many techniques have been proposed to monitor different specification languages at runtime, such as LTL (Rosu and Havelund, 2005), extended regular expressions (Sen and Rosu, 2003), and CFG (Meredith et al., 2010). Furthermore, monitoring frameworks have been developed to bridge the gap between formal algorithms and practical testing, such as Java-MaC (Kim et al., 2004) and JavaMOP (Jin et al., 2012). However, in general, such techniques are catered towards the "detection" of property violations, in contrast to the "prediction" problem we address in our work.
Predictive monitoring techniques are studied to enhance the efficiency of runtime verification. Most works in runtime predictive analysis are centered around the detection of data races. Some of the early works that studied predictive analysis for data races are instrumental in formulating core ideas such as sound and maximal causal models and correct reorderings (Chen and Rosu, 2007; Chen et al., 2008; Huang et al., 2015, 2014; Said et al., 2011; Sen et al., 2005, 2006; Serbanuta et al., 2013). However, such techniques primarily employ heavy-weight approaches such as exhaustive enumeration or the use of SAT/SMT solvers, limiting their scalability in practice. Thus the focus of recent works in predictive analysis tools (Cai et al., 2021; Kini et al., 2017; Mathur et al., 2021; Roemer and Bond, 2019; Roemer et al., 2020; Zhu et al., 2023) has been on efficiency and practical scalability. Such techniques develop carefully crafted algorithms relying on specific heuristics or partial orders and avoid heavy-weight SMT solving. Beyond data races, recent improvements in these techniques have been used for predicting other concurrency bugs such as deadlocks (Cai et al., 2021; Kalhauge and Palsberg, 2018; Sorrentino et al., 2010; Tunc et al., 2023). Some works also propose predictive methods for higher-level but specific properties such as use-after-free violations (Huang, 2018), null-pointer dereferences (Farzan et al., 2012), and violations of atomicity specifications (Biswas et al., 2014; Farzan and Madhusudan, 2008, 2009; Flanagan et al., 2008; Mathur and Viswanathan, 2020; Sinha et al., 2012). Unlike those specific properties, our work focuses on more general properties. An important prior technique for prediction against generic properties is GPredict (Huang et al., 2015); however, it relies on an SMT solver and is thus not scalable in practice.
The notion of a sound causal model, from which predictive monitoring algorithms infer the existence of an execution violating the specification, is an important component in predictive monitoring. Reasoning based on Mazurkiewicz traces, though less exhaustive, is in general more scalable than reasoning based on more heavyweight notions such as correct reorderings (Serbanuta et al., 2013; Smaragdakis et al., 2012). Indeed, the most widely adopted data race detection techniques crucially rely on the happens-before partial order (Pozniansky and Schuster, 2003), which implicitly builds upon Mazurkiewicz traces. Recent complexity-theoretic results on predictive analysis (Kulkarni et al., 2021; Mathur et al., 2020) also characterise the precise complexity (NP-completeness and W[1]-hardness) of predictive race detection. Our work has been inspired by this contrast in the computational complexity of reasoning with Mazurkiewicz traces versus correct reorderings. Unfortunately, though, our hardness result shows that this insight does not trivially generalise to arbitrary properties beyond data races. Our work therefore proposes pattern languages to offer a good balance of expressiveness and algorithmic efficiency. In addition, many theoretical results have been established in Mazurkiewicz trace theory (Diekert and Rozenberg, 1995), which could provide intuitions for predictive monitoring algorithms. For example, Ochmanski's theorem (Ochmanski, 1985) characterises, via the notion of _star-connectedness_, a subset of regular languages whose trace equivalence classes remain regular. Pattern languages, proposed in our work, fit into this framework. However, by the general theorem, the size of the automata accepting the trace equivalence class might be a tower function in \(d\) and \(|\Sigma|\) and the
structure is unclear. Our work fills the gap between theoretical results and practical techniques by designing small-size automata for predictive monitoring against pattern languages.
Soundness (absence of false positives), although highly desirable, often comes at the cost of computational complexity. Consequently, many runtime predictive analysis techniques forego soundness in favour of simplicity of analysis, including the Eraser algorithm based on lock-sets (Savage et al., 1997) and work on deadlock prediction (Bensalem and Havelund, 2005; Cai et al., 2020). Some approaches perform post-hoc analysis to reduce false positives (Roemer et al., 2018, 2020). Other approaches rely on re-executions to confirm the predicted bugs (Joshi et al., 2009; Sorrentino et al., 2010).
Concurrency bug detection has been an active area of research for several decades. Besides runtime verification and predictive monitoring, there are many other techniques for the analysis of concurrent programs. Model checking (Clarke et al., 1986) has emerged as a popular paradigm for finding bugs, thanks to advances such as dynamic partial order reduction (Flanagan and Godefroid, 2005) and stateless model checking (Abdulla et al., 2014; Kokologiannakis et al., 2022, 2019; Oberhauser et al., 2021). Randomised testing techniques (Burckhardt et al., 2010; Joshi et al., 2009; Ozkan et al., 2019; Yuan et al., 2018) as well as fuzz testing techniques (Jeong et al., 2019) have also been shown effective in practice. Static analysis techniques (Blackshear et al., 2018; Engler and Ashcraft, 2003; Naik et al., 2006; Young et al., 2007) have been developed but their adoption is often limited by high false positive rates. Type systems for preventing data races (Boyapati et al., 2002; Flanagan et al., 2008; Flanagan and Qadeer, 2003) have been instrumental in the design of programming languages such as Rust.
## 8. Conclusions
In this work, we study the predictive monitoring problem -- given an execution of a concurrent program, can it be reordered to witness the violation of a specification? We show that for specifications expressed using regular languages, and when the reorderings are restricted to the _trace equivalence_ class of the observed execution, this problem suffers from a high polynomial lower bound on its complexity. Towards this, we propose a sub-class of regular languages, called (generalized) pattern languages, and show that this class of languages can be effectively monitored, in a predictive sense, using a constant-space linear-time algorithm. Our experimental evaluation, using an implementation of the algorithm PatternTrack we develop, shows the effectiveness of our proposed class of specification languages and the algorithms for (predictively) monitoring against them.
There are many avenues for future work and we list some here. We expect pattern languages to be useful for enhancing controlled concurrency testing techniques (Agarwal et al., 2021; Musuvathi and Qadeer, 2007) and fuzz testing for concurrent software (Jeong et al., 2019). Building optimal stateless model checking (Kokologiannakis et al., 2022) algorithms for richer specifications such as pattern languages is another interesting direction in the same vein as the work proposed here.
## Acknowledgement
We thank the anonymous reviewers for several comments that helped improve the paper. We also acknowledge Vladimir Gladshtein and Martin Mirchev for their help. |
2305.15961 | Quantifying the Intrinsic Usefulness of Attributional Explanations for
Graph Neural Networks with Artificial Simulatability Studies | Despite the increasing relevance of explainable AI, assessing the quality of
explanations remains a challenging issue. Due to the high costs associated with
human-subject experiments, various proxy metrics are often used to
approximately quantify explanation quality. Generally, one possible
interpretation of the quality of an explanation is its inherent value for
teaching a related concept to a student. In this work, we extend artificial
simulatability studies to the domain of graph neural networks. Instead of
costly human trials, we use explanation-supervisable graph neural networks to
perform simulatability studies to quantify the inherent usefulness of
attributional graph explanations. We perform an extensive ablation study to
investigate the conditions under which the proposed analyses are most
meaningful. We additionally validate our methods applicability on real-world
graph classification and regression datasets. We find that relevant
explanations can significantly boost the sample efficiency of graph neural
networks and analyze the robustness towards noise and bias in the explanations.
We believe that the notion of usefulness obtained from our proposed
simulatability analysis provides a dimension of explanation quality that is
largely orthogonal to the common practice of faithfulness and has great
potential to expand the toolbox of explanation quality assessments,
specifically for graph explanations. | Jonas Teufel, Luca Torresi, Pascal Friederich | 2023-05-25T11:59:42Z | http://arxiv.org/abs/2305.15961v1 | # Quantifying the Intrinsic Usefulness of
###### Abstract
Despite the increasing relevance of explainable AI, assessing the quality of explanations remains a challenging issue. Due to the high costs associated with human-subject experiments, various proxy metrics are often used to approximately quantify explanation quality. Generally, one possible interpretation of the quality of an explanation is its inherent value for teaching a related concept to a student. In this work, we extend artificial simulatability studies to the domain of graph neural networks. Instead of costly human trials, we use explanation-supervisable graph neural networks to perform simulatability studies to quantify the inherent _usefulness_ of attributional graph explanations. We perform an extensive ablation study to investigate the conditions under which the proposed analyses are most meaningful. We additionally validate our method's applicability on real-world graph classification and regression datasets. We find that relevant explanations can significantly boost the sample efficiency of graph neural networks and analyze the robustness towards noise and bias in the explanations. We believe that the notion of usefulness obtained from our proposed simulatability analysis provides a dimension of explanation quality that is largely orthogonal to the common practice of faithfulness and has great potential to expand the toolbox of explanation quality assessments, specifically for graph explanations.
Keywords: Graph Neural Networks, Explainable AI, Explanation Quality, Simulatability Study
## 1 Introduction
Explainable AI (XAI) methods are meant to provide explanations alongside a complex model's predictions to make its inner workings more transparent to human operators to improve trust and reliability, provide tools for retrospective model analysis, as well as to comply with anti-discrimination laws [6]. Despite
recent developments and a growing corpus of XAI methods, a recurring challenge remains the question of how to assess the quality of the generated explanations. Since explainability methods aim to improve human understanding of complex models, Doshi-Velez and Kim [6] argue that ultimately the quality of explanations has to be assessed in a human context. To accomplish this, the authors propose the idea of simulatability studies. In that context, human subjects are tasked to simulate the behavior of a machine-learning model given different amounts of information. While a control group of participants receives only the model input-output information, the test group additionally receives the explanations in question. If, in that case, the test group performs significantly better at simulating the behavior, the explanations can be assumed to contain information useful to human understanding of the task. However, human trials such as this are costly and time-consuming, especially considering the number of participants required to obtain a statistically significant result. Therefore, the majority of XAI research is centered around more easily available proxy metrics such as explanation sparsity and faithfulness.
While proxy metrics are an integral part of the XAI evaluation pipeline, we argue that the quantification of usefulness obtained through simulatability studies is an important next step toward comparing XAI methods and thus increasing the impact of explainable AI. Recently, Pruthi _et al._[21] introduce the concept of _artificial simulatability studies_ as a trade-off between cost and meaningfulness. Instead of using human subjects, the authors use explanation-supervisable neural networks as participants to conduct simulatability studies for natural language processing tasks.
In this work, we extend the concept of artificial simulatability studies to the domain of graph neural networks and specifically node and edge attributional explanations thereof. This application has only been enabled through the recent development of sufficiently explanation-supervisable graph neural network approaches [26]. We will henceforth refer to this artificial simulatability approach as the student-teacher analysis of explanation quality: The explanations in question are considered to be the "teachers" that are evaluated on their effectiveness of communicating additional task-related information to explanation-supervisable "student" models. We show that, under the right circumstances, explanation supervision leads to significantly improved main task prediction performance w.r.t. to a reference. We first conduct an extensive ablation study on a specifically designed synthetic dataset to highlight the conditions under which this effect can be optimally observed. Most importantly, we find that the underlying student model architecture has to be sufficiently capable to learn explanations during explanation-supervised training. Our experiments show, that this is especially the case for the self-explaining MEGAN architecture, which was recently introduced by Teufel _et al._[26].
Additionally, we find that the target prediction problem needs to be sufficiently challenging to the student models to see a significant effect. We can furthermore show that while ground truth explanations cause an increase in performance,
deterministically incorrect/adversarial explanations cause a significant decrease in performance. In the same context, random explanation noise merely diminishes the benefit of explanations, but neither causes a significant advantage nor a disadvantage.
Finally, we validate the applicability of our method on explanations for one real-world molecular classification and one molecular regression dataset.
## 2 Related Work
**Simulatability Studies.** Doshi-Velez and Kim [6] introduce the concept of simulatability studies, in which human participants are asked to simulate the forward predictive behavior of a given model. Explanations about the model behavior should be considered useful if a group of participants with access to these explanations performs significantly better than a control group without them. Such studies are only rarely found in the growing corpus of XAI literature due to the high effort and cost associated with them. Nonetheless, some examples of such studies can be found. Chandrasekaran _et al._[4] for example conduct a simulatability study for a visual question answering (VQA) task. The authors investigate the effect of several different XAI methods such as GradCAM and attention among other aspects. They find no significant performance difference for participants when providing explanations. Hase and Bansal [10] conduct a simulatability study for a sentiment classification task. They can only report significant improvements for a small subset of explanation methods. Lai _et al._[14, 13] conduct a simulatability study for a deception detection task. Unlike previously mentioned studies, the authors ask participants to predict ground truth labels instead of simulating a model's predictions. Among different explanation methods, they also investigate the effects of other assistive methods on human performance, such as procedurally generated pre-task tutorials and real-time feedback. The study shows that real-time feedback is crucial to improve human performance. In regard to explanations, the authors find that especially simplistic explanations methods seem to be more useful than more complicated deep-learning-based ones and that providing the polarity of attributional explanations is essential.
Beyond the cost and effort associated with human trials, previous studies report various additional challenges when working with human subjects. One issue seems to be the limited working memory of humans, where participants report forgetting previously seen relevant examples along the way. Another issue is the heterogeneity of participants' abilities, which causes a higher variance in performance results, necessitating larger sample sizes to obtain statistically significant results. Overall, various factors contribute to such studies either not observing any effect at all or reporting only on marginal explanation benefits.
One possible way to address this is proposed by Arora _et al._[2], who argue to rethink the concept of simulatability studies itself. In their work, instead of merely using human subjects as passive predictors, the participants are encouraged to interactively engage with the system. In addition to guessing the model
prediction, participants are asked to make subsequent single edits to the input text with the goal of maximizing the difference in model confidence. The metric of the average confidence deviation per edit can then also be seen as a measure of human understanding of the model's inner workings. The authors argue that such an explorative and interactive study design is generally more suited to the strengths of human subjects and avoids their respective weaknesses.
Another approach is represented by the emergent idea of _artificial simulatability studies_, which generally aim to substitute human participants in these kinds of studies with machine learning models that are able to learn from explanations in a similar manner. There exist early variations of this basic idea [11, 27], for which conceptual problems have been pointed out [21]. Most notably, some methods expose explanations during test time, which may cause label leakage. Recently, Pruthi _et al._[21] devise a method that does not expose explanations during test time by leveraging explanation-supervised model training. They are able to show a statistically significant test performance benefit for various explanation methods, as well as for explanations derived from human experts, in natural language processing tasks. In our work, we build on the basic methodology proposed by Pruthi _et al._ and use explanation-supervisable student models to avoid the label-leakage problem. Furthermore, we extend their basic approach toward a more rigorous method. The authors consider the _absolute_ performance of the explanation-supervised student by itself as an indicator of simulatability. We argue that, due to the stochastic nature of neural network training, potential simulatability benefits should only be considered on a statistical level obtained through multiple independent repetitions, only _relative_ to a direct reference, and verified by tests of statistical significance.
#### Explanation Supervision for GNNs
Artificial simulatability studies, as previously discussed, require student models which are capable of _explanation supervision_. This means that it should be possible to directly train the generated explanations to match some given ground truth explanations during the model training phase. Explanation supervision has already been successfully applied in the domains of image processing [16] and natural language processing [3]. However, only recently was the practice successfully adapted to the domain of graph neural networks as well. First, Gao _et al._[8] propose the GNES framework, which aims to use the differentiable nature of various existing post-hoc explanation methods such as GradCAM and LRP to perform explanation supervised training. Teufel _et al._[26] on the other hand introduce the MEGAN architecture which is a specialized attention-based architecture showing especially high potential for explanation-supervision. To the best of our knowledge, these two methods remain the only existing methods for explanation-supervision of graph _attributional_ explanations until now.
In addition to attributional explanations, several other types of explanations have been introduced. Noteworthy examples are prototype-based explanations [23] and concept-based explanations [19]. In the realm of prototype explanations, Zhang _et al._[28] and Dai and Wang [5] introduce self-explaining prototype-based
graph neural networks, although it has not yet been demonstrated if and how explanation-supervision could be applied to them. For concept-based explanations, on the other hand, Magister _et al._[18] demonstrate explanation supervision, opening up the possibility to extend artificial simulatability studies to explanation modalities beyond simple attributional explanations as well.
## 3 Student-Teacher Analysis of Explanation Quality
Simulatability studies aim to assess how useful a set of explanations is in improving human understanding of a related task. To offset the high cost and uncertainty associated with human-subject experiments, Pruthi _et al._[21] introduce artificial simulatability studies, which substitute human participants with explanation-aware neural networks, for natural language processing tasks. In this section, we describe our extension of this principle idea to the application domain of graph neural networks and introduce the novel STS metric which we use to quantify the explanation-induced performance benefit.
We assume a directed graph \(\mathcal{G}=(\mathcal{V},\mathcal{E})\) is represented by a set of node indices \(\mathcal{V}=\{1,\ldots,V\}\) and a set of edges \(\mathcal{E}\subseteq\mathcal{V}\times\mathcal{V}\), where a tuple \((i,j)\in\mathcal{E}\) denotes a directed edge from node \(i\) to node \(j\). Every node \(i\) is associated with a vector of initial node features \(\mathbf{h}_{i}^{(0)}\in\mathbb{R}^{N_{0}}\), combining into the initial node feature tensor \(\mathbf{H}^{(0)}\in\mathbb{R}^{V\times N_{0}}\). Each edge is associated with an edge feature vector \(\mathbf{u}_{ij}^{(0)}\in\mathbb{R}^{M}\), combining into the edge feature tensor \(\mathbf{U}\in\mathbb{R}^{E\times M}\). Each graph is also annotated with a target value vector \(\mathbf{y}^{\text{true}}\in\mathbb{R}^{C}\), which is either a one-hot encoded vector for classification problems or a vector of continuous values for regression problems. For each graph there exist node and edge attributional explanations in the form of a node importance tensor \(\mathbf{V}\in[0,1]^{V\times K}\) and an edge importance tensor
Figure 1: Illustration of the student-teacher training workflow as well as the setting of our artificial simulatability study.
\(\mathbf{E}\in[0,1]^{E\times K}\) respectively. \(K\) is the number of explanation channels and is usually equal to the size \(C\) of the target vector, meaning that for every target value each element of the input graph is annotated with a 0 to 1 value indicating that element's importance.
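As a concrete illustration of this notation, a single dataset element can be represented by the following array shapes (a sketch with placeholder random values, independent of any specific framework):

```python
import numpy as np

V, E, N0, M, C = 24, 52, 3, 1, 2   # nodes, edges, feature / target sizes
K = C                              # one explanation channel per target

edges = np.random.randint(0, V, size=(E, 2))  # directed (i, j) pairs
H0 = np.random.rand(V, N0)                    # node feature tensor H^(0)
U0 = np.random.rand(E, M)                     # edge feature tensor U^(0)
y = np.eye(C)[0]                              # one-hot target vector y^true
V_imp = np.random.rand(V, K)                  # node importances in [0, 1]
E_imp = np.random.rand(E, K)                  # edge importances in [0, 1]
```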
In the framework of artificial simulatability studies, human participants are replaced by explanation-aware machine learning models which will be referred to as _students_. In this analogy, the _teacher_ is represented by the dataset of input graphs and target value annotations, as well as the explanations whose quality is to be determined. Figure 1 illustrates the concept of such a _student-teacher analysis_ of explanation quality. The set \(\mathbb{X}\) of input data consists of tuples \((G,\mathbf{H}^{(0)},\mathbf{U}^{(0)})\) of graphs and their features. The set \(\mathbb{Y}\) consists of tuples \((\mathbf{y},\mathbf{V},\mathbf{E})\) of target value annotations, as well as node and edge attributional explanations. A student is defined as a parametric model \(\mathcal{S}_{\theta}:(G,\mathbf{H}^{(0)},\mathbf{U}^{(0)})\rightarrow(\mathbf{ y},\mathbf{V},\mathbf{E})\) with the trainable model parameters \(\boldsymbol{\theta}\). This firstly implies that every student model has to directly output explanations alongside each prediction. Moreover, these generated explanations have to be actively _supervisable_ to qualify as an explanation-aware student model.
During a single iteration of the student-teacher analysis, the sets of input and corresponding output data are split into a training set \(\mathbb{X}^{\text{train}},\mathbb{Y}^{\text{train}}\) and an unseen test set \(\mathbb{X}^{\text{test}},\mathbb{Y}^{\text{test}}\) respectively. Furthermore, two architecturally identical student models are initialized with the same initial model parameters \(\boldsymbol{\theta}\): the reference student model \(\mathcal{S}^{\text{ref}}_{\theta}\) and the explanation-aware student model \(\mathcal{S}^{\text{exp}}_{\theta}\). During the subsequent training phase, the reference student only gets to train on the main target value annotations \(\mathbf{y}\), while the explanation student is additionally trained on the given explanations. After the two students have been trained on the same training elements with the same hyperparameters, their final prediction performance is evaluated on the unseen test data. If the explanation student outperforms the reference student on the final evaluation, we can assume that the given explanations contain additional task-related information and can thus be considered useful in this context.
However, the training of complex models, such as neural networks, is a stochastic process that generally only converges to a local optimum. For this reason, a single execution of the previously described process is not sufficient to assess a possible performance difference. Rather, a repeated execution is required to confirm the statistical significance of any result. Therefore, we define the student-teacher analysis as the \(R\) repetitions of the previously described process, resulting in the two vectors of test set evaluation performances \(\mathbf{p}^{\text{ref}},\mathbf{p}^{\text{exp}}\in\mathbb{R}^{R}\) for the two student models respectively. The concrete type of metric used to determine the final performance may differ, as is the case with classification and regression problems for example. Based on this definition we define the _student-teacher simulatability_ metric
\[\text{STS}_{R}=\text{median}(\mathbf{p}^{\text{exp}}-\mathbf{p}^{\text{ref}})\]
as the median of the pairwise performance differences between all the individual explanation students' and reference students' evaluation results. We choose
the median here instead of the arithmetic mean, due to its robustness towards outliers, which may occur when models sporadically fail to properly converge in certain iterations of the procedure.
In addition to the calculation of the STS metric, a paired t-test is performed to assure the statistical significance of the results. Only if the p-value of this test is below a 5% significance level should the analysis results be considered meaningful.
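Putting these steps together, the full analysis can be sketched as follows, where `make_student`, `fit`, `evaluate`, and `random_split` stand in for the concrete student implementation and data handling (all four are assumed helpers, not part of a published API):

```python
import numpy as np
from scipy import stats

def student_teacher_analysis(make_student, fit, evaluate,
                             dataset, R=25, alpha=0.05):
    """Return STS_R and whether the paired t-test deems it significant.

    `make_student()` must return identically initialised models on each
    call (e.g. by fixing the weight-initialisation seed per repetition).
    """
    p_ref, p_exp = np.zeros(R), np.zeros(R)
    for r in range(R):
        train, test = random_split(dataset)          # assumed helper
        ref, exp = make_student(), make_student()    # same parameters theta
        fit(ref, train, use_explanations=False)      # reference student
        fit(exp, train, use_explanations=True)       # explanation student
        p_ref[r] = evaluate(ref, test)
        p_exp[r] = evaluate(exp, test)

    sts = float(np.median(p_exp - p_ref))            # STS_R as defined above
    _, p_value = stats.ttest_rel(p_exp, p_ref)       # paired t-test
    return sts, p_value < alpha
```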
## 4 Computational Experiments
### Ablation Study for a Synthetic Graph Classification Dataset
We first conduct an ablation study on a specifically designed synthetic graph dataset to show the special conditions under which a performance benefit for the explanation student can be observed.
We call the synthetic dataset created for this purpose _red and blue adversarial motifs_ and a visualization of it can be seen in Figure 2. The dataset consists of 5000 randomly generated graphs where each node is associated with 3 node features representing an RGB color code. Each graph is seeded with one primarily red motif: Half of the elements are seeded with the red and yellow star motif and are consequently labeled as the "active" class. The other half of the elements are seeded with a red and green ring motif and labeled as "inactive". The dataset represents a binary classification problem where each graph will have to be classified as either active or inactive. As each class assignment is entirely based on the existence of the corresponding sub-graph motifs, these motifs are considered the perfect ground truth explanations for that dataset. In addition to the primarily red motifs, each graph is also seeded with one primarily blue motif: Either a blue-yellow ring motif or a blue-green star motif. These blue motifs are seeded such that their distribution is completely uncorrelated with the true class label of the elements. Thus, these motifs are considered deterministically incorrect/adversarial explanations w.r.t. the main classification task.
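The construction can be summarised in the following sketch, which reflects our reading of the description above rather than the released generator; `seed_motif` is a hypothetical helper that attaches a coloured star or ring sub-graph and marks its nodes as ground-truth explanation:

```python
import random
import networkx as nx

RGB = {"red": [1, 0, 0], "green": [0, 1, 0],
       "blue": [0, 0, 1], "yellow": [1, 1, 0]}

def seed_motif(g, shape, colors):
    """Hypothetical helper: attach a 6-node star or ring whose nodes use
    the two given colors and carry ground-truth importance 1.0."""
    base = g.number_of_nodes()
    motif = nx.star_graph(5) if shape == "star" else nx.cycle_graph(6)
    g.update(nx.relabel_nodes(motif, lambda n: base + n))
    for i in range(6):
        g.nodes[base + i]["rgb"] = RGB[colors[0] if i < 3 else colors[1]]
        g.nodes[base + i]["true_importance"] = 1.0
    g.add_edge(base, random.randrange(base))  # connect motif to the graph

def make_element():
    """One synthetic graph: random RGB base + one red + one blue motif."""
    g = nx.erdos_renyi_graph(n=random.randint(20, 40), p=0.15)
    for n in g.nodes:
        g.nodes[n]["rgb"] = [random.random() for _ in range(3)]
        g.nodes[n]["true_importance"] = 0.0

    active = random.random() < 0.5
    if active:    # red-yellow star determines the "active" class
        seed_motif(g, shape="star", colors=("red", "yellow"))
    else:         # red-green ring determines the "inactive" class
        seed_motif(g, shape="ring", colors=("red", "green"))

    # The blue motif is drawn independently of the label, making it a
    # deterministic but adversarial (uncorrelated) explanation.
    if random.random() < 0.5:
        seed_motif(g, shape="ring", colors=("blue", "yellow"))
    else:
        seed_motif(g, shape="star", colors=("blue", "green"))
    return g, int(active)
```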
**Student Model Implementations.** We conduct an experiment to assess the suitability of different student model implementations. As previously explained, a student model has to possess two main properties: node and edge explanations have to be generated alongside each prediction, and, more importantly, it has to be possible to train the model on these explanations in a supervised manner. To the best of our knowledge, there exist two methods in the literature which do this for _attributional_ explanations: the GNES framework of Gao _et al._[8] and the MEGAN architecture of Teufel _et al._[26]. We conduct an experiment with \(R=25\) repetitions of the student-teacher analysis for three different models: a lightweight MEGAN model, GNES explanations based on a simple GCN network, and GNES explanations based on a simple GATv2 network. In each iteration, 100 elements of the dataset are used to train the student model while the rest is used during testing. Table 1 shows the results of this experiment. We
report the final STS value, as well as the node and edge AUC metrics, which indicate how well the explanations of the corresponding models match the ground truth explanations of the test set.
Since the perfect ground truth explanations are used for this experiment, we expect the explanation student to have the maximum possible advantage w.r.t. the explanations. The results show that only the MEGAN student achieves a statistically significant STS value, with a median 12% accuracy improvement for the explanation-aware student. The GNES experiments, on the other hand, do not show statistically significant performance benefits. We believe that this is due to the limited effect of the explanation supervision that can be observed in these cases: while the node and edge accuracy of the GNES explanation student only improves by a few percent, the MEGAN explanation student almost perfectly learns the ground truth explanations. This is consistent with the results
Table 1: Results for 25 repetitions of the student-teacher analysis for different reference (Ref) and explanation-supervised (Exp) student model implementations.

| Student Model | STS\({}_{25}\) \(\uparrow\) | Node AUC Ref | Node AUC Exp | Edge AUC Ref | Edge AUC Exp |
| --- | --- | --- | --- | --- | --- |
| GNES\({}_{\text{GCN}}\) | 0.02 | 0.55\(\pm\)0.04 | 0.59\(\pm\)0.03 | 0.64\(\pm\)0.04 | 0.66\(\pm\)0.04 |
| GNES\({}_{\text{GATv2}}\) | 0.01 | 0.59\(\pm\)0.05 | 0.61\(\pm\)0.05 | 0.51\(\pm\)0.05 | 0.55\(\pm\)0.04 |
| MEGAN\({}_{0.0}^{2}\) | **0.12\({}^{(*)}\)** | 0.64\(\pm\)0.15 | **0.94\(\pm\)0.01** | 0.66\(\pm\)0.14 | **0.96\(\pm\)0.02** |

\({}^{(*)}\) Statistically significant according to a paired T-test with \(p<5\%\).
Figure 2: Synthetic dataset used to quantify the usefulness of attributional graph explanations, incl. testing the robustness toward adversarial explanations.
reported by Teufel _et al._[26], who find that MEGAN outperforms the GNES approach in its capability for explanation supervision. A possible explanation for this is that the explanation-supervised training of the already gradient-based explanations of GNES relies on a second derivative of the network, which might exert a generally weaker influence on the network's weights.
Based on this result, we only investigate the MEGAN student in subsequent experiments.
#### Training Dataset Size Sweep
In this experiment, we investigate the influence of the training dataset size on the explanation performance benefit. For this purpose, we conduct several student-teacher analyses with \(R=25\) repetitions using the MEGAN student architecture. We vary the number of elements used for training between 100, 200, 300, 400, and 500 elements out of a total of 5000. In each iteration, the training dataset with that number of elements is randomly sampled from the entire dataset and the rest is used during testing. Figure 3 shows the results of this experiment. We visualize the performance distributions of explanation and reference students for each dataset size and provide the STS metric in each case.
The results show the greatest performance benefit for the smallest training set size of just 100 elements. Afterward, the STS value converges to 0 for 500 elements, losing statistical significance as well. We believe that this is caused by
Figure 3: Results of student-teacher analyses (\(R=25\)) for different training dataset sizes. Each column shows the performance distribution for the reference student (blue) and the explanation student (green) of the student-teacher procedure. The number above each column is the resulting STS value. (*) indicates statistical significance according to a paired T-test with \(p<5\%\)
the convergence of _both_ students to the near-perfect performance of approx. 98% accuracy. In other words: A larger train set size represents a smaller difficulty for the student models. With decreasing difficulty, the students can solve the task almost perfectly by themselves, diminishing any possible benefit of the explanations. We can therefore formulate the rule of thumb that explanations have the potential to provide the greatest benefit when tasks are _more difficult_, and cannot be so easily solved without explanations. As shown in this experiment, a reduction of the train set size sufficiently provides such an increase in difficulty. Based on this result, we conduct subsequent experiments with a training set size of 100 to observe the most pronounced effect.
**Explanation Noise Sweep.** For the majority of real-world tasks, perfect ground truth explanations are generally not available. Instead, explanations can be generated through a multitude of XAI methods that have been proposed in recent years. Since complex machine learning models and XAI methods generally only find local optima, it is reasonable to assume that generated explanations are not perfect but rather contain some amount of noise as well. The question is how such explanation noise affects the results of our proposed student-teacher analysis. In this experiment, we perform different student-teacher analyses, where in each case the explanations are overlaid with a certain ratio \(P\%\) of random noise, where \(P\in\{0,5,10,20,40,60,80,100\}\). A ratio \(P\%\) means that the explanation importance value for every element (nodes and edges) in every graph has a \(P\%\) chance of being randomly sampled instead of the ground truth value being used.
Figure 4: Results of student-teacher analyses (\(R=25\)) for explanations with different ratios of additional explanation noise. Each column shows the performance distribution for the reference student (blue) and the explanation student (green) of the student-teacher procedure. The number above each column is the resulting STS value. (*) indicates statistical significance according to a paired T-test with \(p<5\%\)
Each student-teacher analysis is conducted with a MEGAN student architecture and 100 training data points. Figure 4 shows the results of this experiment.
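The noise injection itself is a small operation; a minimal sketch, assuming the corrupted entries are resampled uniformly from \([0,1]\):

```python
import numpy as np

def add_explanation_noise(mask, p, rng=None):
    """Replace each entry of a (V x K) node or (E x K) edge importance
    array with a random value with probability p."""
    rng = np.random.default_rng() if rng is None else rng
    corrupt = rng.random(mask.shape) < p          # which entries to corrupt
    return np.where(corrupt, rng.random(mask.shape), mask)
```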
The results show that there is a statistically significant performance benefit for the explanation student until 40% explanation noise is reached. Afterward, the STS value converges towards zero and loses statistical significance as well. One important aspect to note is that even for high ratios of explanation noise the performance difference converges toward zero. This indicates that explanations consisting almost entirely of _random noise_ do not benefit the performance of a student model, but they do _not negatively influence_ it either. We believe this is the case because random explanations do not cause any learning effect for the model. In our setup of explanation-supervised training, actual explanation labels are not accessible to either student during the testing phase, instead, the models have to learn to replicate the given explanations during training through their own internal explanation-generating mechanisms. Only through these learned replications can any potential advantage or disadvantage be experienced by the models during performance evaluation. Completely random explanations cannot be learned by the models and consequently have no effect during performance evaluation.
#### Adversarial Explanation Sweep
The previous experiment indicates that purely random explanations do not negatively affect the model performance. Deterministically incorrect explanations, on
Figure 5: Results of student-teacher analyses (\(R=25\)) for datasets containing different amounts of adversarial incorrect explanations. Each column shows the performance distribution for the reference student (blue) and the explanation student (green) of the student-teacher procedure. The number above each column is the resulting STS value. (*) indicates statistical significance according to a paired T-test with \(p<5\%\)
the other hand, could be expected to have a negative influence on the performance. The dataset used here is seeded with two families of sub-graph motifs (see Figure 2): the red-based motifs are completely correlated with the two target classes and can thus be considered the perfect explanations for the classification task. The blue-based motifs, by contrast, are completely uncorrelated with the task and can thus be considered _incorrect/adversarial_ explanations w.r.t. the target labels. In this experiment, increasing amounts of these adversarial explanations are used to substitute the true explanations during the student-teacher analysis to investigate the effect of incorrect explanations on the performance difference. In each iteration, \(Q\%\) of the true explanations are replaced by adversarial explanations, where \(Q\in\{0,5,10,20,40,60,80,100\}\). Each student-teacher analysis is conducted with a MEGAN student architecture and 100 training elements.
The results in Figure 5 show that a statistically significant explanation performance benefit remains for ratios of adversarial explanations of up to 20%. For increasingly large ratios, the STS value still remains positive, although statistical significance is lost. For ratios of 80% and above, statistically significant _negative_ STS values can be observed. This implies that incorrect explanations negatively influence the performance of the explanation-aware student model.
Figure 6: Results of student-teacher analyses (\(R=25\)) for different layer structures of the MEGAN student model. The square brackets indicate the number of hidden units in each layer of the main convolutional part of the network. The normal brackets beneath indicate the number of hidden units in the fully connected layers in the tail-end of the network. Each column shows the performance distribution for the reference student (blue) and the explanation student (green) of the student-teacher procedure. The number above each column is the resulting STS value. (*) indicates statistical significance according to a paired T-test with \(p<5\%\)
#### Student Network Layer Structure
In this experiment, we investigate the influence of the concrete student network layout on the explanation performance benefit. For this purpose, we conduct several student-teacher analyses with \(R=25\) repetitions using the MEGAN student architecture. We vary the number of convolutional and fully-connected layers, as well as the number of hidden units in these layers. Starting with a simple two-layer 3-unit network layout, the number of model parameters, and thus the model's complexity, is gradually increased until the most complex case of a three-layer 20-unit network is reached. Figure 6 shows the results of this experiment. We visualize the performance distributions of explanation and reference students for each network layout and provide the STS metric in each case.
The results show that the students' prediction performance generally improves for more complex models. However, this is true for the explanation as well as the reference student. While there still is a statistically significant effect for the most complex network layout, it is very marginal because the reference student achieves almost perfect accuracy in these cases as well. On the other hand, the most simple student network layout shows the largest performance benefit. However, for the simple network layouts, the standard deviation of the performance over the various repetitions is greatly increased for reference and explanation students, but seemingly more so for the explanation student. We generally conclude that both extreme cases of simplistic and complex student network architectures have disadvantages w.r.t. revealing a possible explanation performance benefit. In the end, the best choice is a trade-off between variance in performance and overall capability.
#### Node versus Edge Explanations
To determine the relative impact of the node and edge explanations individually, we conduct a student-teacher analysis with \(R=25\) repetitions. We use a simple three-layer MEGAN student, where each iteration uses 100 randomly chosen training samples. We investigate three cases: as a baseline case, the explanation student uses ground truth node and edge explanations during explanation-supervised training. In another case, the explanation student is only supplied with the node attributional explanations. In the last case, only the edge attributional explanations are used. This is achieved by setting the corresponding weighting factors to 0 during training. Table 2 shows the results of this experiment. We report the final STS value, as well as the node and edge AUC metrics, which indicate how well the explanations of the corresponding models match the ground truth explanations of the test set.
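Schematically, disabling one explanation channel amounts to zeroing its weighting factor in the combined training loss (the function names below are our own placeholders, not MEGAN's actual API):

```python
def training_loss(model, batch, w_node=1.0, w_edge=1.0):
    """Schematic combined loss; `prediction_loss` and `explanation_loss`
    are hypothetical stand-ins for the concrete loss functions."""
    y_hat, v_hat, e_hat = model(batch)   # prediction + both explanations
    loss = prediction_loss(y_hat, batch.y)
    loss = loss + w_node * explanation_loss(v_hat, batch.node_importances)
    loss = loss + w_edge * explanation_loss(e_hat, batch.edge_importances)
    return loss  # w_node=0 or w_edge=0 switches that supervision off
```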
The results show that all three cases achieve statistically significant STS values indicating a performance benefit of the given explanations. Furthermore, in all three cases, the explanations learned by the explanation student show high similarity (AUC \(>0.9\)) to the ground truth explanations for node _as well as_ edge attributions. This implies that the student model is able to infer the corresponding
explanation edges for the ground truth explanatory motifs, even if it is only trained on the nodes, and vice versa. We believe the extent of this property is a consequence of the used MEGAN student architecture. The MEGAN network architecture implements an explicit architectural co-dependency of node and edge explanations to promote the creation of connected explanatory sub-graphs. These results imply that it may be possible to also apply the student-teacher analysis in situations where only node or edge explanations are available.
### Real-World Datasets
In addition to the experiments on the synthetic dataset, we aim to provide a validation of the student-teacher analysis' effectiveness on real-world datasets as well. For this purpose, we choose one graph classification and one graph regression dataset from the application domain of chemistry. We show how the student-teacher analysis can be used to quantify the _usefulness_ of various kinds of explanations for these datasets.
#### Mutagenicity - Graph Classification
To demonstrate the student-teacher analysis of GNN-generated explanations on a real-world graph classification task, we choose the Mutagenicity dataset [9] as the starting point. By its nature of being real-world data, this dataset does not have ground truth explanations as it is, making it hard to compare GNN-generated explanations to the ground truth. However, the dataset can be transformed into a dataset with ground truth explanatory subgraph motifs. It is hypothesized that the nitro group (NO\({}_{2}\)) is one of the main reasons for the property of mutagenicity [15, 17]. Following the procedure previously proposed by Tan _et al._[25], we extract a subset of elements containing all molecules which are labeled as mutagenic and contain the benzene-NO\({}_{2}\) group as well as all the elements that are labeled as non-mutagenic and do not contain that group. Consequently, for the resulting mutagenicity subset, the benzene-NO\({}_{2}\) group can be considered as the definitive ground truth explanation for the mutagenic class label. We call the resulting dataset _MutagenicityExp_. It
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline Explanations & \(STS_{25}\uparrow\) & \multicolumn{2}{c}{Node AUC \(\uparrow\)} & \multicolumn{2}{c}{Edge AUC \(\uparrow\)} \\ & & Ref & Exp & Ref & Exp \\ \hline Both & 0.12\({}^{(*)}\) & 0.62\({}_{\pm 0.14}\) & 0.95\({}_{\pm 0.03}\) & 0.62\({}_{\pm 0.16}\) & 0.94\({}_{\pm 0.03}\) \\ Nodes & 0.12\({}^{(*)}\) & 0.65\({}_{\pm 0.13}\) & 0.93\({}_{\pm 0.03}\) & 0.65\({}_{\pm 0.12}\) & 0.92\({}_{\pm 0.04}\) \\ Edges & 0.10\({}^{(*)}\) & 0.67\({}_{\pm 0.15}\) & 0.93\({}_{\pm 0.03}\) & 0.67\({}_{\pm 0.12}\) & 0.94\({}_{\pm 0.03}\) \\ \hline \hline \end{tabular} \({}^{(*)}\) Statistically significant according to a paired T-test with \(p<5\%\)
\end{table}
Table 2: Results for 25 repetitions of the student-teacher analysis conducted with either only node explanations, only edge explanations, or both.
It consists of roughly 3500 molecular graphs, of which about 700 are labeled as mutagenic. Furthermore, we designate 500 random elements as the test set, sampled to achieve a balanced label distribution.
Based on this dataset, we train GNN models to solve the classification problem. Additionally, we use multiple different XAI methods to generate attributional explanations for the predictions of those GNNs on the previously mentioned test set of 500 elements. These explanations are then subjected to the student-teacher analysis, along with some baseline explanations. The results of an analysis with 25 repetitions can be found in Table 3. The hyperparameters of the student-teacher analysis were chosen through a brief manual search. We use the same basic three-layer MEGAN student architecture as in the synthetic experiments. In each repetition, 10 random elements are used to train the students, and the remainder is used to assess the final test performance. Each training process employs a batch size of 10, 150 epochs, and a 0.01 learning rate. The student-teacher analysis is performed solely on the previously mentioned 500-element test set, which remained unseen to all of the trained GNN models.
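Operationally, each repetition yields one paired pair of test scores: one for the reference student (trained without explanations) and one for the explanation student. A minimal sketch of how the STS value and its significance could be computed — assuming STS is the median paired improvement, with significance assessed by a paired T-test — is given below.

```python
import numpy as np
from scipy.stats import ttest_rel

def sts_metric(ref_scores, exp_scores, p_threshold=0.05):
    """ref_scores/exp_scores: per-repetition test accuracies of the
    reference and explanation students (length R, paired by repetition)."""
    ref = np.asarray(ref_scores)
    exp = np.asarray(exp_scores)
    sts = float(np.median(exp - ref))   # median paired improvement
    _, p_value = ttest_rel(exp, ref)    # paired T-test over repetitions
    return sts, bool(p_value < p_threshold)

# e.g. for R = 25 repetitions:
# sts_value, significant = sts_metric(ref_scores, exp_scores)
```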
As expected, the results show that the reference random explanations do not produce a statistically significant STS result. These explanations are included as a baseline sanity check, because previous experiments on the synthetic dataset imply that purely random explanation noise should not have any statistically significant effect on the performance in either direction. The benzene-NO\({}_{2}\) ground truth explanations, on the other hand, show the largest statistically significant STS value, a median 13% accuracy improvement, as well as the largest explanation accuracy of the explanation student models.
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline Explanations by & STS\({}_{25}\uparrow\) & \multicolumn{2}{c}{Node AUC \(\uparrow\)} & \multicolumn{2}{c}{Edge AUC \(\uparrow\)} \\ & & Ref & Exp & Ref & Exp \\ \hline Ground Truth & \(\mathbf{0.13^{(*)}}\) & \(0.42\)\(\pm\)\(0.05\) & \(\mathbf{0.97}\)\(\pm\)\(0.05\) & \(0.41\)\(\pm\)\(0.05\) & \(\mathbf{0.96}\)\(\pm\)\(0.04\) \\ GNNExplainer & \(0.09^{(*)}\) & \(0.50\)\(\pm\)\(0.09\) & \(0.69\)\(\pm\)\(0.05\) & \(0.50\)\(\pm\)\(0.11\) & \(0.71\)\(\pm\)\(0.04\) \\ Gradient & \(0.07^{(*)}\) & \(0.54\)\(\pm\)\(0.18\) & \(0.84\)\(\pm\)\(0.06\) & \(0.46\)\(\pm\)\(0.17\) & \(0.67\)\(\pm\)\(0.10\) \\ MEGAN\({}_{1.0}^{2}\) & \(0.12^{(*)}\) & \(0.55\)\(\pm\)\(0.15\) & \(0.91\)\(\pm\)\(0.01\) & \(0.55\)\(\pm\)\(0.14\) & \(0.92\)\(\pm\)\(0.02\) \\ Random & \(0.01\) & \(0.50\)\(\pm\)\(0.04\) & \(0.50\)\(\pm\)\(0.03\) & \(0.50\)\(\pm\)\(0.04\) & \(0.50\)\(\pm\)\(0.04\) \\ \hline \hline \end{tabular} \({}^{(*)}\) Statistically significant according to a paired T-test with \(p<5\%\)
\end{table}
Table 3: Results for 25 repetitions of the student-teacher analysis for different explanations on the MutagenicityExp dataset. We mark the best result in bold and underline the second best.
The GNNExplainer and Gradient explanations also show statistically significant STS values of 9% and 7% median accuracy improvement, respectively. The MEGAN-generated explanations show the overall second-best results, with an STS value just slightly below that of the ground truth.
We hypothesize that a high explanation accuracy is a necessary but not sufficient condition for high STS results. A higher learned explanation accuracy indicates that the explanations are generally based on a more consistent set of underlying rules and can consequently be replicated more easily by the student network, which is the basic prerequisite for showing any kind of effect during the student evaluation phase. The condition is not sufficient because, as the previous adversarial explanation experiment shows, explanations can be highly deterministic yet conceptually incorrect and thus harmful to model performance.
#### 4.2.2 AqSolDB - Graph Regression
The AqSolDB [24] dataset consists of roughly 10000 molecular graphs annotated with experimentally determined logS values for their corresponding solubility in water. Of these, we designate 1000 random elements as the test set.
For the concept of water solubility, no definitive attributional explanations exist. However, there is some approximate intuition as to which molecular structures should result in higher or lower solubility values: in a simplified manner, one can say that non-polar substructures such as carbon rings and long carbon chains generally result in lower solubility values, while polar structures such as certain nitrogen and oxygen groups are associated with higher solubility values.
Based on this dataset, we train a large MEGAN model on the training split to regress the water solubility and then generate dual-channel attributional explanations for the previously mentioned 1000-element test split. For this experiment, we only use a MEGAN model, as it is the only XAI method able to create dual-channel explanations for single-value graph regression tasks [26]. These dual-channel explanations take the previously mentioned _polarity of evidence_ into account, where some substructures have an opposing influence on the solubility value: the first explanation channel contains all negatively influencing sub-graph motifs, while the second channel contains the positively influencing motifs. In addition to the MEGAN-generated explanations, we provide two baseline explanation types. Random explanations consist of randomly generated binary node and edge masks of the same shape. Trivial explanations represent the simplest implementation of the previously introduced human intuition about water solubility: the first channel marks all carbon atoms and the second channel marks all oxygen and nitrogen atoms (see the sketch below).
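For illustration, such trivial explanation masks can be generated in a few lines. How the corresponding edge masks are derived is our own assumption here (an edge is marked in a channel when both of its endpoints are):

```python
import numpy as np

def trivial_explanations(atom_symbols, edge_list):
    """Trivial dual-channel node/edge masks for water solubility:
    channel 0 (lower solubility) marks carbon atoms, channel 1
    (higher solubility) marks oxygen and nitrogen atoms."""
    node_mask = np.zeros((len(atom_symbols), 2))
    node_mask[:, 0] = [s == "C" for s in atom_symbols]
    node_mask[:, 1] = [s in ("O", "N") for s in atom_symbols]
    # assumption: an edge carries a channel iff both endpoints carry it
    edge_mask = np.array([node_mask[i] * node_mask[j] for i, j in edge_list])
    return node_mask, edge_mask

# e.g. ethanol (C-C-O): carbons land in channel 0, the oxygen in channel 1
nodes, edges = trivial_explanations(["C", "C", "O"], [(0, 1), (1, 2)])
```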
The hyperparameters of the student-teacher analysis were chosen through a brief manual search. We use the same basic three-layer MEGAN student architecture as in the synthetic experiments. In each repetition, 300 random elements are used to train the students, and the remainder is used to assess the final test performance. Each training process employs a batch size of 32, 150 epochs, and a 0.01 learning rate. The student-teacher analysis is performed solely on the previously mentioned 1000-element test set, which remained unseen to the predictive model during training.
The results show that neither the random nor the trivial explanations result in a significant performance improvement. The MEGAN-generated explanations, on the other hand, result in a significant improvement of a median 0.23 in the final prediction MSE. This implies that the MEGAN-generated explanations do in fact encode additional task-related information that goes beyond the most trivial intuition about the task. However, a possible pitfall with respect to this conclusion needs to be pointed out: the MEGAN-generated explanations are evaluated by a MEGAN-based student architecture, so the effect may be so strong because these explanations are especially well suited to the kind of architecture that generated them. We believe that the previous experiments involving architecture-independent ground truth explanations weaken this argument to an extent. Still, it will be prudent to compare these results with explanations of a different origin in the future, such as the explanations of human experts.
## 5 Limitations
We propose the student-teacher analysis as a means to measure the content of _useful_ task-related information contained within a set of attributional graph explanations. This methodology is inspired by human simulatability studies but with the decisive advantages of being vastly more time- and cost-efficient as well as being more reproducible. However, there are currently also some limitations to the applicability of this approach. Firstly, the approach is currently limited
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline Model & \(STS_{25}\uparrow\) & \multicolumn{2}{c}{Node AUC \(\uparrow\)} & \multicolumn{2}{c}{Edge AUC \(\uparrow\)} \\ & & Ref & Exp & Ref & Exp \\ \hline Random & 0.00 & 0.50\({}_{\pm 0.04}\) & 0.50\({}_{\pm 0.03}\) & 0.50\({}_{\pm 0.04}\) & 0.50\({}_{\pm 0.04}\) \\ Trivial & 0.03 & 0.40\({}_{\pm 0.05}\) & **0.99\({}_{\pm 0.05}\)** & 0.42\({}_{\pm 0.05}\) & **0.99\({}_{\pm 0.04}\)** \\ MEGAN\({}_{1.0}^{2}\) & **0.23\({}^{(*)}\)** & 0.55\({}_{\pm 0.15}\) & 0.90\({}_{\pm 0.01}\) & 0.55\({}_{\pm 0.14}\) & 0.89\({}_{\pm 0.02}\) \\ \hline \hline \end{tabular} \({}^{(*)}\) Statistically significant according to a paired T-test with \(p<5\%\)
\end{table}
Table 4: Results for 25 repetitions of the student-teacher analysis for different explanations on the AqSolDB dataset. We highlight the best result in bold and underline the second best.
to attributional explanations, which assign an importance value between 0 and 1 to each element. These kinds of explanations have been found to have issues [12, 1], and recently many different kinds of explanations have been proposed, such as _counterfactuals_[20], _concept-based_ explanations [19], and _prototype-based_ explanations [23].
Another limitation is that the student-teacher analysis itself depends on many parameters. As we show in previous sections, the size of the training dataset and the specific student architecture affect how pronounced the effect is. For these reasons, the proposed STS metric cannot be used as an absolute measure of quality, such as accuracy. Rather, it can be used to relatively _compare_ different sets of explanations, under the condition that all experiments are conducted with the same parameters. We propose certain rules of thumb for the selection of these parameters; however, it may still be necessary to conduct a cursory parameter search for each specific application. Despite these limitations, we believe that artificial simulatability studies, as proposed in this work, are an important step toward better practices for the evaluation of explainable AI methods. The currently most widespread metric of explanation quality is explanation _faithfulness_, which only measures how decisive an explanation is for a model's prediction. We argue that the concept of artificial simulatability is a first step towards measuring how intrinsically _useful_ explanations can be for the _communication_ of additional task-related information.
## 6 Conclusion
In this work, we extend the concept of artificial simulatability studies to the application domain of graph classification and regression tasks. We propose the student-teacher analysis and the _student-teacher simulatability_ (STS) metric to quantify the content of intrinsically _useful_ task-related information for a given set of node and edge attributional explanations. We conduct an ablation study on a synthetic dataset to investigate the conditions under which an explanation benefit can be observed most clearly and propose several rules of thumb for an initial choice of experimental parameters: the analysis requires a sufficient number of repetitions for statistical significance, a small number of training elements, and a lightweight layer structure for the student model. Furthermore, we show evidence that the analysis method is robust towards small amounts of explanation noise and adversarial explanations: random explanation noise merely suppresses any explanation benefit, while deterministically incorrect explanations cause significant performance degradation. This indicates that the method can be used not only to identify good explanations but also to detect actively harmful ones. Finally, we validate the applicability of our proposed analysis on several real-world datasets for molecular classification and regression.
We believe that artificial simulatability studies can provide a valuable additional tool for the evaluation of graph explanations. The student-teacher analysis measures the _usefulness_ of explanations in communicating task-related knowledge, which can be seen as a complementary dimension to the current widespread practice of measuring explanation faithfulness.
For future work, it will be interesting to extend this process to other kinds of graph explanations that have recently emerged, such as concept-based or prototype-based explanations. Since the method measures the content of task-related information within explanations, another application may lie in educational science: the method could be used to assess explanation annotations created by human students, providing quantitative feedback on their understanding of a given graph-related problem. Another line of future work is demonstrated by Fernandes _et al._[7], who use the differentiable nature of Pruthi _et al._'s [21] original artificial simulatability procedure in a meta-optimization process that attempts to optimize an explanation generator for this property of explanation usefulness.
## 7 Reproducibility Statement
We make our experimental code publicly available at [https://github.com/aimat-lab/gnn_student_teacher](https://github.com/aimat-lab/gnn_student_teacher). The code is implemented in the Python 3.9 programming language. Our neural networks are built with the KGCNN library by Reiser _et al._[22], which provides a framework for graph neural network implementations with TensorFlow and Keras. We make all data used in our experiments publicly available on a file share provider [https://bwsyncandshare.kit.edu/s/E3MynrfQsLAHzJC](https://bwsyncandshare.kit.edu/s/E3MynrfQsLAHzJC). The datasets can be loaded, processed, and visualized with the visual graph datasets package [https://github.com/aimat-lab/visual_graph_datasets](https://github.com/aimat-lab/visual_graph_datasets). All experiments were performed on a system with the following specifications: Ubuntu 22.04 operating system, Ryzen 9 5900 processor, RTX 2060 graphics card, and 80 GB of memory. We have aimed to package the various experiments as independent modules, and our code repository contains a brief explanation of how they can be executed. |
2310.00664 | Twin Neural Network Improved k-Nearest Neighbor Regression | Twin neural network regression is trained to predict differences between
regression targets rather than the targets themselves. A solution to the
original regression problem can be obtained by ensembling predicted differences
between the targets of an unknown data point and multiple known anchor data
points. Choosing the anchors to be the nearest neighbors of the unknown data
point leads to a neural network-based improvement of k-nearest neighbor
regression. This algorithm is shown to outperform both neural networks and
k-nearest neighbor regression on small to medium-sized data sets. | Sebastian J. Wetzel | 2023-10-01T13:20:49Z | http://arxiv.org/abs/2310.00664v1 | # Twin Neural Network Improved k-Nearest Neighbor Regression
###### Abstract
Twin neural network regression is trained to predict differences between regression targets rather than the targets themselves. A solution to the original regression problem can be obtained by ensembling predicted differences between the targets of an unknown data point and multiple known anchor data points. Choosing the anchors to be the nearest neighbors of the unknown data point leads to a neural network-based improvement of k-nearest neighbor regression. This algorithm is shown to outperform both neural networks and k-nearest neighbor regression on small to medium-sized data sets.
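The inference scheme described above can be summarized in a short sketch. Here `twin_net` is a placeholder for a trained twin network that predicts the target difference \(y(x_{1})-y(x_{2})\) for a pair of inputs; the function name and interface are illustrative assumptions, not the reference implementation.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def tnn_knn_predict(twin_net, X_train, y_train, x, k=8):
    """Predict y(x) by ensembling predicted differences between x and
    its k nearest training anchors (sketch under stated assumptions)."""
    nn = NearestNeighbors(n_neighbors=k).fit(X_train)
    _, idx = nn.kneighbors(x.reshape(1, -1))
    anchors, targets = X_train[idx[0]], y_train[idx[0]]
    # y(x) is approximated as y(anchor) + predicted difference y(x) - y(anchor)
    diffs = np.array([twin_net(x, a) for a in anchors])
    return float(np.mean(targets + diffs))
```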
_Keywords_: Artificial Neural Networks, k-Nearest Neighbors, Regression |
2306.02795 | An Efficient Compact Blazed Grating Antenna for Optical Phased Arrays | Phased arrays are vital in communication systems and have received
significant interest in the field of optoelectronics and photonics, enabling a
wide range of applications such as LiDAR, holography, wireless communication,
etc. In this work, we present a blazed grating antenna that is optimized to
have upward radiation efficiency as high as 80% with a compact footprint of 3.5
{\mu}m \times 2 {\mu}m at an operational wavelength of 1.55 {\mu}m. Our
numerical investigations demonstrate that this antenna in a 64 \times 64 phased
array configuration is capable of producing desired far-field radiation
patterns. Additionally, our antenna possesses a low side lobe level of -9.7 dB
and a negligible reflection efficiency of under 1%, making it an attractive
candidate for integrated optical phased arrays. | Henna Farheen, Suraj Joshi, J. Christoph Scheytt, Viktor Myroshnychenko, Jens Förstner | 2023-06-05T11:41:17Z | http://arxiv.org/abs/2306.02795v1 | # An Efficient Compact Blazed Grating Antenna for Optical Phased Arrays
###### Abstract
Phased arrays are vital in communication systems and have received significant interest in the field of optoelectronics and photonics, enabling a wide range of applications such as LiDAR, holography, wireless communication, etc. In this work, we present a blazed grating antenna that is optimized to have upward radiation efficiency as high as 80% with a compact footprint of 3.5 \(\upmu\)m\(\times\)2 \(\upmu\)m at an operational wavelength of 1.55 \(\upmu\)m. Our numerical investigations demonstrate that this antenna in a 64\(\times\)64 phased array configuration is capable of producing desired far-field radiation patterns. Additionally, our antenna possesses a low side lobe level of -9.7 dB and a negligible reflection efficiency of under 1%, making it an attractive candidate for integrated optical phased arrays.
_Keywords_: Antennas, phased arrays, directivity

Integrated optical antennas in the transmitting mode are devices that couple localized power into freely propagating radiation in their surrounding medium, where the radiated electromagnetic waves interfere in the far-field [1]. These antennas pave the way to numerous integrated photonic systems such as optical phased arrays (OPAs) [2], wireless communication [3], coherent imagers [4], etc. These systems are commonly realized using silicon photonic integration, which is highly compatible with the complementary metal-oxide-semiconductor (CMOS) process, thus furnishing high-yield, low-cost commercial systems [5]. For OPAs in particular, this has proven to be extremely beneficial for incorporating the large number of antennas needed to generate the desired far-field radiation patterns [6].
Conventionally, the performance of an OPA is strongly affected by the number of elements and the spacing between them, which control the angular resolution, beamwidth, and the periodicity of the grating lobes. Many of these arrays are implemented as 1D-OPAs, as they can tightly assemble long and narrow radiators [7, 8, 9]. However, such a configuration can perform beam steering solely in one direction unless used with a highly precise tunable laser [10]. On the other hand, 2D-OPAs are capable of beam steering in two directions via phase tuning at the operational wavelength. Nonetheless, they come at the cost of a limited field of view (FOV), i.e., the grating-lobe-free region for beam steering [6], due to the large footprint of the radiating elements. Having an inter-element spacing greater than half the optical wavelength
results in undesirable grating lobes in the far-field radiation patterns. However, it has been demonstrated that the issue of limited beam steering can be partially mitigated using a sparse array configuration, integrating a large number of elements in an N\(\times\)N grid in a non-uniform fashion with minimal inter-element spacing, taking into account the implications of waveguide routing [10]. An alternative approach is to employ circularly symmetric array configurations, exploiting the properties of a zero-order Bessel-like intensity distribution in the far-field which implicates a visible region with no grating lobes [11].
Together, challenges like large FOV, high radiation efficiency, and reduced form factor with compact unit cells continue to create a surge for devising new radiator designs that can fulfill these requirements [12]. Recently, it has been shown that dielectric horn antennas possess highly directive fields [13, 14, 15], while blazed gratings are capable of near vertical high radiation efficiency [16, 17, 18, 19]. In this letter, we report on a compact blazed grating horn antenna which is optimized for a wavelength of 1.55 um to produce a high upward radiation efficiency with a broadside emission angle constraint. This is done by performing full-wave numerical simulations utilizing the finite element method (FEM) in conjunction with a hybrid optimization routine that includes particle swarm optimization followed by the trust region method. The optimization goal function, the upward efficiency (\(\eta_{up}\)), can be defined as
\[\eta_{up}=\frac{\int_{0}^{2\pi}\int_{0}^{\pi/2}P_{rad}(\theta,\varphi)d\theta d \varphi}{P_{in}}, \tag{1}\]
where \(P_{rad}\) is the power radiated into the defined computation domain, \(P_{in}\) is the input optical power, \(\theta\) is the polar angle, and \(\varphi\) is the azimuthal angle.
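For reference, Eq. (1) can be evaluated numerically from a sampled far-field. The sketch below assumes that \(P_{rad}(\theta,\varphi)\) is a power density per unit solid angle, so the \(\sin\theta\) Jacobian is added explicitly — an assumption about how the quantity is exported from the FEM solver.

```python
import numpy as np

def upward_efficiency(P_rad, theta, phi, P_in):
    """Numerical evaluation of Eq. (1) on a (theta, phi) grid.

    P_rad[i, j]: radiated power density at (theta[i], phi[j]),
    with theta in [0, pi/2] and phi in [0, 2*pi].
    """
    integrand = P_rad * np.sin(theta)[:, None]  # solid-angle Jacobian
    inner = np.trapz(integrand, phi, axis=1)    # integrate over phi
    return np.trapz(inner, theta) / P_in        # then over theta
```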
Our proposed antenna consists of a waveguide-fed silicon horn antenna (\(n_{Si}=3.4\)) comprising four gratings in a silicon dioxide (\(n_{SiO_{2}}=1.44\)) environment. The first grating is a partially etched trapezoidal grating with a U-shape, while the other three gratings are L-shaped gratings. Fig. 1a illustrates the schematic of the antenna with an orientation along the \(xy\)-plane. Along the \(x\)-axis, which is the direction of propagation, the different lengths are classified as grating lengths (GL\({}_{x}\)), offset lengths (OL\({}_{x}\)), segment lengths (SL\({}_{x}\)), and the flare length (FL), which is the initial taper length. These parameters, along with the horn width (HW), result in thirteen optimization parameters whose optimal values are shown in Fig. 1b. The values in blue highlight the fixed dimensions of the structure, namely the width of the feeding waveguide (w), the height of the antenna (\(h_{1}=220\,\)nm), and the height of the partial etch (\(h_{2}=110\,\)nm) in the U-shaped grating. The optimized structure has a compact footprint of 3.5 \(\upmu\)m\(\times\)2 \(\upmu\)m with a linear directivity of 22, as shown in the calculated linear directive gain distribution in Fig. 1c, centered at \(\theta=8^{\circ}\). The antenna demonstrates a low side lobe level of -9.7 dB. Furthermore, Fig. 1d shows the near-field power distribution in the \(xz\)-plane at \(y=0\). Along the length of the radiator, the input power constructively interferes in the upper hemisphere due to the multi-layer up-down asymmetries, while destructive interference dominates in the lower hemisphere.

Figure 1: (a) Schematic representation of the optimized horn antenna, highlighting the parameters used in the optimization. (b) Values of the design parameters obtained for the optimized antenna. (c) Calculated angular linear directive gain distribution of the optimized antenna exhibiting a directivity of \(D=22\) at \(\theta=8^{\circ}\). (d) Calculated near-field distribution of the power flow for the optimized antenna in the \(xz\)-plane at \(y=0\).
To get an insight into the broadband behavior of the antenna, the upward and downward radiation efficiencies as functions of the wavelength are shown in Fig. 2; the blue and red curves correspond to the upward and downward radiation efficiencies, respectively. As the antenna is optimized for 1.55 \(\upmu\)m, it performs best at this wavelength and becomes increasingly sensitive at longer wavelengths. The structure maintains a low downward efficiency throughout the operating range. The upward radiation is higher for shorter wavelengths and drops drastically for longer wavelengths, which results in increased reflection back into the waveguide while the downward radiation efficiency remains consistently low. This implies that the antenna design works well in breaking the up-down symmetry, thus preventing stronger downward radiation. To lower the reflection efficiency over a wide range of wavelengths, a sub-wavelength grating (SWG) design approach could also be incorporated, as demonstrated in Ref. [18]. At 1.55 \(\upmu\)m, the antenna exhibits a high upward radiation efficiency of approximately 80%, partly influenced by the partial etch in the U-shaped grating, which helps create a phase difference between the upward and downward propagating radiation [6]. The aperiodic L-shaped trapezoidal diffraction gratings further strengthen the up-down asymmetry along the entire length of the structure. This reinforces the constructive interference in the upward direction and the destructive interference in the downward direction, and consequently reduces the in-plane propagation. Furthermore, almost no power is reflected back into the feeding waveguide, making the antenna highly efficient.
In the next step, we employ our optimized antenna in a 2D phased array configuration, for which we consider a 9 \(\upmu\)m\(\times\)9 \(\upmu\)m unit cell. As demonstrated in Ref. [6], such a unit cell accommodates a directional coupler and a phase shifter for a \(64\times 64\) array configuration. The optimized antenna fits perfectly into this unit cell and also offers a much higher upward radiation efficiency than the 51% of the radiator described in the reference. The field of such an array can be defined as
\[\mathbf{E}_{\mathrm{array}}(\theta,\varphi)=\mathbf{E}_{\mathrm{antenna}}( \theta,\varphi)\;\mathrm{AF}(\theta,\varphi), \tag{2}\]
where \(\mathbf{E}_{\mathrm{array}}(\theta,\varphi)\) is the far-field of the OPA, \(\mathbf{E}_{\mathrm{antenna}}(\theta,\varphi)\) is the far-field of a single antenna, and \(\mathrm{AF}(\theta,\varphi)\) is the scalar array factor. Such arrays can potentially be used in imaging applications, where complex far-field radiation patterns need to be constructed. Fig. 3 illustrates the results of pattern synthesis accomplished by utilizing the optimized antenna in a \(64\times 64\) array. The desired near-field phase distribution for the elements of the uniformly excited phased array is derived using the Gerchberg-Saxton algorithm [20]. Three different images are used for this purpose, namely the initials of Paderborn University, “UPB”, the initials of our department Theoretical Electrical Engineering, “TET”, and the Paderborn University logo; the images used for the pattern generation are shown next to their respective far-fields. Moreover, Fig. 3 presents 16 grating lobes in the far-field radiation patterns in each direction due to the large inter-element spacing of 9 \(\upmu\)m (\(\sim\)5.8\(\lambda\)). The number of interference orders \(m\) can be estimated by
\[\mid m\lambda_{n}/d\mid<2, \tag{3}\]
Figure 2: Calculated optical radiation efficiencies of the optimized antenna as functions of the wavelength.
where \(m\) is the largest value that satisfies Eq. 3, \(\lambda_{n}\) is the wavelength in the medium, and \(d\) is the size of the unit cell. The far-field patterns reveal a brighter region along the horizontal direction, which can be attributed to the large half power beamwidth (HPBW) of \(56^{\circ}\) of each antenna. This large beamwidth is particularly desirable for array configurations whose beam steering range is limited only by the FOV of the radiating element [11].
The large unit cell size of 9 \(\upmu\)m also constrains the FOV of the array, which in turn limits the angular range for beam steering. In addition, we illustrate the possibility of steering the beam, for which we use an 8\(\times\)8 OPA that provides a better visualization of the steering effect, as seen in Fig. 4 (a numerical sketch of this array-factor steering is given after this paragraph). The far-field patterns are limited to an angular range of \(\theta=20^{\circ}\) to highlight the shifting positions of the main beam. Fig. 4a shows the phase distribution and far-field pattern of the array when no phase input is applied to the OPA; this serves as the reference for observing the beam steering effect. Alternating the phase between zero and \(\pi\) along the rows or columns, as shown in Figs. 4b and c, steers the beam along the vertical or horizontal direction, respectively. Similarly, alternating the phase inputs between zero and \(\pi\) along both the rows and columns results in the beam being steered diagonally, as shown in Fig. 4d.
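The sketch below computes \(|\mathrm{AF}|^{2}\) for an \(N\times N\) uniformly excited array with a per-element phase mask, following Eq. (2); the grid pitch and angular window are taken from the text, while the direction-cosine sampling and function names are our own choices.

```python
import numpy as np

def array_factor(phase, d=9e-6, wavelength=1.55e-6, n_angles=256, max_deg=20):
    """|AF(u, v)|^2 for an N x N OPA with unit amplitudes and a given
    per-element phase mask (sketch; the element pattern is not included)."""
    n = phase.shape[0]
    k = 2 * np.pi / wavelength
    u = np.linspace(-np.sin(np.radians(max_deg)),
                    np.sin(np.radians(max_deg)), n_angles)
    m = np.arange(n)
    steer = np.exp(1j * k * d * np.outer(u, m))      # (n_angles, n)
    w = np.exp(1j * phase)                           # element excitations
    af = np.einsum("um,vn,mn->uv", steer, steer, w)  # sum over all elements
    return np.abs(af) ** 2

# alternating 0/pi along the rows steers the beam vertically (cf. Fig. 4b)
phase = np.zeros((8, 8))
phase[::2, :] = np.pi
pattern = array_factor(phase)
```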
Finally, Table 1 compares our results with those of other antennas used specifically in optical phased array configurations. Our proposed antenna has the smallest footprint and the highest efficiency, with a relatively low angle of emission. We accomplish an overall size reduction of 28% and 49% compared to the footprints reported in Refs. [6] and [16], respectively, and an efficiency improvement of 9% and 45% compared to the radiators from Refs. [16] and [2], respectively. An additional useful metric for comparison is the power reflected back into the waveguide segment, which can be measured using the \(S_{11}\) parameter. At 1.55 \(\upmu\)m, Refs. [16] and [6] report \(S_{11}\) values of approximately -20 dB (1%) and -13 dB (5%). Our proposed structure exhibits an \(S_{11}\) of -26 dB (0.25%), which is a quarter of the acceptable reflected power for many phased array systems [2]. Therefore, we envision that any phased array system can operate undisturbed with such a radiating element.
In conclusion, we present the design and optimization of a compact horn-shaped blazed grating antenna that utilizes a heterogeneous grating configuration consisting of a U-shaped grating and L-shaped gratings. The use of a FEM solver in conjunction with an optimization routine yields a structure with 80% upward radiation efficiency, an appreciable \(18^{\circ}\times 56^{\circ}\) full width at half maximum, and negligible reflected power. The proposed antenna is suitable for standard OPAs utilized for pattern synthesis and beamforming, including architectures that increase the grating-lobe-free beam steering range by exploiting a large HPBW. Overall, the given antenna design opens up the possibility of fabricating highly efficient phased array systems with desirable radiation characteristics.
\begin{table}
\begin{tabular}{l l l l l} \hline Design & Footprint & \(\eta_{up}\) & \(\theta\) & \(\sim\) S\({}_{11}\) \\ \hline Ref. [6] & 3.5 \(\upmu\)m\(\times\)2.8 \(\upmu\)m & 51\% & \(15^{\circ}\) & -13 dB \\ Ref. [10] & 5 \(\upmu\)m\(\times\)2 \(\upmu\)m & 51\% & \(7.4^{\circ}\) & N/A \\ Ref. [16] & 5.5 \(\upmu\)m\(\times\)2.5 \(\upmu\)m & 71\% & \(6^{\circ}\) & -20 dB \\ Ref. [2] & 5.1 \(\upmu\)m\(\times\)2 \(\upmu\)m & 35\% & \(9^{\circ}\) & N/A \\ This work & 3.5 \(\upmu\)m\(\times\)2 \(\upmu\)m & 80\% & \(8^{\circ}\) & -26 dB \\ \hline \end{tabular}
\end{table}
Table 1: Comparison of characteristics of different antennas.
Figure 3: Calculated far-field radiation patterns for a \(64\times 64\) phased array configuration with the optimized antenna to generate (a) the initials “UPB”, (b) the initials “TET”, and (c) the logo of Paderborn University. The original images used in the process are shown next to the respective far-field patterns.
To our knowledge, the current work presents the radiating element with the highest upward radiation efficiency and the smallest footprint applicable for direct use in 2D-OPAs.
## Acknowledgments
The work was funded by the Ministry of Culture and Science of the state of North Rhine-Westphalia (PhoQC) and Deutsche Forschungsgemeinschaft via TRR142 (C05 & B06). The authors acknowledge the computing time support provided by the Paderborn Center for Parallel Computing (PC\({}^{2}\)).
## Data Availability
Data underlying the results presented in this paper are publicly available. ([https://doi.org/10.5281/zenodo.7966024](https://doi.org/10.5281/zenodo.7966024))
|
2301.09667 | Improving Performance of Object Detection using the Mechanisms of Visual
Recognition in Humans | Object recognition systems are usually trained and evaluated on high
resolution images. However, in real world applications, it is common that the
images have low resolutions or have small sizes. In this study, we first track
the performance of the state-of-the-art deep object recognition network,
Faster-RCNN, as a function of image resolution. The results reveal negative
effects of low resolution images on recognition performance. They also show
that different spatial frequencies convey different information about the
objects in the recognition process. This means a multi-resolution recognition system
can provide better insight into the optimal selection of features that results in
better recognition of objects. This is similar to the mechanisms of the human
visual systems that are able to implement multi-scale representation of a
visual scene simultaneously. Then, we propose a multi-resolution object
recognition framework rather than a single-resolution network. The proposed
framework is evaluated on the PASCAL VOC2007 database. The experimental results
show the performance of our adapted multi-resolution Faster-RCNN framework
outperforms the single-resolution Faster-RCNN on input images with various
resolutions with an increase in the mean Average Precision (mAP) of 9.14%
across all resolutions and 1.2% on the full-spectrum images. Furthermore, the
proposed model yields robust performance over a wide range of
spatial frequencies. | Amir Ghasemi, Nasrin Bayat, Fatemeh Mottaghian, Akram Bayat | 2023-01-23T19:09:36Z | http://arxiv.org/abs/2301.09667v2 | # Improving Performance of Object Detection using the Mechanisms of Visual Recognition in Humans
###### Abstract
Object recognition systems are usually trained and evaluated on high resolution images. However, in real world applications, it is common that the images have low resolution or small sizes. In this study, we first track the performance of the state-of-the-art deep object recognition network, Faster-RCNN, as a function of image resolution. The results reveal negative effects of low resolution images on recognition performance. They also show that different spatial frequencies convey different information about the objects in the recognition process. This means that a multi-resolution recognition system can provide better insight into the optimal selection of features, resulting in better recognition of objects. This is similar to the mechanisms of the human visual system, which is able to implement multi-scale representations of a visual scene simultaneously. We then propose a multi-resolution object recognition framework rather than a single-resolution network. The proposed framework is evaluated on the PASCAL VOC2007 database. The experimental results show that our adapted multi-resolution Faster-RCNN framework outperforms the single-resolution Faster-RCNN on input images with various resolutions, with an increase in the mean Average Precision (mAP) of 9.14% across all resolutions and 1.2% on the full-spectrum images. Furthermore, the proposed model yields robust performance over a wide range of spatial frequencies.
Computer Vision, Deep Neural Network, Object Recognition, Multi-Resolution, Faster-RCNN, Human Visual System
## 1 Introduction
Recent advances in deep neural networks (DNNs) [1, 2, 3, 4] and access to very large datasets with millions of annotated samples, especially for computer vision applications, have led to state-of-the-art results in many problem domains such as object detection and scene classification [5, 6, 7]. For example, the Faster-RCNN network [8] achieved impressive results in the recognition and localization of objects in natural scenes, and GoogleNet [9] approached human performance in the classification of the ImageNet database [10].
Despite these significant achievements, a major drawback of current deep neural networks for visual recognition is that they are trained and evaluated on high quality images, so their performance drops significantly when classifying low resolution images [11, 12].
In other words, previous works focus only on full-spectrum image resolutions when training their networks, but the variety of real-world applications, from moving objects to small-size images, often imposes different constraints. Given real-world resource constraints such as low resolution and small image sizes, recognition efficiency and performance become increasingly important for object detection. However, better efficiency usually comes at the cost of accuracy. This paper aims to tackle this problem by systematically studying the performance of object recognition networks (e.g., Faster-RCNN) under various resolutions of input images. We then propose a method, based on the mechanisms of the human visual system, that produces better and more robust performance in deep object detection networks, with both higher accuracy and better efficiency across a wide range of image resolutions. To evaluate the performance of our proposed method for object recognition, we choose the Faster-RCNN network [8] as one of the best existing models in terms of accuracy on the PASCAL VOC [13] image database.
## 2 Related Work
**Object Detection in Human Visual System -** Visual perception in humans is the process by which humans acquire knowledge about their environment. This process is initiated when surrounding light enters the eye and induces electrical signals that are subsequently processed within the brain, where an image is formed.
A large volume of studies has shown that the human visual system has the ability to adapt to changes in the environment in different ways, in which each adjustment may require different mechanisms [14, 15, 16, 17]. For example, adaptation to color encompasses different adjustments, including sensitivity changes in the cones.
The human visual system also exhibits spatial adaptation. For example, tilt aftereffects can be induced with both real and subjective contours, with asymmetries between them, which suggests adaptation at different cortical sites [18].
Visual objects in the real world are observed in contextual scenes that are usually relevant from a physical and semantic perspective. With regard to blurriness, Bar _et al._[19] deployed a blurred, low-frequency representation of a scene and showed that the human visual system is able to recognize ambiguous objects.
**Object Detection in Computer Vision -** Object detection deals with discovering instances of semantic objects of a certain class (e.g., buildings, cars, or humans) in images and videos. Most traditional approaches to object detection used well-established computer vision methods that rely on extracting feature descriptors (e.g., SIFT, SURF, BRIEF) [20, 21, 22]. However, with the emergence of deep neural networks (in particular, convolutional neural networks [9]) and their remarkable success in computer vision, the majority of recent works on object detection in digital images and videos have shifted towards using them as the primary technique [23]. The state-of-the-art techniques using DNNs can be categorized into two main types: one-stage methods, which prioritize inference speed (e.g., YOLO [24], SSD [25], and RetinaNet [26]), and two-stage methods, which prioritize detection accuracy (e.g., Faster R-CNN [8], Mask R-CNN [27], and Cascade R-CNN [28]).
In this study, we focus on improving detection accuracy and efficiency for objects in low resolution images and videos. As noted above, existing neural networks are generally trained and tested on high quality images, so when they are fed low quality images, their performance in detecting and recognizing objects drops remarkably [29]. However, in real life, there are many cases where images have low resolution. To address this issue, motivated by the capability of the human visual system to adapt to a wide range of resolutions, we propose a multi-resolution method that improves the performance of the model when it is fed blurry images.
## III Tracking Object Recognition Network Performance
In this section, we explain our methodology for tracking the performance of the state-of-the-art deep object recognition network, Faster-RCNN, under various resolution levels, along with our notation and problem formulation.
### _Model Architecture_
We deploy the Faster-RCNN framework for object detection and localization. Faster-RCNN uses a Region Proposal Network (RPN) and an object detection network that share convolutional layers for fast testing. The backbone network is VGG-16, and its Conv 5-3 features are used for region proposals. We adapted Faster-RCNN based on the publicly available code in [30] and implemented it with a few modifications in TensorFlow to evaluate the performance of the network on the 4952 images of the PASCAL VOC2007 test dataset at full resolution (full spectrum in the frequency domain).
### _Notation and Problem Formulation_
As stated before, we evaluate the performance of the Faster-RCNN network on the PASCAL VOC2007 image database under multiple resolutions. For this purpose, we create a set of image databases whose resolutions vary systematically from extremely coarse to very fine. This is achieved by applying a two-dimensional Gaussian low-pass filter with various cut-off frequencies to the PASCAL VOC2007 image database in the frequency domain. The result of this process resembles blurring an image to reduce its high-frequency details at multiple levels. To simplify, instead of applying a two-dimensional (2D) Gaussian function to each pixel of an image, which is equivalent to convolving a 2D Gaussian with the image, we take the product of their individual Fourier transforms. We then transform the resulting product back into the spatial domain to obtain the image at the desired resolution (\(I_{f_{c}}\)).
Attenuating frequencies using low pass Gaussian filters [31] results in a smoother image in the spatial domain. This process is formalized in the following equations:
\[\hat{F}_{f_{c}}(u,v)=F(u,v)H_{f_{c}}(u,v),f_{min}<f_{c}<f_{max} \tag{1}\]
\[H_{f_{c}}(u,v)=e^{-\frac{u^{2}+v^{2}}{2f_{c}^{2}}} \tag{2}\]
\[I_{f_{c}}=\mathcal{F}^{-1}\left[\hat{F}_{f_{c}}(u,v)\right] \tag{3}\]
where \(F(u,v)\) is the Fourier transform of the full-spectrum image \(I\) and \(u,v\) are representative of a particular spatial frequency contained in the spatial domain of image \(I\). For instance, \(F(0,0)\) represents the DC-component of the image which corresponds to the average brightness. \(H_{f_{c}}\) is the 2D Gaussian spatial filter with cut-off frequency of \(f_{c}\) ranging from \(f_{min}\) to \(f_{max}\) which is defined systematically between the center and the edge of the Fourier image, \(F\), as follows:
\[f_{c}=c\cdot\frac{S}{20},\quad c=1,\ldots,20 \tag{4}\]
In this way, for a given image \(I_{w\times h}\):
\[f_{min}=\frac{S}{20},f_{max}=S,S=max(w,h) \tag{5}\]
By doing so, we systematically create 20 image databases from the PASCAL VOC2007 database at twenty resolution scales, namely PASCAL VOC2007-\(R_{1}\),..., PASCAL VOC2007-\(R_{20}\) (a sketch of this filtering pipeline is given below).
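A compact implementation of Eqs. (1)-(5) — a sketch assuming single- or three-channel images and unshifted FFT frequency indexing — is:

```python
import numpy as np

def gaussian_lowpass(image, c):
    """Produce resolution level R_c: multiply the spectrum by a
    centered Gaussian with cut-off f_c = c * S / 20 (Eqs. (2)-(4))."""
    h, w = image.shape[:2]
    S = max(w, h)
    f_c = c * S / 20.0
    fx, fy = np.fft.fftfreq(w) * w, np.fft.fftfreq(h) * h
    v, u = np.meshgrid(fx, fy)                        # (h, w) frequency grids
    H = np.exp(-(u**2 + v**2) / (2 * f_c**2))         # Eq. (2)
    if image.ndim == 3:
        H = H[..., None]                              # broadcast over channels
    F = np.fft.fft2(image, axes=(0, 1))               # spectrum of the image
    return np.real(np.fft.ifft2(F * H, axes=(0, 1)))  # Eq. (3)

# the twenty databases R_1, ..., R_20:
# blurred = [gaussian_lowpass(img, c) for c in range(1, 21)]
```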
## IV Performance vs. Resolution
State-of-the-art object detectors are trained and tested on high resolution images, but an important question is how to obtain detectors that give the best balance of resolution and accuracy for a given application. In this section, we evaluate the accuracy vs. resolution tradeoff.
**Dataset.** The PASCAL Visual Object Classes 2007 (VOC2007) dataset serves as our image database; it contains 9963 images categorized into 20 object classes, as explained in the previous section. The data is split into 5011 images for training and validation and 4952 test images. The distribution of images and objects by class is approximately equal across the training, validation, and test datasets [32].
Figure 2 illustrates examples of an image from PASCAL VOC2007 database in the multiple levels of spatial frequencies ranging from low to high frequencies.
### Performance Evaluation of the Faster-RCNN on Different Resolutions
To track the performance in object detection, our evaluation metric is the mean Average Precision (mAP), a very common and popular performance measure for object detection tasks. The Average Precision is the area under the precision-recall curve for each object category, and the mean Average Precision is computed by taking the mean of the Average Precision over all object categories [13, 33]. Figure 3 shows the detection results of Faster-RCNN when tested on the PASCAL VOC2007 test data at various levels of resolution (\(R_{1},...,R_{20}\)), as explained in the previous section. The results reveal that the performance of the Full-spectrum Model drops off quickly for low resolution images compared with high resolution and full-spectrum images. This indicates that the representations learned for object recognition in a deep neural network depend strongly on information from all spatial frequencies simultaneously; hence, the lack of information at a specific scale negatively influences the recognition performance. In contrast, the human visual system can detect objects at most resolution levels, since the human brain provides representations of objects and scenes at multiple scales, so that it can still interpret scenes and objects even from a single-level representation.
We deploy a Multi-Resolution Faster-RCNN model that is made up of five end-to-end trained models for different resolution levels. A combination rule is applied during the test scheme such that a given image is passed through all five models and the best object recognition results are derived based on this rule, which is discussed below.
The combination rule is adopted as an external module independent of the training scheme. To detect objects in a given image input to the Multi-Resolution Faster-RCNN model, all detections are collected from each of the five individual models. Each detected object is provided in the form of a bounding box and a score indicating the predicted probability of that bounding box belonging to an object class. The number of detections from each model may vary between 0 and 300, depending on the input image; thus, up to 1500 proposed objects (presented as bounding box coordinates, predicted class score, and object class) can result from the combination of all five models. However, many of these bounding boxes highly overlap, so non-maximum suppression is applied to the collection of all detected bounding boxes to reduce redundancy. The Intersection-over-Union (IoU) threshold for non-maximum suppression is set to 0.7 to remove redundant bounding boxes [8]. All remaining objects are proposed as the detection results of the Multi-Resolution Faster-RCNN model (a sketch of this combination rule is given below). Figure 4 illustrates the detection scheme.
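A minimal sketch of the combination rule using `torchvision`'s NMS follows; whether suppression is applied per object class is our own assumption, not stated in the text.

```python
import torch
from torchvision.ops import nms

def combine_detections(boxes_list, scores_list, labels_list, iou_thr=0.7):
    """Merge detections from the five single-resolution models and
    suppress redundant boxes (sketch of the combination rule)."""
    boxes = torch.cat(boxes_list)      # (N, 4) boxes pooled from all models
    scores = torch.cat(scores_list)    # (N,) predicted class scores
    labels = torch.cat(labels_list)    # (N,) predicted class indices
    keep = []
    for cls in labels.unique():        # class-wise non-maximum suppression
        idx = (labels == cls).nonzero(as_tuple=True)[0]
        keep.append(idx[nms(boxes[idx], scores[idx], iou_thr)])
    keep = torch.cat(keep)
    return boxes[keep], scores[keep], labels[keep]
```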
To evaluate the Multi-Resolution Faster-RCNN model, the process is implemented using the five models trained on the PASCAL VOC2007 training/validation data (5K) at five different resolutions. Then, 20 test databases at 20 levels of resolution are generated from the PASCAL VOC2007 test data (5K). For each test database (corresponding to a certain resolution), the object detection results from all five models are obtained. Non-maximum suppression is applied based on the IoU threshold of 0.7 as well as the highest overlap with the ground truth bounding boxes. The detection results for the five models and the Multi-Resolution Faster-RCNN on the 20 test databases (PASCAL VOC2007-\(R_{1},\ldots,R_{20}\)) are shown in Figure 1. The combined model, Multi-Resolution Faster-RCNN, outperforms all individual models in detecting objects across all ranges of resolution. The results indicate the robustness, efficiency, and higher performance of the Multi-Resolution Faster-RCNN, regardless of the resolution of the input image, in comparison to the Faster-RCNN for object recognition.
Figure 1: The performance of the Multi-Resolution Faster-RCNN obtained from the combination of five models trained on five resolution levels (\(R_{5}\), \(R_{10}\), \(R_{18}\), \(R_{20}\), and Full-spectrum). All models are evaluated on the multi-resolution test databases (\(R_{1},...,R_{20}\))
## V Conclusion and Future Work
Inspired by the capability of the human visual system to adapt to different image resolutions, we developed a multi-resolution deep object recognition framework that addresses the significant drop in detection accuracy that deep neural networks suffer when trained and evaluated at different resolutions. This drop indicates that the representations learned for object recognition in a deep neural network depend strongly on information from all spatial frequencies simultaneously; hence, the lack of information from a certain scale negatively influences the recognition performance. To address this, we proposed a Multi-Resolution Faster-RCNN model made up of five end-to-end trained models for different resolution levels. A combination rule is applied during testing such that a given image is passed through all five models and the best object recognition results are derived from the combination rule. Our experiments show that the Multi-Resolution Faster-RCNN outperforms the original Faster-RCNN object detector across all ranges of resolution, indicating the robustness, efficiency, and higher performance of the Multi-Resolution Faster-RCNN regardless of the resolution of the input image.
\begin{table}
\begin{tabular}{c||c|c|c|c|c} \hline \hline Database-Resolution & \begin{tabular}{c} Full-spectrum \\ Model mAP(\%) \\ \end{tabular} & \begin{tabular}{c} \(R_{20}\)-Model \\ mAP(\%) \\ \end{tabular} & \begin{tabular}{c} \(R_{18}\)-Model \\ mAP(\%) \\ \end{tabular} & \begin{tabular}{c} \(R_{10}\)-Model \\ mAP(\%) \\ \end{tabular} & \begin{tabular}{c} \(R_{5}\)-Model \\ mAP(\%) \\ \end{tabular} \\ \hline Full-spectrum & 68.1\% & **68.7\%** & 68.4\% & 67.6\% & 61.6\% \\ \hline \(R_{20}\) & 63.9\% & 67.2\% & **67.3\%** & 66.6\% & 61.7\% \\ \hline \(R_{18}\) & 63.7\% & **67.5\%** & 67.3\% & 66.7\% & 62.1\% \\ \hline \(R_{10}\) & 60.1\% & 64.9\% & 65.3\% & **65.9\%** & 63.5\% \\ \hline \(R_{5}\) & 45.5\% & 52.3\% & 51.2\% & 57.0\% & **61.3\%** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Performance comparison (mAP) of Faster-RCNN models trained on different resolutions, evaluated on test sets of different resolutions created from the PASCAL VOC2007 database
Figure 3: Illustration of the performance of the Faster-RCNN network for the recognition of objects on the PASCAL VOC 2007 test dataset (4952 images) at multiple resolutions. |
2303.07033 | SelfPromer: Self-Prompt Dehazing Transformers with Depth-Consistency | This work presents an effective depth-consistency self-prompt Transformer for
image dehazing. It is motivated by an observation that the estimated depths of
an image with haze residuals and its clear counterpart vary. Enforcing the
depth consistency of dehazed images with clear ones, therefore, is essential
for dehazing. For this purpose, we develop a prompt based on the features of
depth differences between the hazy input images and corresponding clear
counterparts that can guide dehazing models for better restoration.
Specifically, we first apply deep features extracted from the input images to
the depth difference features for generating the prompt that contains the haze
residual information in the input. Then we propose a prompt embedding module
that is designed to perceive the haze residuals, by linearly adding the prompt
to the deep features. Further, we develop an effective prompt attention module
to pay more attention to haze residuals for better removal. By incorporating
the prompt, prompt embedding, and prompt attention into an encoder-decoder
network based on VQGAN, we can achieve better perception quality. As the depths
of clear images are not available at inference, and the dehazed images with
one-time feed-forward execution may still contain a portion of haze residuals,
we propose a new continuous self-prompt inference that can iteratively correct
the dehazing model towards better haze-free image generation. Extensive
experiments show that our method performs favorably against the
state-of-the-art approaches on both synthetic and real-world datasets in terms
of perception metrics including NIQE, PI, and PIQE. | Cong Wang, Jinshan Pan, Wanyu Lin, Jiangxin Dong, Xiao-Ming Wu | 2023-03-13T11:47:24Z | http://arxiv.org/abs/2303.07033v3 | # SelfPromer: Self-Prompt Dehazing Transformers with Depth-Consistency
###### Abstract
This work presents an effective depth-consistency self-prompt Transformer for image dehazing. It is motivated by an observation that the estimated depths of an image with haze residuals and its clear counterpart vary. Enforcing the depth consistency of dehazed images with clear ones, therefore, is essential for dehazing. For this purpose, we develop a prompt based on the features of depth differences between the hazy input images and corresponding clear counterparts that can guide dehazing models for better restoration. Specifically, we first apply deep features extracted from the input images to the depth difference features for generating the prompt that contains the haze residual information in the input. Then we propose a prompt embedding module that is designed to perceive the haze residuals, by linearly adding the prompt to the deep features. Further, we develop an effective prompt attention module to pay more attention to haze residuals for better removal. By incorporating the prompt, prompt embedding, and prompt attention into an encoder-decoder network based on VQGAN, we can achieve better perception quality. As the depths of clear images are not available at inference, and the dehazed images with one-time feed-forward execution may still contain a portion of haze residuals, we propose a new continuous self-prompt inference that can iteratively correct the dehazing model towards better haze-free image generation. Extensive experiments show that our method performs favorably against the state-of-the-art approaches on both synthetic and real-world datasets in terms of perception metrics including NIQE, PI, and PIQE.
## 1 Introduction
Recent years have witnessed advanced progress in image dehazing due to the development of deep dehazing models. Mathematically, the haze process is usually modeled by an atmospheric light scattering model [15] formulated as:
\[\text{I}(x)=\text{J}(x)\text{T}(x)+(1-\text{T}(x))\text{A}, \tag{1}\]
where I and J denote the hazy and haze-free images, respectively, A denotes the global atmospheric light, and \(x\) denotes the pixel index. The transmission map T is usually modeled as \(\text{T}(x)=e^{-\beta\text{d}(x)}\), where \(\text{d}(x)\) is the scene depth and the scattering coefficient \(\beta\) reflects the haze density.
Most existing works develop variations of deep Convolutional Neural Networks (CNNs) for image dehazing [7, 8, 14, 17, 24, 44]. They typically compute a sequence of features from the hazy input images and directly reconstruct the clear ones from these features, achieving state-of-the-art results on benchmarks [23] in terms of PSNR and SSIM. However, as dehazing is ill-posed, even small errors in the estimated features may degrade the performance. Some existing works use deep CNNs as image priors and restore the clear images iteratively; however, they cannot effectively correct the errors or remove the haze residuals in the dehazed images, as these models are fixed during the iterative process [28]. It is noteworthy that the human visual system generally possesses an intrinsic correction mechanism that helps ensure optimal results for a task. This phenomenon has been a key inspiration behind the development of a novel dehazing approach incorporating a correction mechanism that guides deep models toward better haze-free results.
Specifically, if a dehazed result contains haze residuals, a correction mechanism can localize these regions and guide the relevant task toward removing them. Notably, NLP-based text prompt learning has shown promise in guiding models by correcting their predictions [26]. However, text-based prompts may not be appropriate for tasks that rely solely on visual inputs without accompanying text. Recent works [12, 16] attempt to address this issue by introducing text-free prompts into vision tasks. For instance, PromptonomyViT [16] evaluates the adaptation of multi-task prompts such as depth, normal, and segmentation to improve the performance of video Transformers. Nevertheless, these prompts may not be suitable for image dehazing, as they do not capture haze-related content.
To better guide the deep model for image dehazing, this work develops an effective self-prompt dehazing Transformer. Specifically, it exploits the depth consistency of hazy images and their corresponding clear ones as a prompt. In particular, our study is motivated by the
substantial difference between the estimated depths of hazy images and their clear counterparts; _i.e_., the depth of the same scene captured at the same location should be consistent. Depth is typically related to the transmission map in the atmospheric light scattering model, as shown in (1). Thus, if the dehazed images are reconstructed accurately, their estimated depths should closely match those of their clear counterparts. However, haze residuals often degrade the accuracy of depth estimation, resulting in significant differences between hazy and clear images, as illustrated in Fig. 1(e). Crucially, the difference map of the depths estimated from images with haze residuals and from clear images often points to the regions affected by haze residuals.
Based on the above observation, we design a prompt to guide deep models in perceiving and paying more attention to haze residuals. Our prompt is built upon the estimated feature-level depth differences, whose inconsistent regions can reveal haze residual locations for correcting deep models. On top of the prompt, we introduce a prompt embedding module that linearly combines input features with the prompt to better perceive haze residuals. Further, we propose a prompt attention module that employs self-attention guided by the prompt to pay more attention to haze residuals for better haze removal. Our encoder-decoder architecture combines these modules with VQGAN [10] to enhance the perception quality of the results, as opposed to relying solely on PSNR and SSIM metrics for evaluation.
As the depths of clear images are unavailable at inference, and dehazed images obtained via a one-time feed-forward execution may still contain haze residuals, we introduce a continuous self-prompt inference to address these challenges. Specifically, our proposed approach feeds the hazy input image to the model with the depth difference set to zero to generate a clearer image that serves as the clear counterpart. This clearer image participates in constructing the prompt to conduct prompt dehazing. The inference is conducted continuously, as the depth differences keep correcting the deep dehazing model toward better haze-free image generation.
This paper makes the following contributions:
* We make the first attempt to formulate the prompt by considering the cues of the estimated depth differences between the image with haze residuals and its clear counterpart in the image dehazing task.
* We propose a prompt embedding module and a prompt attention module to respectively perceive and pay more attention to haze residuals for better removal.
* We propose a new continuous self-prompt inference approach to iteratively correct the deep models toward better haze-free image generation.
* Experiments demonstrate that our method performs favorably against state-of-the-art approaches on both synthetic and real-world datasets in terms of perception metrics including NIQE, PI, and PIQE.
## 2 Related Work
In this section, we overview image dehazing, VQGAN image restoration, and prompt vision applications.
**Image Dehazing.** Traditional solutions usually design various hand-crafted priors that capture deterministic and statistical properties of hazy and haze-free images to remove haze, such as the dark channel [15], color-line [11], and haze-line [1] priors. Recently, CNN-based dehazing approaches have gradually been developed [2, 37]; _e.g_., MSCNN [37] uses a CNN to estimate the transmission map. One limitation of these algorithms is their lack of flexibility, as they are not end-to-end. To address this issue, end-to-end dehazing networks [7, 22, 27, 35] have been proposed. Considering the haze physics model (1), physics-based CNNs [33, 34, 49, 50] have been suggested. Motivated by the powerful generation ability of CycleGAN [55], cycle-based methods [32, 45] have been adopted. Despite these efforts, these methods usually tend to produce unsatisfactory results, as they cannot effectively perceive haze residuals.
**VQGAN for Image Restoration.** Recent research [3, 13, 43, 54] has shown that VQGAN [10] is an effective tool to generate more realistic results. VQGAN-based restoration methods estimate latent clear images but often neglect deep model prior cues, which can limit their performance. Zhou _et al_. propose CodeFormer [54], which inserts regular Transformers into VQGAN for face restoration. Different from this work, our approach incorporates the estimated
Figure 1: Haze residuals pose a significant challenge to accurately estimating the depth of clear images, creating inconsistencies compared to hazy images. A difference map (e) is utilized to locate haze residuals on the estimated depth, while minimal haze residuals will result in consistent estimates. By analyzing the difference map, we can identify the impact of haze residuals, leading to the development of improved dehazing models to mitigate this effect and enhance the quality of dehazed images. The difference map (e) is derived by [hazy depth – clear depth] with equalization for better visualization.
depth inconsistency between the image with haze residuals and its clear version by using prompt embedding and prompt attention to iteratively correct deep models with a self-prompt inference scheme for image dehazing.
**Prompt Learning for Vision.** Prompt learning was first studied in natural language processing [38, 39]. Due to its high effectiveness, prompt learning has recently been used in vision-related tasks [9, 46, 48, 53], domain generalization [52], multi-modal learning [20], action understanding [25], and visual prompt tuning [19]. To our knowledge, there is no prior effort to exploit prompts for dehazing. This paper aims to investigate this new path.
## 3 Proposed Method
Our method comprises two branches: the prompt branch and the self-prompt dehazing Transformer branch. The prompt branch generates a prompt by using the deep depth difference and deep feature extracted from the hazy input. The other branch exploits the generated prompt to guide the deep model for image dehazing. We incorporate a prompt embedding module and prompt attention module to respectively perceive and pay more attention to the haze residuals for better removal. The proposed modules are formulated into an encoder-decoder architecture based on VQGAN for better perception quality [3, 13, 54].
### Overall Framework
Fig. 2 illustrates our method at the training stage. Given a hazy image I, we first utilize a trainable encoder \(\textit{Enc}(\cdot)\) to extract features:
\[\mathbf{F}_{\text{Enc}}=\textit{Enc}(\text{I}). \tag{2}\]
Then, we compute the depth difference of the hazy image I and its corresponding clear image J in feature space:
\[\text{D}_{1}=\textit{DE}(\text{I});\ \text{D}_{2}=\textit{DE}(\text{J}),\] (3a) \[\mathbf{F}_{\text{D}_{1}}=\textit{Enc}_{\text{pre}}^{\text{frozen}}(\text{D}_{1});\ \mathbf{F}_{\text{D}_{2}}=\textit{Enc}_{\text{pre}}^{\text{frozen}}(\text{D}_{2}),\] (3b) \[\mathbf{F}_{\text{D}_{\text{diff}}}=|\mathbf{F}_{\text{D}_{1}}-\mathbf{F}_{\text{D}_{2}}|,\] (3c)
where \(\textit{SSIM}(\cdot)\) denotes the structural similarity [41] used for better structure generation. \(z_{\mathbf{q}}\) denotes the haze-free codebook features obtained by feeding haze-free images J into the pre-trained VQGAN, while \(\bar{z}_{\mathbf{q}}\) denotes the reconstructed codebook features. \(\Phi(\cdot)\) denotes the feature extractor of VGG19 [40], \(\mathcal{D}\) is the discriminator [55], and \(\lambda_{\text{code}},\lambda_{\text{per}},\lambda_{\text{adv}}\), and \(\lambda_{\text{ssim}}\) are loss weights.
For **inference**, we propose a new self-prompt inference approach (see details in Sec. 3.4) as our training stage involves the depth of clear images to participate in forming the prompt while clear images are not available at testing.
### Self-Prompt Transformers
The proposed self-prompt Transformer contains the prompt generated by the prompt branch, a prompt embedding module, and a prompt attention module, the last of which is contained in the prompt Transformer block. In the following, we introduce the definition of the prompt, the prompt embedding and prompt attention modules, and the prompt Transformer block in detail.
**Prompt** (Definition). The prompt is based on the estimated depth difference between the input image and its clear counterpart. It is defined in (4a) and can effectively capture haze-residual features, as regions of \(\mathbf{F}_{\text{D}_{\text{diff}}}\) with higher response values reveal inconsistent parts that potentially correspond to the haze residuals in the input hazy image.
**Prompt Embedding.** Existing Transformers [52] usually use the position embedding method (Fig. 4(a)) to represent the positional correlation, which does not contain haze-related information and thus cannot effectively perceive the haze residual information. Moreover, image restoration requires processing different input sizes at inference, while the position embedding is defined with fixed parameters at training [52]. Hence, position embedding may not be a good choice for image dehazing. To overcome these problems, we propose prompt embedding, which is defined in (4b). By linearly adding the extracted features \(\mathbf{F}_{\text{Enc}}\) to \(\mathrm{Prompt}\), the embedded feature \(\mathbf{F}_{\text{ProEmbed}}\) perceives the haze residual features, as \(\mathrm{Prompt}\) captures them. Note that as \(\mathbf{F}_{\text{ProEmbed}}\) has the same size as \(\mathbf{F}_{\text{Enc}}\), it does not require fixed sizes like position embedding.
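As a rough PyTorch sketch of how the depth-difference features of (3), the prompt, and the prompt embedding fit together, assuming placeholder modules for the trainable encoder _Enc_, the frozen pre-trained encoder, and the depth estimator _DE_, and reading the product in the prompt definition as element-wise:

```python
import torch

def build_prompt_and_embedding(hazy, clear, enc, enc_pre_frozen, depth_net):
    """Prompt = F_Ddiff * F_Enc (element-wise), then F_ProEmbed = F_Enc + Prompt."""
    with torch.no_grad():                # DE and Enc_pre are frozen
        f_d1 = enc_pre_frozen(depth_net(hazy))
        f_d2 = enc_pre_frozen(depth_net(clear))
    f_ddiff = (f_d1 - f_d2).abs()        # feature-level depth difference, (3c)
    f_enc = enc(hazy)                    # trainable encoder features
    prompt = f_ddiff * f_enc             # haze-residual-aware prompt
    f_embed = f_enc + prompt             # prompt embedding: linear addition
    return prompt, f_embed
```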
**Prompt Attention.** Existing Transformers usually extract Query \(\mathbf{Q}\), Key \(\mathbf{K}\), and Value \(\mathbf{V}\) from input features to estimate the scaled-dot-product attention shown in Fig. 4(c). Although Transformers are effective for feature representation, this standard operation may not be suitable for image dehazing. To ensure the Transformers pay more attention to haze residuals for better removal, we propose prompt attention \(\textit{ProAtt}(\cdot)\) by linearly adding the Query with \(\mathrm{Prompt}\):
\[\mathbf{Q}=\mathbf{Q}+\mathrm{Prompt}, \tag{9a}\] \[\textit{ProAtt}(\mathbf{Q},\mathbf{K},\mathbf{V})=\textit{Softmax} \Big{(}\frac{\mathbf{Q}\mathbf{K}^{\text{T}}}{\sqrt{d_{\text{head}}}}\Big{)} \mathbf{V}, \tag{9b}\]
where \(d_{\text{head}}\) denotes the dimension of each head. Fig. 4(d) illustrates the proposed prompt attention. Note that as \(\mathbf{Q}\) in attention determines the similarity relation for the expected inputs [6], our prompt attention, which linearly adds the prompt \(\mathrm{Prompt}\) to the Query \(\mathbf{Q}\), can pay more attention to haze residuals for better removal.
**Prompt Transformer Block.** According to the above attention design, our prompt Transformer block (PTB) can be sequentially computed as:
\[\mathbf{Q},\mathbf{K},\mathbf{V}=\textit{LN}(\mathbf{X}^{l-1}), \tag{10a}\] \[\mathbf{\hat{X}}^{l}=\textit{ProAtt}(\mathbf{Q},\mathbf{K},\mathbf{V })+\mathbf{X}^{l-1},\] (10b) \[\mathbf{X}^{l}=\textit{MLP}\Big{(}\textit{LN}(\mathbf{\hat{X}}^{l}) \Big{)}+\mathbf{\hat{X}}^{l}, \tag{10c}\]
where \(\mathbf{X}^{l-1}\) and \(\mathbf{X}^{l}\) denote the input and output of the \(l^{\text{th}}\) prompt Transformer block. Specifically, \(\mathbf{X}^{0}\) is \(\mathbf{F}_{\text{ProEmbed}}\). _LN_ and _MLP_ denote layer normalization and a multilayer perceptron. The PTB is shown in the right part of Fig. 2.
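A minimal sketch of one PTB combining the prompt attention of (9) with the block structure of (10) is given below; the head count, MLP expansion ratio, token layout (B, N, dim), and the output projection are our assumptions rather than choices stated in the paper.

```python
import torch
import torch.nn as nn

class PromptTransformerBlock(nn.Module):
    def __init__(self, dim, heads=4):
        super().__init__()
        self.norm1, self.norm2 = nn.LayerNorm(dim), nn.LayerNorm(dim)
        self.qkv = nn.Linear(dim, 3 * dim)
        self.proj = nn.Linear(dim, dim)
        self.heads, self.dh = heads, dim // heads
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(),
                                 nn.Linear(4 * dim, dim))

    def forward(self, x, prompt):  # x, prompt: (B, N, dim) token features
        q, k, v = self.qkv(self.norm1(x)).chunk(3, dim=-1)
        q = q + prompt             # (9a): bias the Query with the prompt
        split = lambda t: t.view(t.shape[0], -1, self.heads, self.dh).transpose(1, 2)
        q, k, v = split(q), split(k), split(v)
        attn = (q @ k.transpose(-2, -1)) / self.dh ** 0.5
        out = (attn.softmax(dim=-1) @ v).transpose(1, 2).reshape(x.shape)
        x = x + self.proj(out)     # (10b): residual around prompt attention
        return x + self.mlp(self.norm2(x))  # (10c): residual around the MLP
```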
Figure 4: **(a)-(b) Existing position embedding _vs_. Prompt embedding (Ours)**. Our prompt embedding can better perceive the haze information and is friendly for different input sizes. **(c)-(d) Existing regular attention _vs_. Prompt attention (Ours)**. Our prompt attention can pay more attention to the haze residuals.
Figure 3: **Continuous Self-Prompt Inference. The \(i^{\text{th}}\) prompt inference contains four steps, executed sequentially from top to bottom. Step 1 obtains clearer images to participate in forming the prompt by feeding the hazy image itself to our network without the prompt, by setting \(\mathbf{F}_{\text{D}_{\text{diff}}}\) to zero. Step 2 generates the prompt to guide the dehazing model. Step 3 conducts the self-prompt dehazing to produce the results. Step 4 updates for the next dehazing iteration. The magenta line describes the 'self' process that builds the prompt from the hazy image itself. Here, Dehazing Transformer means our Self-Prompt Dehazing Transformer with \(\mathbf{F}_{\text{D}_{\text{diff}}}=0\).**
It is worth noting that our prompt embedding and prompt attention are flexible: we can manually set \(\mathbf{F}_{\text{D}_{\text{diff}}}=\mathbf{0}\), and the network then automatically degrades to the model without the prompt, which will be exploited to form our continuous self-prompt inference (see Sec. 3.4).
### Mutual Deformable Fusion Module
As VQGAN is less effective for preserving details [3, 13], motivated by the deformable models [5, 56] that can better fuse features, we propose a mutual deformable fusion module (MDFM) by fusing features mutually to adaptively learn more suitable offsets for better feature representation:
\[\text{off}_{1}=\textit{Conv}\Big{(}\mathcal{C}[\mathbf{F}_{\text{Enc}},\mathbf{F}_{\text{PTB}}]\Big{)};\ \text{off}_{2}=\textit{Conv}\Big{(}\mathcal{C}[\mathbf{F}_{\text{PTB}},\mathbf{F}_{\text{Enc}}]\Big{)}, \tag{11a}\] \[\mathbf{Y}_{1}=\textit{DMC}(\mathbf{F}_{\text{Enc}},\text{off}_{1});\ \mathbf{Y}_{2}=\textit{DMC}(\mathbf{F}_{\text{PTB}},\text{off}_{2}),\] (11b) \[\mathbf{F}_{\text{MDFM}}=\textit{Conv}\Big{(}\mathcal{C}[\mathbf{Y}_{1},\mathbf{Y}_{2}]\Big{)}, \tag{11c}\]
where \(\textit{Conv}(\cdot)\), \(\mathcal{C}[\cdot]\), and \(\textit{DMC}(\cdot)\) respectively denote the \(1\times 1\) convolution, concatenation, and deformable convolution. off\({}_{k}\) \((k=1,2)\) denotes the estimated offsets.
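A possible realization of the MDFM using torchvision's deformable convolution is sketched below; the channel sizes and the offset parameterization (2k² offset channels for a k×k kernel with a single offset group) follow the torchvision convention and are assumptions beyond what (11) specifies.

```python
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d

class MDFM(nn.Module):
    def __init__(self, c, k=3):
        super().__init__()
        self.off1 = nn.Conv2d(2 * c, 2 * k * k, 1)  # (11a): offsets from C[F_Enc, F_PTB]
        self.off2 = nn.Conv2d(2 * c, 2 * k * k, 1)  # (11a): offsets from C[F_PTB, F_Enc]
        self.dmc1 = DeformConv2d(c, c, k, padding=k // 2)
        self.dmc2 = DeformConv2d(c, c, k, padding=k // 2)
        self.fuse = nn.Conv2d(2 * c, c, 1)          # (11c): 1x1 fusion convolution

    def forward(self, f_enc, f_ptb):                # (B, c, H, W) feature maps
        o1 = self.off1(torch.cat([f_enc, f_ptb], dim=1))
        o2 = self.off2(torch.cat([f_ptb, f_enc], dim=1))
        y1 = self.dmc1(f_enc, o1)                   # (11b): deformable convolutions
        y2 = self.dmc2(f_ptb, o2)
        return self.fuse(torch.cat([y1, y2], dim=1))
```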
### Continuous Self-Prompt Inference
Our model requires the depth of clear images during training, but these images are unavailable at inference. Additionally, dehazed images generated by a one-time feed-forward execution may still contain some haze residuals. To address these issues, we propose a continuous self-prompt inference approach that leverages prompt embedding and prompt attention through linear addition, as discussed in Sec. 3.2. By setting the feature-level depth difference \(\mathbf{F}_{\text{D}_{\text{diff}}}\) to zero, we can feed hazy images to our trained network and obtain clearer dehazed results, which participate in building the prompt to conduct prompt dehazing. The inference is conducted iteratively so that the depth differences keep correcting the deep model toward better haze-free image generation:
\[\overline{\text{J}}_{i}^{\text{w/o prompt}}=\mathcal{N}^{\text{w/o prompt}}(\overline{\text{J}}_{i-1}^{\text{w/o prompt}}),\ \text{set}\ \mathbf{F}_{\text{D}_{\text{diff}}}=\mathbf{0},\ \text{\# Step 1} \tag{12a}\] \[\mathrm{Prompt}=\mathbf{F}_{\text{D}_{\text{diff}}}\cdot\mathbf{F}_{\text{Enc}};\ \mathbf{F}_{\text{Enc}}=\textit{Enc}(\overline{\text{J}}_{i-1}^{\text{w/o prompt}}),\ \text{\# Step 2}\] (12b) \[\overline{\text{J}}_{i}^{\text{prompt}}=\mathcal{N}^{\text{prompt}}(\overline{\text{J}}_{i-1}^{\text{w/o prompt}},\mathrm{Prompt}),\ \text{\# Step 3}\] (12c) \[\overline{\text{J}}_{i}^{\text{w/o prompt}}=\overline{\text{J}}_{i}^{\text{prompt}},\ \ (i=1,2,\cdots),\ \text{\# Step 4} \tag{12d}\]
where \(\mathcal{N}^{\text{w/o prompt}}\) denotes our trained network without the prompt, obtained by setting \(\mathbf{F}_{\text{D}_{\text{diff}}}\) to zero, while \(\mathcal{N}^{\text{prompt}}\) means our trained network with the prompt. \(\mathbf{F}_{\text{D}_{\text{diff}}}=|\textit{Enc}_{\text{pre}}^{\text{frozen}}(\textit{DE}(\overline{\text{J}}_{i-1}^{\text{w/o prompt}}))-\textit{Enc}_{\text{pre}}^{\text{frozen}}(\textit{DE}(\overline{\text{J}}_{i}^{\text{w/o prompt}}))|\). \(\overline{\text{J}}_{0}^{\text{w/o prompt}}\) denotes the original hazy image, while \(\overline{\text{J}}_{i-1}^{\text{w/o prompt}}\) is regarded as the image with haze residuals and \(\overline{\text{J}}_{i}^{\text{w/o prompt}}\) in (12a) is regarded as its clear counterpart. \(\overline{\text{J}}_{i}^{\text{prompt}}\) denotes the \(i^{\text{th}}\) prompt dehazing result.
According to (12), the inference is a **continuous self-prompt** scheme, _i.e_., we obtain the clear image from the hazy image itself by feeding it to \(\mathcal{N}^{\text{w/o prompt}}\) to participate in producing the prompt, and the inference is conducted continuously. Fig. 3 illustrates the inference process.
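The four steps of (12) can be condensed into the following sketch, where `model` is assumed to expose an `encode` method for _Enc_ and to accept an optional `prompt` argument (`prompt=None` corresponds to setting \(\mathbf{F}_{\text{D}_{\text{diff}}}=\mathbf{0}\)); these interface names are ours.

```python
import torch

def self_prompt_inference(hazy, model, depth_net, enc_pre_frozen, steps=3):
    j_prev = hazy  # J_0^{w/o prompt}: the original hazy input
    with torch.no_grad():
        for _ in range(steps):
            j_cur = model(j_prev, prompt=None)       # Step 1: dehaze without prompt
            f = lambda img: enc_pre_frozen(depth_net(img))
            f_ddiff = (f(j_prev) - f(j_cur)).abs()   # feature-level depth difference
            prompt = f_ddiff * model.encode(j_prev)  # Step 2: build the prompt
            j_prev = model(j_prev, prompt=prompt)    # Steps 3-4: prompt dehazing + update
    return j_prev
```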
Fig. 5 shows our continuous self-prompt at \(2^{\text{nd}}\) and \(3^{\text{rd}}\) prompts outperforms the baseline which uses ground-truth (GT) to participate in forming the prompt like the process of the training stage. However, GT is not available in the real world. More detailed explanations are presented in Sec. 4.3.
## 4 Experimental Results
In this section, we evaluate the effectiveness of our method against state-of-the-art ones (SOTAs) on commonly used benchmarks and illustrate the effectiveness of the key components in the proposed method.
**Implementation Details.** We use 10 PTBs, _i.e_., \(l=10\), in our model. The details of the VQGAN are presented in the supplementary materials. We crop image patches of \(256\times 256\) pixels. The batch size is \(10\). We use ADAM [21] with default parameters as the optimizer. The initial learning rate is 0.0001 and is divided by \(2\) at \(160\)K, \(320\)K, and \(400\)K iterations. The model training terminates after \(500\)K iterations. The weight parameters \(\lambda_{\text{code}}\), \(\lambda_{\text{per}}\), \(\lambda_{\text{adv}}\), and \(\lambda_{\text{ssim}}\) are empirically set to 1, 1, 0.1, and 0.5. Our implementation is based on PyTorch using one Nvidia 3090 GPU.
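For reference, the optimizer and step-decay schedule described above can be set up as sketched below; `training_step` and `loader` are placeholders standing in for the full loss of Sec. 3.1 and the data pipeline.

```python
import torch

def train(net, loader, training_step, total_iters=500_000):
    opt = torch.optim.Adam(net.parameters(), lr=1e-4)  # default betas
    sched = torch.optim.lr_scheduler.MultiStepLR(
        opt, milestones=[160_000, 320_000, 400_000], gamma=0.5)
    for _ in range(total_iters):
        loss = training_step(net, next(loader))  # full loss of Sec. 3.1 (placeholder)
        opt.zero_grad()
        loss.backward()
        opt.step()
        sched.step()  # milestones are counted in iterations, not epochs
```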
**Synthetic Datasets.** Following the protocol of [45], we use the RESIDE ITS [23] as our training dataset and the SOTS-indoor [23] and SOTS-outdoor [23] as the testing datasets.
**Real-world Datasets.** In [23], Li _et al_. also collect large-scale real-world hazy images, called UnannotatedHazyImages. We use these images as a real-world hazy dataset.
**Evaluation Metrics.** As we mainly aim to recover images with better perception quality, we use the widely-used Natural Image Quality Evaluator (**NIQE**) [30], Perceptual Index (**PI**) [29], and Perception-based Image Quality Evaluator (**PIQE**) [31] to measure restoration quality. Since the
Figure 5: **Continuous self-prompt inference** _vs_**. GT guidance (**Baseline**) on the SOTS-indoor dataset. GT guidance means we use the GT image to participate in forming the prompt at inference like the process of the training stage, which serves as the baseline. Due to the one-time feed-forward execution that may still contain a portion of haze residuals, continuously conducting inference can ensure the results toward better haze-free image generation. Note GT guidance only conducts one-time inference to generate a result. What’s more, GT is not available in the real world. More detailed explanations are given in Sec. 4.3.
distortion metrics Peak-Signal-to-Noise-Ratio (PSNR) [18] and Structural SIMilarity (SSIM) [41] cannot model the perception quality well, we use them for reference only. Notice that all metrics are re-computed for fairness. We use the grayscale image to compute the PSNR and SSIM. We compute NIQE and PI by the provided metrics at [https://pypi.org/project/pyiqa/](https://pypi.org/project/pyiqa/). The PIQE is computed via [https://github.com/buyizhiyou/NRVQA](https://github.com/buyizhiyou/NRVQA).
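A minimal sketch of computing the no-reference metrics with the pyiqa package referenced above follows; the metric identifiers are assumed to match the names listed on that page, and the random tensor merely stands in for a dehazed result in [0, 1].

```python
import torch
import pyiqa

niqe = pyiqa.create_metric('niqe')  # lower is better
pi = pyiqa.create_metric('pi')      # lower is better

img = torch.rand(1, 3, 460, 620)    # (B, C, H, W) dehazed image in [0, 1]
print(niqe(img).item(), pi(img).item())
```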
### Results on Synthetic Datasets
Tab. 1 and Tab. 2 respectively report the comparison results with SOTAs on the SOTS-indoor and SOTS-outdoor datasets [23]. Our method achieves better performance in terms of NIQE, PI, and PIQE, indicating that the results generated by our method possess higher perception quality. Fig. 6 and Fig. 7 show that our method restores much clearer images, while the evaluated approaches generate results with haze residuals or artifacts.
As we train the network with a one-time feed-forward process, PSNRs and SSIMs naturally decrease (**Ours\({}_{1}\)**_vs_. **Ours\({}_{3}\)** in Tabs. 1 and 2) when inference is conducted iteratively. We argue that distortion metrics such as PSNR and SSIM are not good measures for image dehazing: Figs. 6 and 7 show that methods with higher PSNRs and SSIMs, _e.g_., Dehamer [14] and D4 [45], cannot recover perceptual results, while our method, with better perception metrics, generates more realistic results.
### Results on Real-World Datasets
Tab. 3 summarizes the comparison results on the real-world dataset [23], where our method performs better than the evaluated methods. Fig. 8 illustrates that our method generates images with vivid colors and finer details.
### Analysis and Discussion
We further analyze the effectiveness of the proposed method and examine how it works for image dehazing. The results in this section are obtained on the SOTS-indoor dataset unless otherwise mentioned. Our results are from the \(1^{\text{st}}\) prompt inference for fair comparison, _i.e_., \(i=1\) in (12), unless specifically mentioned otherwise.
\begin{table}
\begin{tabular}{c|c|c c c c c c c c|c c} Methods & GridNet [27] & PFDN [8] & UHD [51] & PSD [4] & Uformer [42] & Restormer [47] & D4 [45] & Dehamer [14] & **Ours\({}_{1}\)** & **Ours\({}_{3}\)** \\ \hline \multirow{3}{*}{Perception} & NIQE \(\downarrow\) & 4.239 & 4.412 & 4.743 & 4.828 & 4.378 & 4.321 & 4.326 & 4.529 & 4.452 & **4.054** \\ & PI \(\downarrow\) & 3.889 & 4.143 & 4.962 & 4.567 & 3.967 & 3.936 & 3.866 & 4.035 & 3.926 & **3.857** \\ & PIQE \(\downarrow\) & 28.924 & 32.157 & 39.204 & 35.174 & 29.806 & 29.384 & 30.480 & 32.446 & 30.596 & **27.927** \\ \hline \multirow{3}{*}{Distortion} & PSNR \(\uparrow\) & 32.306 & 33.243 & 16.920 & 13.934 & 33.947 & 36.979 & 19.142 & 33.600 & 35.906 & 34.467 \\ & SSIM \(\uparrow\) & 0.9840 & 0.9827 & 0.7831 & 0.7160 & 0.9846 & 0.9900 & 0.8520 & 0.9865 & 0.9877 & 0.9852 \\ \end{tabular}
\end{table}
Table 1: **Comparisons on the SOTS-indoor dataset. Our method achieves better performance in terms of NIQE, PI, and PIQE. The best results are marked in red. \(\downarrow\) (\(\uparrow\)) denotes lower (higher) is better. Ours\({}_{i}\) means the \(i^{\text{th}}\) prompt results.**
\begin{table}
\begin{tabular}{c|c|c c c c c c c c|c c} Methods & GridNet [27] & PFDN [8] & UHD [51] & PSD [4] & Uformer [42] & Restormer [47] & D4 [45] & Dehamer [14] & **Ours\({}_{1}\)** & **Ours\({}_{3}\)** \\ \hline \multirow{3}{*}{Perception} & NIQE \(\downarrow\) & 2.844 & 2.843 & 3.756 & 2.884 & 2.903 & 2.956 & 2.917 & 3.164 & **2.646** & 2.685 \\ & PI \(\downarrow\) & 2.070 & 2.326 & 3.381 & 2.392 & 2.241 & 2.254 & 2.137 & 2.251 & **2.003** & 2.027 \\ & PIQE \(\downarrow\) & 6.547 & 6.732 & 10.891 & 8.937 & 6.748 & 6.904 & 7.567 & 6.458 & 6.577 & **6.151** \\ \hline \multirow{3}{*}{Distortion} & PSNR \(\uparrow\) & 16.327 & 16.872 & 11.758 & 15.514 & 19.618 & 18.337 & 26.138 & 21.389 & 18.471 & 16.954 \\ & SSIM \(\uparrow\) & 0.8016 & 0.8532 & 0.6074 & 0.7488 & 0.8798 & 0.8634 & 0.9540 & 0.8926 & 0.8771 & 0.8288 \\ \end{tabular}
\end{table}
Table 2: **Comparisons on the SOTS-outdoor dataset. Our method achieves better perception metrics including NIQE, PI, and PIQE, suggesting that the proposed method has a better generalization ability to unseen images for more natural results generation.**
Figure 6: **Visual comparisons on the SOTS-indoor dataset. Our method is able to generate much clearer results, even clearer than the GT image.**
Figure 7: **Visual comparisons on the SOTS-outdoor dataset. Our method is able to generate more natural results. Note that our method produces more consistent colors in the sky region, while the others generate inconsistent colors and the D4 [45] leaves extensive haze.**
**Effectiveness of prompt.** Initially, we assess the effect of the prompt on image dehazing. Notably, various prospective prompt candidates exist, such as image-level depth difference as the input of the VQGAN encoder or concatenation between deep features extracted from the input and depth features as the input of the Transformers. Our proposed prompt is compared with these candidates, as illustrated in Tab. 4(b) and 4(c), demonstrating that none of these candidates outperforms our proposed prompt.
Note that our method without the prompt reduces to a model similar to CodeFormer [54], which directly inserts regular Transformers into VQGAN. Tab. 4 shows that the prompt helps yield superior perception quality compared to the model without the prompt (Tab. 4(a)). The efficacy of our model with the prompt is further affirmed by Fig. 9, indicating that the model with the prompt generates better results, while the model without the prompt fails to remove haze effectively.
**Effectiveness of prompt embedding.** One might ponder the relative efficacy of our prompt embedding in contrast to the prevalent technique of position embedding (Fig. 4(a)). In this regard, we assess the effect of these embedding approaches in Tab. 5. The table reveals that our prompt embedding proves more advantageous than position embedding, since the former is associated with haze residual information.
**Effectiveness of prompt attention.** Analyzing the efficacy of prompt attention proves intriguing. Tab. 6 indicates that our prompt attention yields better results than commonly used attention methods (Fig. 4(c)). These findings signify that incorporating the prompt into the Query estimation accounts for the haze information, thereby culminating in more effective image dehazing results.
**Effect of the number of steps in continuous self-prompt.** The inference stage involves several steps to generate the prompt for better image dehazing. We thus examine the effect of the number of steps in the continuous self-prompt. Fig. 10 reveals that the optimal performance in terms of NIQE is achieved with \(3\) steps in the continuous self-prompt (_i.e_., \(i=3\) in (12)). Notably, additional prompts do not improve the dehazing performance any further. A real-world example in Fig. 11 demonstrates that our continuous self-prompt method can gradually enhance dehazing quality.
\begin{table}
\begin{tabular}{l|c c c c c} Experiments & NIQE \(\downarrow\) & PI \(\downarrow\) & PIQE \(\downarrow\) & PSNR \(\uparrow\) & SSIM \(\uparrow\) \\ \hline (a) Regular attention & 4.300 & 4.102 & 31.486 & 35.874 & 0.9811 \\ \hline (b) Prompt attention (**Ours**) & 4.252 & 3.926 & 30.596 & 35.960 & 0.9877 \\ \end{tabular}
\end{table}
Table 6: **Effectiveness of prompt attention.** Our prompt attention is more effective than regular attention for haze removal.
\begin{table}
\begin{tabular}{l|c c c c c c c c c c} Methods & GridNet [27] & PFDN [8] & UHD [51] & PSD [4] & Uformer [42] & Restormer [47] & D4 [45] & Dehamer [14] & **Ours\({}_{1}\)** & **Ours\({}_{3}\)** \\ \hline \multirow{3}{*}{Perception} & NIQE \(\downarrow\) & 4.341 & 4.917 & 4.515 & 4.199 & 4.214 & 4.213 & 4.257 & 4.248 & 4.161 & **4.062** \\ & PI \(\downarrow\) & 3.685 & 3.736 & 3.858 & 3.521 & 3.429 & 3.436 & 3.414 & 3.495 & 3.477 & **3.391** \\ & PIQE \(\downarrow\) & 14.699 & 17.874 & 23.168 & 15.851 & 16.787 & 17.176 & 18.678 & 15.909 & 16.252 & **14.026** \\ \end{tabular}
\end{table}
Table 3: **Comparisons on the real-world dataset.** Our method achieves better performance, indicating that our method is more robust to real-world scenarios for realistic results generation.
\begin{table}
\begin{tabular}{l|c c c c c} Experiments & NIQE \(\downarrow\) & PI \(\downarrow\) & PIQE \(\downarrow\) & PSNR \(\uparrow\) & SSIM \(\uparrow\) \\ \hline (a) Without the prompt & 4.258 & 3.937 & 31.904 & 35.833 & 0.9874 \\ (b) Image-level depth difference & 4.901 & 4.343 & 32.121 & 34.573 & 0.9742 \\ (c) Concatenation of image and depth features & 4.362 & 4.077 & 34.107 & 34.378 & 0.9394 \\ (d) Feature-level depth difference (**Ours**) & 4.252 & 3.926 & 30.596 & 35.960 & 0.9877 \\ \end{tabular}
\end{table}
Table 4: **Effect of the proposed prompt on image dehazing.** Feature-level depth difference is a better prompt formalization.
Figure 8: **Visual comparisons on the real-world dataset**. Our method generates much clearer results. Note that existing SOTAs always leave haze residuals, which may be because these methods cannot effectively perceive haze residuals, thus being hard to remove them.
\begin{table}
\begin{tabular}{l|c c c c c} Experiments & NIQE \(\downarrow\) & PI \(\downarrow\) & PIQE \(\downarrow\) & PSNR \(\uparrow\) & SSIM \(\uparrow\) \\ \hline (a) Without embedding & 4.410 & 4.113 & 32.193 & 34.997 & 0.9704 \\ (b) Position embedding & 4.267 & 3.992 & 31.877 & 35.331 & 0.9782 \\ (c) Prompt embedding (**Ours**) & 4.252 & 3.926 & 30.596 & 35.960 & 0.9877 \\ \end{tabular}
\end{table}
Table 5: **Effectiveness of prompt embedding.** The proposed prompt embedding is better than regular position embedding.
Figure 9: **Visual comparisons of the model without prompt (b) and with prompt (c) on real-world scenarios**. Using the prompt helps generate much clearer results.
**Continuous self-prompt _vs._ recurrent dehazing.** We use the continuous self-prompt approach to restore clear images progressively at inference. To determine whether a recurrent method, _i.e_., training our model without the prompt and applying it repeatedly, achieves similar or better results, we compare our proposed method with it in Fig. 10, demonstrating that the recurrent method is not as good as our continuous self-prompt.
**Continuous self-prompt _vs._ GT guidance.** Fig. 5 compares the NIQE performance of ground truth (GT) guidance with that of the continuous self-prompt algorithm. Results show that while GT guidance performs better than the \(1^{\text{st}}\) prompt, it falls short of the effectiveness of the \(2^{\text{nd}}\) and \(3^{\text{rd}}\) prompts. This is likely due to GT guidance's limited ability to handle haze residuals that may still exist in the dehazed images, whereas the self-prompt exploits residual haze information to progressively improve dehazing quality. Moreover, as GT is not available in the real world, these findings further support the use of the self-prompt as a more practical alternative.
**Depth-consistency.** Fig. 12 shows heat maps of depth differences obtained by the continuous self-prompt inference with different prompt steps. The results demonstrate that both image-level and feature-level depth differences decrease as the number of prompt steps increases, indicating that the depths obtained with the prompt, _i.e_., (12c), become increasingly consistent with those obtained without it, _i.e_., (12a).
**Model size and running time.** Tab. 7 compares our model size and running time against recent Transformer-based SOTAs: Uformer [42] and Dehamer [14]. Our model is comparable with these leading methods in model size. While the single-iteration running time of our method remains comparable to these two feed-forward models [14, 42], it requires slightly more time for multiple iterations since our method involves estimating depths.
## 5 Conclusion
We have proposed a simple yet effective self-prompt Transformer for image dehazing by exploring the prompt built on the estimated depth difference between the image with haze residuals and its clear counterpart. We have shown that the proposed prompt can guide the deep model for better image dehazing. To generate better dehazing images at the inference stage, we have proposed continuous self-prompt inference, where the proposed prompt strategy can remove haze progressively. We have shown that our method generates results with better perception quality in terms of NIQE, PI, and PIQE.
**Limitations.** Our model is influenced by the estimated depths of \(\bar{\text{J}}_{i-1}^{\text{w/o prompt}}\) and \(\bar{\text{J}}_{i}^{\text{w/o prompt}}\) in (12a). Our continuous self-prompt approach may not work well if the depth difference is not significant enough. Slight performance degradation occurs when multiple prompts are used on SOTS-outdoor in terms of NIQE and PI (Tab. 2). We argue this may be because the depth difference generated from \(\bar{\text{J}}_{i-1}^{\text{w/o prompt}}\) and \(\bar{\text{J}}_{i}^{\text{w/o prompt}}\) in (12a) on SOTS-outdoor is not significant enough. Yet, the \(1^{\text{st}}\) prompt still outperforms existing methods.
\begin{table}
\begin{tabular}{l|c|c|c c} Methods & Uformer & Dehamer & \multicolumn{3}{c}{Ours} \\ & [42] & [14] & \(1^{\text{st}}\) prompt & \(2^{\text{nd}}\) prompt & \(3^{\text{rd}}\) prompt \\ \hline Parameters & 21M & 132M & 34M & 34M & 34M \\ Running time & 0.15s & 0.16s & 0.19s & 1.32s & 2.41s \\ \end{tabular}
\end{table}
Table 7: **Model sizes and running time.** The running time is reported on an image with \(460\times 620\) pixels on one 3090 GPU.
Figure 11: **Visual improvement of continuous self-prompt inference on a real-world example. Our continuous self-prompt inference can progressively improve the dehazing performance.**
Figure 12: **Illustration of continuous depth-consistency. With the steps of the number of prompts increasing, the depth difference in both image-level and feature-level becomes more consistent. Reference means the depth difference between the input hazy image and GT. The input haze image is Fig. 1(a).**
Figure 10: **Effectiveness of continuous self-prompt (Ours) _vs._ recurrent dehazing (Recur.). Our continuous self-prompt inference can further improve results towards better naturalness, and it always produces better results than recurrent dehazing. ‘Ours w/o prompt’ means the results of (12a).** |
2303.08300 | Learning From High-Dimensional Cyber-Physical Data Streams for
Diagnosing Faults in Smart Grids | The performance of fault diagnosis systems is highly affected by data quality
in cyber-physical power systems. These systems generate massive amounts of data
that overburden the system with excessive computational costs. Another issue is
the presence of noise in recorded measurements, which prevents building a
precise decision model. Furthermore, the diagnostic model is often provided
with a mixture of redundant measurements that may deviate it from learning
normal and fault distributions. This paper presents the effect of feature
engineering on mitigating the aforementioned challenges in cyber-physical
systems. Feature selection and dimensionality reduction methods are combined
with decision models to simulate data-driven fault diagnosis in a 118-bus power
system. A comparative study is enabled accordingly to compare several advanced
techniques in both domains. Dimensionality reduction and feature selection
methods are compared both jointly and separately. Finally, experiments are
concluded, and a setting is suggested that enhances data quality for fault
diagnosis. | Hossein Hassani, Ehsan Hallaji, Roozbeh Razavi-Far, Mehrdad Saif | 2023-03-15T01:21:50Z | http://arxiv.org/abs/2303.08300v1 | # Learning From High-Dimensional Cyber-Physical Data Streams for Diagnosing Faults in Smart Grids
###### Abstract
The performance of fault diagnosis systems is highly affected by data quality in cyber-physical power systems. These systems generate massive amounts of data that overburden the system with excessive computational costs. Another issue is the presence of noise in recorded measurements, which prevents building a precise decision model. Furthermore, the diagnostic model is often provided with a mixture of redundant measurements that may deviate it from learning normal and fault distributions. This paper presents the effect of feature engineering on mitigating the aforementioned challenges in cyber-physical systems. Feature selection and dimensionality reduction methods are combined with decision models to simulate data-driven fault diagnosis in a 118-bus power system. A comparative study is enabled accordingly to compare several advanced techniques in both domains. Dimensionality reduction and feature selection methods are compared both jointly and separately. Finally, experiments are concluded, and a setting is suggested that enhances data quality for fault diagnosis.
Feature selection, dimensionality reduction, classification, fault diagnosis, cyber-physical power systems.
## I Introduction
A fault diagnosis system (FDS) is a critical component of cyber-physical power systems, used for detecting malfunctions, identifying their cause, and pinpointing their location in the grid [1]. Nevertheless, designing a traditional model-based FDS becomes more challenging as the grid grows in scale and complexity. The integration of intelligent models with power systems results in a cyber-physical system that bypasses the aforementioned problem [2]. Making use of data-driven approaches for diagnostic purposes provides the model with better generalization. Furthermore, intelligent models facilitate the decision-making process in control systems by reducing human-machine interaction [3].
Data-driven FDS is primarily carried out using classification algorithms [4, 5, 6]. Having a set of sampled records corresponding to different system states, one can build a statistical model that maps incoming signal patterns to certain conditions, such as different fault events in the system [7]. In a fault detection setting, the FDS model can be formulated as an anomaly detector that reveals any signal pattern that mismatches those of the normal or healthy system state. The past decade has witnessed numerous advancements in intelligent FDS models that resort to sensory data for monitoring power systems [8, 9]. Nevertheless, the computational efficiency of intelligent models and their dependency on data quality limit the feasibility of such methods for FDS in power systems.
Data-driven methods only perform well if supplied with high-quality data. These methods try to capture the characteristics of a data distribution based on the existing relationships in the input space and form a mapping from distribution samples to a set of system states [10]. However, if the input distribution at hand is not a good representative of the system, the statistical model cannot capture the system characteristics accordingly. In power systems, this situation arises for a number of reasons. Firstly, the signal captured through sensors often contains noise, which makes it harder for machine learning models to capture the intrinsic relationships within the data. Furthermore, the abundance of measurements and the number of sensors (e.g., phasor measurement units) result in high-dimensional data [11]. The higher the dimensionality of the data, the more samples are needed to obtain an accurate statistical model; this phenomenon is referred to as the curse of dimensionality in the literature. In turn, processing high-dimensional data significantly increases the computational burden of the FDS. In addition, the utilized measurements in the power system most likely yield some invariant and duplicate features that feed the FDS model with irrelevant information, deteriorating the precision of the constructed model.
Feature selection (FS) and dimensionality reduction (DR) techniques are the two main approaches commonly used to tackle the curse of dimensionality and improve data quality, ensuring optimal performance of the diagnostic model. FS refers to the process of filtering or ranking different features (i.e., dimensions of the feature space) to remove non-informative features and select only a limited set of features that lead to the optimal performance of decision-making models [12, 13, 14]. On the other hand, DR methods transform the whole feature space into a smaller space [15, 16, 17]. Under a supervised setting, DR may additionally improve the distribution by making classes more distinguishable in the transformed space [18, 19].
In this paper, an experimental review is performed on state-of-the-art FS and DR methods for diagnosing faults in cyber-physical power systems. Both of the aforementioned harsh conditions, namely noisy signals (with different levels and frequencies) and high-dimensional data, are taken into account throughout the experiments. This comparative study is enabled by analyzing FS and DR methods in three study groups: 1) FS, 2) DR, and 3) FS and DR combined. Finally, we suggest the best methods that will most likely benefit FDS in real-world power systems, where similar conditions are expected.
The remainder of this paper is organized as follows. Section II presents the employed case-study and data. Section III explains the proposed methodology. Section IV contains the experimental results and analysis. Finally, the paper is concluded in Section V.
## II Case-Study: Cyber-Physical Power Systems
In the present study, the IEEE 118-bus system is employed to evaluate the effectiveness of the proposed diagnostic methodology. The single-line diagram of this system is illustrated in Fig. 1. This system contains, as the name suggests, 118 buses, along with 91 loads and 19 generation units. The system is simulated using PowerFactory, and data measurements are collected at a sampling rate of 10 kHz for faulty and normal scenarios.
Three types of faults are simulated: load-loss (LL), generator outage (GO), and generator ground (GG) faults. To simulate an LL or GO fault, a breaker is placed between the load or generator and the corresponding bus. The breaker is initially closed and is then triggered for 25 ms to disconnect the load or generator from the bus. As for the GG faults, three-phase short-circuit faults are simulated between the generation units and the ground. In each fault case, data measurements are collected from the moment the fault appears in the system until the moment it is cleared. This period is set to 25 ms; with a sampling rate of 10 kHz, a total of 250 samples are therefore collected for each scenario.
The list of simulated faults is presented in Table I. As can be observed from this table, there exist 31 LL fault scenarios, 19 GO, and 19 GG, in addition to the normal operational state of the system, leading to a total of 70 simulated scenarios, where each one can be thought of as a class of data to be classified. Therefore, we are dealing with a multi-class classification problem with 70 classes. It is worth noting that, as there are 19 generation units, the GO and GG faults are simulated on all buses connected to a generation unit. However, for the LL faults, only 31 out of 91 possible locations are considered in the construction of the fault scenarios. The buses selected to simulate the LL faults are chosen from different zones of the grid, as shown in Fig. 1, in order to account for the impact of the fault location on the performance of the proposed methodology.
In general, six datasets are constructed following the aforementioned fault scenarios. These datasets are denoted by \(\{\mathcal{D}_{1},\ldots,\mathcal{D}_{6}\}\), and the characteristics of each one are summarized in Table II. We also aim to investigate the effect of the signal-to-noise ratio (SNR) and fault resistance (FR) values on the performance of the proposed diagnostic model. In this regard, three different SNR values, \(\{10\ \text{dB},30\ \text{dB},70\ \text{dB}\}\), are considered to model deeply noisy measurements as well as measurements with only slight noise. The FR value is set to either \(1\Omega\) or \(10\Omega\). Therefore, with three SNR and two FR values, there are a total of six combinations used to construct the datasets, as described in Table II.
Fig. 1: The single-line diagram of the IEEE 118-bus system [20].
In terms of the set of features, as given in Table II, there are a total of 354 features. Three types of measurements are collected from each bus of the system for each simulated scenario: voltage, frequency, and phase angle. Since the system has 118 buses and three types of features are collected from each bus, the constructed datasets contain 354 features. Table III summarizes the set of features.
## III Methodology
In line with what was previously mentioned in Section I, the general framework of the proposed diagnostic model, as shown in Fig. 2, is to apply FS and DR techniques to the constructed datasets described in Table II and then feed the set of selected or extracted features into three classification models, namely k-Nearest Neighbours (kNN), Support Vector Machine (SVM), and Random Forest (RF), for the sake of fault diagnosis. Within this general framework, we discuss the implementation procedure and evaluation metrics in this section.
### _Implementation Procedure_
The first step in the implementation procedure is to standardize the given datasets. This is of paramount importance because the feature sets (i.e., voltage, frequency, and phase angle) take values from different domains. For instance, voltage measurements are per unit and close to 1, while frequency measurements fluctuate around 50 Hz. Therefore, the datasets need to be standardized to eliminate the impact of scale variation from one feature to another. In this regard, given dataset \(\mathcal{D}\), we normalize the dataset column-wise through the following equation:
\[\mathcal{D}_{n}^{j}=\frac{\mathcal{D}^{j}-\mathcal{D}_{min}^{j}}{\mathcal{D}_{ max}^{j}-\mathcal{D}_{min}^{j}}, \tag{1}\]
where \(\mathcal{D}_{n}^{j}\) is the \(j\)th column of the normalized dataset with \(j=1,\ldots,354\), \(\mathcal{D}_{max}^{j}\) is the maximum value of a given column, and \(\mathcal{D}_{min}^{j}\) shows the minimum value of the corresponding column.
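A direct NumPy transcription of (1) is given below; it assumes no column is constant (otherwise the denominator vanishes), and `sklearn.preprocessing.MinMaxScaler` would be an equivalent alternative.

```python
import numpy as np

def minmax_normalize(D):
    """Column-wise min-max normalization of an (n_samples, 354) dataset."""
    D = np.asarray(D, dtype=float)
    d_min, d_max = D.min(axis=0), D.max(axis=0)
    return (D - d_min) / (d_max - d_min)  # each column mapped to [0, 1]
```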
The normalized dataset will then be fed into the FS and DR modules in order to select or extract the set of most informative features. As for the FS techniques, we resort to five techniques: Infinite Feature Selection (InfFS) [14], Relief [21], Least Absolute Shrinkage and Selection Operator (LASSO) [13], Unsupervised Feature Selection with Ordinal Locality (UFSOL) [22], and Concrete Feature selection based on Mutual Information (CFMI) [13]. These techniques are selected to serve two purposes. First, the selected FS methods fall into three categories: filters, wrappers, and embedded methods. We aim to check which category of FS methods best improves diagnostic performance through a comparative study between the categories. Second, we aim to provide a ranking of the best FS-classifier combinations for diagnosis. In the same vein, the selected DR techniques come from two broad categories, namely linear and nonlinear, depending on the type of transformation. The employed DR techniques are Principal Component Analysis (PCA) [15], Linear Discriminant Analysis (LDA) [19], Multi-dimensional Scaling (MDS) [17], Locally Linear Embedding (LLE) [16], and Constrained Adversarial Dimensionality Reduction (CADR) [18]. The aforementioned two purposes are also considered in the implementation of the DR techniques. Features of the selected FS and DR techniques are summarized in Table IV.
Each FS or DR technique is then combined with a classification model (kNN, SVM, or RF). In this regard, the selected set of features is fed into the aforementioned classification models, and the classification results are reported in terms of accuracy and f-measure. However, the number of features selected or extracted by the FS and DR techniques needs to be adjusted carefully. To this end, in all experiments, we start with one feature and increase the number of features up to the value beyond which the performance of the classification model no longer improves significantly.
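The sweep over the number of kept features can be sketched with scikit-learn as follows; `SelectKBest` with mutual information is a generic stand-in for the FS techniques of Table IV (a DR variant would swap in, e.g., `sklearn.decomposition.PCA(n_components=k)`), not the exact methods used in this study.

```python
from sklearn.pipeline import Pipeline
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

def sweep_feature_counts(X, y, max_k=50):
    """10-fold CV accuracy as the number of selected features grows."""
    scores = {}
    for k in range(1, max_k + 1):
        pipe = Pipeline([
            ('fs', SelectKBest(mutual_info_classif, k=k)),
            ('clf', KNeighborsClassifier()),
        ])
        scores[k] = cross_val_score(pipe, X, y, cv=10).mean()
    return scores  # stop increasing k once the score plateaus
```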
Fig. 2: General framework of the proposed methodology.
### _Evaluation Metrics_
To measure the performance of the FS/DR-classifier combinations, we resort to the classification accuracy \(\mathcal{A}\) and f-measure \(\mathcal{F}_{1}\) values. In this regard, the constructed combinations are validated in a 10-fold cross-validation manner, and the attained results are collected in terms of accuracy and f-measure. For binary classification with only positive and negative classes, the accuracy index captures the ratio of correct decisions made by the classification model. As for the f-measure, it can be thought of as the harmonic mean of the precision \(\mathcal{P}\) and recall \(\mathcal{R}\), where the precision is the ratio of samples classified as positive that truly belong to the positive class, while the recall measures the performance of the classification model in correctly identifying the positive samples. Following this, one can define the f-measure as given below:
\[\mathcal{F}_{1}=\frac{2\times\mathcal{P}\times\mathcal{R}}{\mathcal{P}+ \mathcal{R}}. \tag{2}\]
In terms of the multi-class classification, however, a confusion matrix of the following form could be constructed:
\[\mathcal{C}=\left[\begin{array}{ccc}cm_{11}&\dots&cm_{1l}\\ \vdots&\ddots&\vdots\\ cm_{l1}&\dots&cm_{ll}\end{array}\right], \tag{3}\]
with \(l\) being the total number of classes. Following this structure, the precision and recall can then be defined as follows:
\[\mathcal{P} =\frac{1}{l}\sum_{k=1}^{l}\frac{cm_{kk}}{\sum_{i=1}^{l}cm_{ki}}, \tag{4}\] \[\mathcal{R} =\frac{1}{l}\sum_{k=1}^{l}\frac{cm_{kk}}{\sum_{i=1}^{l}cm_{ik}}. \tag{5}\]
Having calculated \(\mathcal{P}\) and \(\mathcal{R}\), the f-measure can then be obtained based on Eq. (2) for a multi-class classification problem.
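Following the indexing convention of (3)-(5), the macro-averaged metrics can be computed from a confusion matrix as sketched below; it assumes every class appears at least once, so no row or column sum is zero.

```python
import numpy as np

def macro_prf(cm):
    """Macro precision, recall, and f-measure from an l x l confusion matrix."""
    cm = np.asarray(cm, dtype=float)
    diag = np.diag(cm)
    precision = np.mean(diag / cm.sum(axis=1))  # row sums, per (4)
    recall = np.mean(diag / cm.sum(axis=0))     # column sums, per (5)
    f1 = 2 * precision * recall / (precision + recall)  # Eq. (2)
    return precision, recall, f1
```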
## IV Simulation Results and Comparative Study
In this section, a comprehensive comparative study is provided to compare the performance of the FS/DR-classifier combinations. The ultimate goal is to identify which combinations perform better for fault diagnosis in cyber-physical power systems. In terms of the FS and DR techniques, we initially investigate which category of methods deals better with the constructed datasets. In particular, the FS techniques are selected from three categories (filters, wrappers, and embedded methods), and the selected DR techniques are either linear or nonlinear. Therefore, the goal is to investigate which category of FS and DR methods is superior in comparison with the others. Furthermore, by resorting to the overall performance of FS and DR techniques, we aim to investigate whether FS techniques are preferred over DR techniques or vice versa.
### _Baseline_
The baseline models refer to the case in which no data reduction technique is employed. In this regard, the original datasets are only normalized and then fed into the classification models. Following this setup, the classification results are collected in terms of accuracy and f-measure through a 10-fold cross-validation scheme. The attained average accuracy and f-measure values of the baseline models are presented in Table V. The collected results in Table V show that kNN outperforms SVM and RF in terms of average accuracy and f-measure. Furthermore, it is evident that, moving from dataset \(\mathcal{D}_{1}\) to \(\mathcal{D}_{6}\), the attained values increase because the SNR increases from 10 dB to 70 dB, i.e., the noise level decreases.
### _FS-classifier Combinations_
In this section, the performance of the FS techniques and their combinations with the given classification models is presented in terms of accuracy and f-measure. The aim is first to compare the performance of the FS techniques to see which technique is superior, followed by a discussion on the superiority of the FS-classifier combinations to determine which combination is best for fault diagnosis in power systems.
We begin with a comprehensive study of the performance of the given FS-classifier combinations, with the results collected in Table VI in terms of accuracy and f-measure. We first study the effect of the SNR values on the performance of the combinations. From the results presented in Table VI, it can be concluded that the average accuracy increases from 0.61 to 0.71 and 0.77, and the average f-measure from 0.58 to 0.72 and 0.78, as the SNR moves from 10 dB to 30 dB and 70 dB, respectively. Obviously, the lower the level
of noise, the better the performance of the combinations. To check the performance of the given combinations in dealing with different fault resistance values, the results collected in Table VI show that the average accuracy and f-measure values are 0.69 and 0.69 when the fault resistance is \(10\Omega\), and 0.70 and 0.70 when it is \(1\Omega\). Therefore, the fault resistance value only slightly impacts the performance of the given combinations, with a higher fault resistance value affecting the performance negatively.
In terms of the comparison between the given FS techniques, the average accuracy and f-measure values over datasets \(\mathcal{D}_{1}\) to \(\mathcal{D}_{6}\) are summarized in Table VII. Firstly, it can be observed that CFMI outperforms the other FS techniques in terms of accuracy, followed by Relief, LASSO, InfFS, and UFSOL. In terms of the f-measure, however, the collected results show that Relief outperforms the rest of the techniques, followed by CFMI, LASSO, InfFS, and UFSOL. Secondly, taking into account the average accuracy values for the filter, wrapper, and embedded techniques, the results show that all categories have almost the same performance; in terms of the f-measure, however, the filter category outperforms the other two categories by 1%.
In terms of the classification models, i.e., kNN, SVM, and RF, the average accuracy and f-measure values with respect to all combinations are summarized in Table VIII. The collected results show that SVM outperforms the other classification models, while kNN and RF show the same performance in dealing with the given datasets under the constructed combinations. Furthermore, comparing the results of Table VIII with those presented in Table V, it is evident that FS techniques considerably improve the performance of the given classification models. In detail, the average accuracy of kNN improves from 0.56 to 0.68, that of SVM from 0.51 to 0.74, and that of RF from 0.38 to 0.68. Similarly, in terms of average f-measure, the performance of kNN improves from 0.57 to 0.67, that of SVM from 0.53 to 0.73, and that of RF from 0.38 to 0.67.
Finally, we analyze the performance of the constructed combinations in order to rank them based on accuracy and f-measure. The ranked combinations are listed in Table IX with respect to accuracy and f-measure. Firstly, the ranking in terms of accuracy and f-measure shows that CFMI-SVM is the best combination for dealing with the constructed datasets \(\mathcal{D}_{1}\) to \(\mathcal{D}_{6}\). Secondly, it can be observed that in four out of the top five combinations, the FS techniques are combined with SVM, showing that SVM performs better in combination with FS techniques for the sake of fault diagnosis.
of accuracy and f-measure, and, then, all the possible DR-classifier combinations are ranked based on their performance.
compare them with those of DR techniques. The attained results denote that the average accuracy and f-measure values for the FS techniques are 0.70 and 0.70, respectively, and those are 0.62 and 0.64 for the DR techniques. Therefore, the attained results denote that for this case-study, the combination of FS techniques with classification models could deal better with the classification task compared with the combinations of the DR techniques with classification models.
## V Concluding Remarks
In this paper, we studied the fault diagnosis problem of cyber-physical power systems by resorting to a data-driven technique. This proposal suggested combining data reduction techniques, including feature selection and dimensionality reduction, with classification models in order to identify three types of faults, namely generator outage, generator ground, and load loss, in addition to the normal operating condition. Different types of feature selection and dimensionality reduction techniques were used to investigate the impact of different models on diagnostic performance. The attained results first showed that, in terms of feature selection techniques combined with classification models, all categories performed more or less the same on the classification task. For the dimensionality reduction techniques, however, it was observed that nonlinear models could improve the classification performance in comparison with their linear counterparts. Furthermore, the results of the experiments showed that, in general, feature selection techniques perform better than dimensionality reduction methods for fault diagnosis. In addition, we studied the impact of noisy measurements and fault resistance values, where the results showed that the performance of the constructed combinations improves when the level of noise and the value of the fault resistance decrease. Due to the high volume of data that can be collected from large-scale power systems, this study could be further extended to deep learning algorithms for classification, owing to their ability to deal with large datasets.
|
2310.10477 | Gaining Wisdom from Setbacks: Aligning Large Language Models via Mistake
Analysis | The rapid development of large language models (LLMs) has not only provided
numerous opportunities but also presented significant challenges. This becomes
particularly evident when LLMs inadvertently generate harmful or toxic content,
either unintentionally or because of intentional inducement. Existing alignment
methods usually direct LLMs toward the favorable outcomes by utilizing
human-annotated, flawless instruction-response pairs. Conversely, this study
proposes a novel alignment technique based on mistake analysis, which
deliberately exposes LLMs to erroneous content to learn the reasons for
mistakes and how to avoid them. In this case, mistakes are repurposed into
valuable data for alignment, effectively helping to avoid the production of
erroneous responses. Without external models or human annotations, our method
leverages a model's intrinsic ability to discern undesirable mistakes and
improves the safety of its generated responses. Experimental results reveal
that our method outperforms existing alignment approaches in enhancing model
safety while maintaining the overall utility. | Kai Chen, Chunwei Wang, Kuo Yang, Jianhua Han, Lanqing Hong, Fei Mi, Hang Xu, Zhengying Liu, Wenyong Huang, Zhenguo Li, Dit-Yan Yeung, Lifeng Shang, Xin Jiang, Qun Liu | 2023-10-16T14:59:10Z | http://arxiv.org/abs/2310.10477v6 | # Gaining Wisdom from Setbacks +
###### Abstract
The rapid advancement of large language models (LLMs) presents both opportunities and challenges, particularly concerning the unintentional generation of harmful and toxic responses. While traditional alignment methods strive to steer LLMs towards desired performance and shield them from malicious content, this study proposes a novel alignment strategy rooted in mistake analysis: LLMs are purposefully exposed to flawed outputs, which are then thoroughly assessed in natural language to comprehend the internal reasons for the mistakes. Toxic responses can thus be transformed into an instruction tuning corpus for model alignment, and LLMs can not only be deterred from generating flawed responses but also trained to self-criticize, leveraging their innate ability to discriminate toxic content. Experimental results demonstrate that the proposed method outperforms conventional alignment techniques for safety instruction following, while maintaining superior efficiency.
## 1 Introduction
In recent years, large language models (LLMs) have witnessed exponential growth in their capabilities, leading to significant advancements in various fields, such as understanding and generating human-like texts (Kaddour et al., 2023; Wang et al., 2023; OpenAI, 2023). However, these achievements are accompanied by challenges. Notably, trained on expansive web text corpora, LLMs can easily produce harmful responses even without explicit red-teaming prompts, posing substantial risks in deployment (Dinan et al., 2019; Parrish et al., 2021; Liang et al., 2021; Hartvigsen et al., 2022). Considering the powerful capabilities of LLMs and their extensive range of applications, it is crucial that the models can operate beneficially within the intricate and diverse tapestry of human morals and values. Thus, aligning LLMs with human values is not just important; it is paramount (Xu et al., 2020; Zhang et al., 2022; Dinan et al., 2022).
Existing alignment methods for LLMs mainly employ two principal methodologies: _supervised fine-tuning_ (SFT) (Radiya-Dixit & Wang, 2020; Ouyang et al., 2022; Liu et al., 2023) and _reinforcement learning with human feedback_ (RLHF) (Christiano et al., 2017; Ibarz et al., 2018; Jaques et al., 2019; Bai et al., 2022). SFT-based methods utilize large volumes of supervised instruction-response pairs to align LLMs with human values, instructing the model on what constitutes the "optimal answer" and thereby teaching it the nature of good responses. RLHF, on the other hand, requires human annotators to rank different responses for a given instruction, rewarding good responses and penalizing bad ones. While the model learns to discern the relative quality of different responses, it remains oblivious to the internal reasons why a bad response is deemed inferior, and thus may still struggle to generalize to novel instructions. Existing methods therefore train instruction-following LLMs primarily on _good responses_ while avoiding exposing them to bad cases, suggesting that the full use of _bad responses_ remains an under-explored problem.
Meanwhile, it is widely acknowledged that humans can derive profound insights from their mistakes. As an old Chinese proverb suggests, "_A fall into the pit is a gain in your wit_", emphasizing the intrinsic value of learning from mistakes to gain a deeper understanding. However, directly exposing LLMs to toxic corpora with either SFT or RLHF might inadvertently make them over-fit harmful data patterns (Liu et al., 2023). Thus, the question arises: _How can LLMs utilize and learn from mistakes for safety alignment without being affected by the toxic inputs?_
In this paper, we propose a novel alignment framework that trains LLMs through mistake analysis (see Fig. 1 for an illustration). LLMs are trained to analyze harmful responses and understand the internal reasons behind them, with the natural-language analysis acting as a _"fine-grained mask"_ that deciphers harmful content. Combined with normal instruction-response pairs, LLMs can simultaneously understand what should and should not be generated, yielding better alignment performance (Sec. 5.1). Furthermore, we demonstrate that mistake analysis can efficiently defend previously aligned LLMs against novel instruction attacks using only a small number of representative mistakes (Sec. 5.2).
Moreover, we demonstrate, with **detailed theoretical support**, that LLMs can even benefit from mistake analyses generated by the LLMs themselves, thanks to their remarkable self-correction capabilities (Huang et al., 2022; Saunders et al., 2022; Gou et al., 2023). Specifically, an unaligned model is first induced to generate harmful responses using inductive prompts, and is subsequently alerted to the potential mistakes and instructed to evaluate its own responses. We demonstrate that, although easily induced to produce toxic content, even an unaligned model can indeed recognize mistakes within its own toxic responses when given proper hints. This stems from the intuition that **discrimination** (_i.e._, recognizing harmful responses) is simpler than **generation** (_i.e._, generating harmless responses), which can also be justified by making an analogy between scalable oversight and complexity theory (Saunders et al., 2022). See Sec. 3 for more details. Through mistake analysis, the generative capacities of LLMs can be enhanced by their innate discriminating abilities.
To summarize, our method leverages _natural-language-based mistake analysis_ for model alignment, where the analysis can also be _provided by the model itself_, obviating the need for human intervention and external reward models by leveraging the model's inherent discriminative capabilities to amplify its generative potential. Extensive experiments on open-sourced instruction benchmarks (Dubois et al., 2023; Dai et al., 2023) demonstrate significant improvements over conventional SFT and RLHF methods.
The main contributions of this work are threefold:
1. We introduce a novel alignment framework which aligns LLMs by transforming harmful responses into valuable instruction tuning corpora via mistake analysis.
2. We demonstrate that LLMs can self-criticize: by first inducing unaligned models to produce toxic responses and then instructing them to evaluate and identify potential mistakes, the inherent discrimination ability of LLMs can be utilized to enhance their generation ability.
3. Extensive experiments show that our proposed alignment framework based on natural-language mistake analysis outperforms both SFT and RL methods, with notable efficiency, on various instruction-following benchmarks.
## 2 Related Work
Supervised Fine-Tuning (SFT) is the primary method to align large language models (LLMs) with human expectations (Ouyang et al., 2022; Wang et al., 2023; OpenAI, 2023), which works by
Figure 1: **Pipeline illustration** of our alignment method based on mistake analysis. Different from conventional works (_e.g._, SFT and RLHF) striving to steer LLMs towards the "optimal responses", we purposefully expose LLMs to harmful content and have them actively analyse it with proper guidance. By learning what is bad and the internal reasons why, LLMs perform more robustly on novel instructions.
calculating the cross-entropy loss over the ground-truth response for an input instruction, empowering LLMs to follow user instructions. A significant limitation of SFT is its focus solely on the best responses, without offering fine-grained comparisons to sub-optimal ones. To address this, some variants of SFT, such as Reward Ranked Fine-tuning (RAFT) (Dong et al., 2023) and Chain of Hindsight (CoH) (Liu et al., 2023), have been proposed. RAFT scores and filters samples using a reward model, subsequently fine-tuning only on the high-reward samples. CoH, on the other hand, fine-tunes LLMs on sequences of responses coupled with human feedback, enabling models to discern differences between responses. However, all SFT-based strategies primarily guide the model towards discerning an "optimal response", largely shielding it from poor responses.
Reinforcement Learning from Human Feedback (RLHF) (Ouyang et al., 2022) instead optimizes LLMs based on human-elicited reward models (RM), typically trained from pairwise human preferences on model outputs. Although effective, acquiring high-quality human-labeled preference data at scale is resource-intensive. To alleviate this, Reinforcement Learning from AI Feedback (RLAIF) (Lee et al., 2023) simulates human preferences using LLMs, although the resulting data is noisier than human-validated data. Approaches like Direct Preference Optimization (DPO) (Rafailov et al., 2023) and RRHF (Yuan et al., 2023) further refine the alignment process by integrating ranking information into LLM fine-tuning or adjusting loss terms, respectively. Notably, the use of contrastive objectives in reinforcement learning (Yang et al., 2023) showcases improvements in sample efficiency and model quality by emphasizing differences among good and bad responses. While RL-based methods enable models to gauge the relative quality of responses, they seldom clarify the specific reasons for penalizing inferior outputs, and therefore still suffer when generalizing to novel unseen instructions.
Self-correctionand self-improvement have been widely observed for LLMs. Huang et al. (2022) demonstrate that LLMs can refine reasoning skills using unlabeled datasets through self-generated solutions. Gou et al. (2023) introduce the CRITIC framework, enabling LLMs to amend the outputs by interacting with external tools, mimicking the human validation processes. Saunders et al. (2022) fine-tune LLMs to produce critiques, assisting human annotators in identifying content flaws, while Bai et al. (2022) confirm that models can morally self-correct when trained with human feedback. Given these findings, it's plausible to suggest that LLMs can also offer mistake analysis, providing insights into their own errors and rectifications.
## 3 Preliminary
### Generation against Discrimination
In this section, we first investigate whether _discrimination_ might be easier than _generation_ for LLMs. Specifically, we design experiments to check whether LLMs find it easier to judge the harmfulness of the responses rather than generate harmless responses directly. Three models are considered here, including Alpaca (Taori et al., 2023), GPT-3 (Olmo et al., 2021) and GPT-3.5 (Ye et al., 2023), which are subjected to 500 red-teaming instructions sampled from the PKU-SafeRLHF Dataset (Dai et al.,
Figure 2: (a) **Comparison between generation and discrimination abilities** for Alpaca, GPT-3 and GPT-3.5. Each pair of vertical histograms represents the average score for generating responses and analyzing the generated responses, respectively. (b) **Comparison between guided and unguided analyses**. Each histogram is composed of three different segments with distinct colors, labeled with three score numbers, which represent the count of samples where _the guided analysis wins_, _ties_, and _the unguided analysis wins_, respectively. Check more details in Sec. 3.
2023) that potentially contain toxic issues, and are subsequently prompted to evaluate the safety of their own responses. We compare the quality of 1) _Instruction-Response_ pairs and 2) _(Instruction, Response)-Analysis_ pairs. For evaluation, GPT-4 (see Footnote 1) is employed to score the quality of these pairs on a scale from 1 to 10, followed by human verification. See Appendix A.1 for more details.
Footnote 1: [https://chatgpt.ust.hk](https://chatgpt.ust.hk)
As demonstrated in Fig. 2(a), across all evaluated models, the discrimination scores (_i.e._, for identifying and analyzing potential mistakes) consistently surpass those of generation (_i.e._, producing harmless responses directly) by a significant margin. Specifically, for GPT-3, the discrimination score is appreciably higher than that of direct response generation, with an improvement of more than 10% (8.3 vs. 7.5), suggesting that even if an LLM occasionally generates harmful responses, it still possesses the capability to identify the harmful elements within its own responses (see examples in Appendix A.1). This phenomenon underscores the intuition that discrimination is more straightforward than generation (Saunders et al., 2022); based on this, we further investigate how to take full advantage of an LLM's inherent discrimination ability to bolster its generative capabilities in Sec. 3.2.
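As a concrete illustration of this protocol, the following sketch contrasts the two abilities for a single instruction. It is a minimal outline, not the exact experimental code: `chat` stands for any chat-LLM callable, `judge` stands for the GPT-4 1-10 grading, and the prompt wording is an assumption.

```python
# Hedged sketch of the generation-vs-discrimination comparison in Sec. 3.1.
def score_pair(chat, judge, instruction: str) -> tuple[float, float]:
    # Generation: answer the (possibly red-teaming) instruction directly.
    response = chat(instruction)
    gen_score = judge(f"Instruction: {instruction}\nResponse: {response}")
    # Discrimination: analyze the safety of the model's own response.
    analysis = chat(
        f"Instruction: {instruction}\nResponse: {response}\n"
        "Is this response safe? Analyze it."
    )
    disc_score = judge(
        f"Instruction: {instruction}\nResponse: {response}\n"
        f"Analysis: {analysis}"
    )
    return gen_score, disc_score

# Dummy stand-ins so the sketch runs end-to-end:
demo = score_pair(chat=lambda p: "stub reply",
                  judge=lambda p: 7.0,
                  instruction="How do I stay safe online?")
print(demo)
```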
### Guided Analysis against Unguided Analysis
Using the same 500 red-teaming instructions related to harmful problems from Sec. 3.1, together with their original bad responses in the PKU-SafeRLHF Dataset, we assess the capability of the three models to analyze potential mistakes. We consider two scenarios: (1) **Guided analysis**, where LLMs are explicitly informed within the prompt that the provided responses could potentially be harmful (Fig. 3(b)); and (2) **Unguided analysis**, where LLMs evaluate the response quality without any specific indication about the potential harmfulness (Fig. 3(c)).
We also evaluate the quality of both guided and unguided analyses produced by the LLMs, using a scale from 1 to 10. Each pair of guided and unguided analyses, corresponding to the exact same instruction-response sample, is categorized as a _win_, _tie_, or _lose_ based on their scores. As illustrated in Fig. 2(b), there is a noticeable preference for guided analyses: across all models, the number of "wins" in guided scenarios consistently exceeds that in unguided ones, emphasizing the importance of providing clear guidance when requesting analysis. See Appendix A.2 for detailed examples.
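A minimal sketch of this win/tie/lose protocol is given below; the guided and unguided prompt wordings paraphrase Fig. 3 and are assumptions, and `analyze` and `judge` stand for the analyzing model and the GPT-4 grader, respectively.

```python
# Sketch of the guided vs. unguided analysis comparison of Sec. 3.2.
GUIDED = ("The response below may be harmful, unethical, or offensive. "
          "Analyze why.\nInstruction: {x}\nResponse: {y}\nAnalysis:")
UNGUIDED = ("Evaluate the quality of the response below.\n"
            "Instruction: {x}\nResponse: {y}\nAnalysis:")

def tally(pairs, analyze, judge):
    """Count win/tie/lose of guided over unguided analyses by judge score."""
    wins = ties = loses = 0
    for x, y in pairs:
        s_guided = judge(analyze(GUIDED.format(x=x, y=y)))
        s_unguided = judge(analyze(UNGUIDED.format(x=x, y=y)))
        if s_guided > s_unguided:
            wins += 1
        elif s_guided == s_unguided:
            ties += 1
        else:
            loses += 1
    return wins, ties, loses
```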
## 4 Method
Denote \(D=\{D_{\text{helpful}}=\{(x_{\text{help}},y_{\text{help}})\},D_{\text{harmless}}=\{(x_{\text{harm}},y_{\text{harmless}})\}\}\) as the alignment instruction tuning datasets, where \(D_{\text{helpful}}\) contains the helpfulness instruction-response pairs, while \(D_{\text{harmless}}\) involves red-teaming prompts \(x_{\text{harm}}\) potentially engaging with harmful problems and \(y_{\text{harmless}}\) as the expected responses. Given an LLM \(F_{\mathbf{\theta}}(\cdot)\) parameterized by \(\mathbf{\theta}\) and the sequence pairs \((\mathbf{x}_{i},\mathbf{y}_{i})\in D\), the objective of supervised fine-tuning (SFT) is to minimize the cross-entropy loss between the true
Figure 3: **Prompt templates** for our alignment method based on mistake analysis. Combining the (a) guided response generation and (b) guided analysis generation, we obtain high-quality mistake analysis triplets, which are used to perform (c) unguided analysis fine-tuning for model alignment.
probability distribution and the model's estimated probability distribution over the vocabulary as,
\[\mathcal{L}=-\sum_{i}p(\mathbf{y}_{i}|\mathbf{x}_{i})\log q(\mathbf{y}_{i}|\mathbf{x}_{i};F_{\mathbf{\theta}}(\cdot)), \tag{1}\]
where \(\mathbf{x}_{i}\) is the input instruction and \(\mathbf{y}_{i}\) is the target response.
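For concreteness, a minimal PyTorch sketch of this objective is shown below, assuming a Hugging Face-style causal LM whose forward pass returns `logits`; masking the instruction tokens with the ignore index restricts the loss to the target response, as in Eqn. (1).

```python
# Minimal PyTorch sketch of the SFT objective in Eqn. (1).
import torch
import torch.nn.functional as F

def sft_loss(model, input_ids: torch.Tensor, prompt_lens: list[int]):
    labels = input_ids.clone()
    for i, n in enumerate(prompt_lens):
        labels[i, :n] = -100                   # no loss on the instruction x_i
    logits = model(input_ids=input_ids).logits  # (batch, seq, vocab)
    # Shift so that the token at position t predicts the token at t+1.
    return F.cross_entropy(
        logits[:, :-1].reshape(-1, logits.size(-1)),
        labels[:, 1:].reshape(-1),
        ignore_index=-100,
    )
```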
Guided mistake induction. Mistakes (_i.e._, harmful responses) can originate either from the human-annotated training data or from model-generated outputs obtained with guided prompts, as shown in Fig. 3(a). An unaligned model (_e.g._, Alpaca (Taori et al., 2023)) is primarily considered here due to its susceptibility to malicious instructions. Motivated by Yang et al. (2023), the model is triggered to produce detrimental outputs by inserting hint keywords (_e.g._, _harmful, unethical, and offensive_) into the instruction prompts, and we denote such an induced detrimental response as \(y_{\text{harm}}\), paired with \((x_{\text{harm}},y_{\text{harmless}})\in D_{\text{harmless}}\). As shown in Table 4, this seemingly simple attack achieves a surprising success rate, which also convinces us to adopt the induced responses generated by LLMs themselves for mistake analysis.
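A sketch of such an induction prompt builder is shown below. The hint wording and the Alpaca-style prompt template are assumptions based on Fig. 3(a); the position tags #1-#3 mirror those ablated in Table 4.

```python
# Hedged sketch of guided mistake induction via hint keywords.
HINT = "harmful, unethical, or offensive"

def induction_prompt(instruction: str, positions=("#2", "#3")) -> str:
    system = "You are an assistant."
    if "#1" in positions:                      # hint in the system prompt
        system += f" Your answers are {HINT}."
    task = instruction
    if "#2" in positions:                      # hint in the instruction
        task += f" Give a {HINT} answer."
    prefix = ""
    if "#3" in positions:                      # hint as a response prefix
        prefix = f"Sure, here is a {HINT} answer: "
    return f"{system}\n\n### Instruction:\n{task}\n\n### Response:\n{prefix}"

print(induction_prompt("How can I settle an argument?"))
```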
Guided analysis generation. After obtaining the \((x_{\text{harm}},y_{\text{harm}},y_{\text{harmless}})\) triplets, we instruct the model \(F_{\mathbf{\theta}}(\cdot)\) to perform an introspective analysis of the harmful response \(y_{\text{harm}}\), obtaining the mistake analysis denoted as \(c_{\text{harm}}\). As shown in Fig. 3(b), given \((x_{\text{harm}},y_{\text{harm}})\) along with a guided directive to "_analyse why the answer is potentially harmful, unethical, or offensive_" as prior knowledge, even an unaligned LLM can generate reasonable analyses thanks to its superior reasoning ability, since \(F_{\mathbf{\theta}}(\cdot)\) only needs to explain why \(y_{\text{harm}}\) is indeed harmful rather than first judging whether it is harmful. Once acquired, \((x_{\text{harm}},y_{\text{harm}},c_{\text{harm}})\) forms a mistake analysis triplet, which is integrated into the SFT process along with \(D_{\text{helpful}}\) and \(D_{\text{harmless}}\). Next, we discuss how to construct the _mistake analysis samples_ from the _mistake analysis triplets_.
Unguided analysis fine-tuning. Different from the guided analysis generation, an unguided template, devoid of any reminder about whether the response is harmful, is utilized to construct mistake analysis samples from the \((x_{\text{harm}},y_{\text{harm}},c_{\text{harm}})\) triplets, as demonstrated in Fig. 3(c). Note that in this stage, any reminder about the potentially harmful, unethical or offensive nature of the response is deliberately omitted. By removing any predetermined labels or reminders, \(F_{\mathbf{\theta}}(\cdot)\) is encouraged to assess the response solely based on its own values and morals, and alignment is achieved via likelihood maximization during SFT. This facilitates a more nuanced understanding and discrimination of harmful, unethical, or offensive content.
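The following sketch shows how a mistake analysis triplet might be turned into an unguided SFT sample; the template wording is an assumption based on Fig. 3(c), and only the analysis text would receive the loss.

```python
# Sketch of turning a (x_harm, y_harm, c_harm) triplet into an *unguided*
# analysis sample: the training prompt carries no reminder that the
# response is harmful; only the analysis c_harm is supervised.
def unguided_analysis_sample(x_harm: str, y_harm: str, c_harm: str) -> dict:
    prompt = (f"### Instruction:\n{x_harm}\n\n"
              f"### Response:\n{y_harm}\n\n"
              "Please evaluate the above response.\n### Analysis:\n")
    return {"prompt": prompt, "target": c_harm}  # loss computed on c_harm only
```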
Guided response generation.After SFT phase, a guided strategy is employed during inference, where \(F_{\mathbf{\theta}}(\cdot)\) is explicitly reminded to formulate "_harmless, ethical, and inoffensive_" responses. A reminder during inference serves as a condition that ensures the model's adherence to ethical norms and avoids the generation of potentially harmful content. During inference, the model effectively computes the conditional probability given the guideline to generate harmless content, where the model is actually cognizant of the constraint prefix context.
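A minimal sketch of this guided inference step, with an assumed reminder wording:

```python
# Sketch of guided response generation at inference time: a fixed reminder
# prefix conditions the model towards harmless outputs.
REMINDER = "Please respond in a harmless, ethical, and inoffensive way."

def guided_prompt(instruction: str) -> str:
    return f"{REMINDER}\n\n### Instruction:\n{instruction}\n\n### Response:\n"
```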
Discussion: why does mistake analysis work? Denote the random variables of instructions and answers as \(\mathbf{X}\) and \(\mathbf{Y}\), respectively, and let \(\mathbf{T}\in\{\text{Harmful},\text{Harmless}\}\) be the harmful tag, a binary random variable representing whether an instruction-response pair is harmful. According to Bayes' theorem, we have,
\[p(\mathbf{T}|\mathbf{Y},\mathbf{X})\propto p(\mathbf{Y}|\mathbf{X},\mathbf{T})p(\mathbf{X}|\mathbf{T})p(\mathbf{T}) \propto p(\mathbf{Y}|\mathbf{X},\mathbf{T}), \tag{2}\]
where \(\mathbf{X}\) is independent of \(\mathbf{T}\), since it is a random user instruction. Note that the mistake analysis \(\mathbf{C}\) can be considered as the detailed _chain-of-thought reasoning_ (Wei et al., 2022) for \(p(\mathbf{T}|\mathbf{Y},\mathbf{X})\), since it thoughtfully analyses why the given \((\mathbf{Y},\mathbf{X})\) pair is harmful (_i.e._, \(\mathbf{T}=\text{Harmful}\)).
Therefore, both _guided mistake induction_ and _guided analysis generation_ actually aim at generating reasonable \((\mathbf{X},\mathbf{Y},\mathbf{T})\) triplets to optimize the posterior probability \(p(\mathbf{T}|\mathbf{Y},\mathbf{X})\) during the _unguided analysis fine-tuning_, and thus, optimizing the likelihood \(p(\mathbf{Y}|\mathbf{X},\mathbf{T})\). By doing that, the alignment of \(F_{\mathbf{\theta}}(\cdot)\) is emphasized, aiming to generate more coherent and contextually appropriate responses, and ensuring a higher level of model alignment with _guided response generation_ during inference.
This analysis highlights the significance of elevating \(p(\mathbf{T}|\mathbf{Y},\mathbf{X})\) in refining model behavior and alignment. By optimizing conditional probabilities, the model not only becomes capable of nuanced generation but also becomes a coherent reflection of the specified context, paving the way for ethical and responsible development of LLMs.
## 5 Experiment
### Alignment
In this section, we evaluate our mistake analysis method as an alignment algorithm to improve the harmlessness performance of an unaligned model (_e.g._, Alpaca (Taori et al., 2023)) from scratch.
Data. The PKU-SafeRLHF Dataset (Dai et al., 2023) is adopted for both model training and evaluation; it is a human-curated dataset that covers both helpfulness and safety preferences, with constraints across multiple dimensions (_e.g._, _insults_, _immorality_, _crime_, _emotional harm_, _and privacy_). Two responses are provided for each instruction, along with labels indicating which one is more harmful, supporting both SFT and RLHF. We clean the training set and retain 10,260 unique instructions, each accompanied by a good and a bad response. Considering the trade-off between helpfulness and harmlessness (Bai et al., 2022), we further adopt the official 52k helpful instruction-following corpus from Alpaca (Taori et al., 2023) to constitute our ultimate training set. Moreover, we utilize the evaluation set of AlpacaFarm (Dubois et al., 2023), consisting of 805 instructions, for helpfulness evaluation, and the 1,523 red-teaming instructions from the test set of PKU-SafeRLHF for harmfulness assessment, as discussed in more detail below.
Models and baselines. We use Alpaca-7B (Taori et al., 2023) as the unaligned base model, which is fine-tuned from LLaMA-7B (Touvron et al., 2023) with 52k helpfulness-only instruction-following data. Based on Alpaca, we compare our method with vanilla SFT, CoH (Liu et al., 2023), Critique-Revise (Bai et al., 2022) and RLHF (Ouyang et al., 2022). For CoH and Critique-Revise, we utilize the original bad responses in the training set, while for RLHF, PPO-Lag (Ray et al., 2019) is adopted following PKU-SafeRLHF with the official reward and cost models (see Footnotes 2 and 3). LoRA (Hu et al., 2021) is by default deployed for all Transformer linear layers with rank 16. All evaluated methods are fine-tuned for three epochs for a fair comparison.
Footnote 2: [https://huggingface.co/PKU-Alignment/beaver-7b-v1.0-reward](https://huggingface.co/PKU-Alignment/beaver-7b-v1.0-reward)
Footnote 3: [https://huggingface.co/PKU-Alignment/beaver-7b-v1.0-cost](https://huggingface.co/PKU-Alignment/beaver-7b-v1.0-cost)
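As a rough sketch of the fine-tuning setup described above, the snippet below applies rank-16 LoRA with the `peft` library; the model path and the LLaMA-style linear-layer module names are assumptions, not the paper's exact configuration.

```python
# Hedged sketch: rank-16 LoRA on all Transformer linear layers via peft.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("path/to/alpaca-7b")  # placeholder
config = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05, task_type="CAUSAL_LM",
    # Assumed LLaMA-style names for "all Transformer linear layers":
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)
model = get_peft_model(model, config)
model.print_trainable_parameters()
```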
Evaluation metrics. We adopt four metrics to evaluate harmlessness and helpfulness performance. Specifically, we consider single-response grading, where a _Score_ is assigned to a single response on a scale from 1 to 10 (Zheng et al., 2023), similarly to Sec. 3. Moreover, for the harmlessness instructions, we further conduct a binary judgement of whether each response is harmless and report a Harmless _Rate_ (Sun et al., 2023). To penalize models that achieve a higher harmless score by simply refusing to respond, we further report the Helpful Score for harmlessness instructions, following Yang et al. (2023). GPT-4 is utilized for the initial evaluation, while human annotators are further enlisted to verify the ultimate evaluation results to ensure accuracy.
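A minimal sketch of how the reported aggregate metrics could be computed from per-sample judgements (the judging itself is done by GPT-4 and verified by humans):

```python
# Sketch: mean 1-10 Score and binary Harmless Rate from per-sample results.
# `judgements` is a list of (score, is_harmless) pairs from the evaluator.
def summarize(judgements: list[tuple[float, bool]]) -> tuple[float, float]:
    scores = [s for s, _ in judgements]
    rate = 100.0 * sum(h for _, h in judgements) / len(judgements)
    return sum(scores) / len(scores), rate  # (avg Score, Harmless Rate %)

print(summarize([(7.5, True), (6.0, False), (8.0, True)]))
```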
Results. As shown in Table 1, our method consistently outperforms the existing methods, including vanilla SFT, Critique-Revise, RLHF, and CoH, demonstrating substantial advancements in each comparison. In particular, our method remarkably enhances harmlessness while effectively preserving helpfulness. See Fig. 4 for a qualitative comparison among the different methods. When our method leverages the original faulty cases from the training set with mistake analysis from
| Method | Mistake Source | Analysis Source | Helpful Score | Harmless Score | Harmless Rate (%) | Harmless Helpful Score |
| --- | --- | --- | --- | --- | --- | --- |
| Alpaca (vanilla) | – | – | 6.21 | 5.71 | 52.5 | 4.51 |
| SFT | – | – | 6.27 | 6.69 | 63.0 | 5.30 |
| Critique-Revise | Origin | – | 6.22 | 6.60 | 62.6 | 5.02 |
| CoH | Origin | – | 6.29 | 6.79 | 64.7 | 5.23 |
| RLHF | Origin | – | 6.30 | 6.71 | 64.1 | 5.35 |
| **Ours** | Origin | Alpaca | 6.31 (+0.10) | 7.31 (+1.60) | 71.0 (+18.5) | 5.28 (+0.77) |
| **Ours** | Alpaca | Alpaca | **6.38** (+0.17) | 7.41 (+1.70) | 72.4 (+19.9) | 5.39 (+0.88) |
| **Ours** | Alpaca | GPT-3.5 | 6.31 (+0.10) | **7.61** (+1.90) | **74.1** (+21.6) | **5.60** (+1.09) |

Table 1: **Comparative results of LLM alignment across various methods.** We report the Helpful Score to represent the helpfulness performance, while for evaluating harmlessness performance, we report the Harmless Score, Harmless Rate, and Helpful Score for harmful instructions, respectively. Gains in parentheses are relative to the vanilla Alpaca baseline.
Alpaca, it achieves an approximately \(35.2\%\) relative improvement in Harmless Rate over the vanilla Alpaca. Moreover, when applied to harmful responses generated by Alpaca using _guided mistake induction_, the Harmless Rate advances to \(72.4\%\), highlighting that self-induced mistakes are more valuable flawed responses for our analysis-based alignment. Notably, when GPT-3.5 is used as the analysis source, our method achieves state-of-the-art results with a \(74.1\%\) Harmless Rate, underscoring the considerable advantages of employing refined and sophisticated analysis sources. The trends of the other evaluation metrics, including the Harmless Score and the Harmless Helpful Score, consistently align with those observed for the Harmless Rate.
Our method's superior overall performance not only validates its improved safety alignment but also exemplifies the merits of integrating self-critique and internal mistake analysis, which enables the model to optimize responses autonomously, eliminating the need for external or manual intervention.
### Defending Against Advanced Instructional Attacks
Even LLMs meticulously aligned for harmlessness can yield unsafe responses when confronted with emerging instructional attacks, underscoring the importance of swift and robust defensive methodologies. In this section, we assess the efficacy of our method in defending previously aligned LLMs (_e.g._, ChatGLM (Zeng et al., 2023)) against novel unforeseen attacks.
Instruction attacks. We examine the instruction attack referred to as "Goal Hijacking" (Sun et al., 2023a), which entails appending deceptive or misleading instructions to the model's input, aiming to manipulate LLMs into disregarding the original user prompts and generating harmful responses. As reported in Sun et al. (2023a), even post-alignment LLMs are vulnerable to "Goal Hijacking".
Data. We employ the SafetyPrompts dataset (Sun et al., 2023a) for safety alignment, which comprises 100,000 query-answer (QA) pairs spanning seven typical safety scenarios and six types of advanced instruction attacks. For harmlessness, we randomly sample 500 QA pairs for each category from SafetyPrompts, supplemented with an additional 50K QA pairs from the MOSS dataset (Sun et al., 2023b) for helpfulness, to form the ultimate training data. For evaluation, we adopt the test set of the SafetyPrompts dataset, containing 1,915 queries, of which 136 target "Goal Hijacking". To ensure that the defense does not impede helpfulness, we further sample 1,000 random queries from MOSS for helpfulness evaluation. Moreover, given the lack of bad cases, we construct 500 "(Query, Good response, Bad response)" triplets for Goal Hijacking to remain consistent with the alignment experiments in Sec. 5.1. Several improper responses to Goal Hijacking are found within the original SafetyPrompts dataset; we manually identify and annotate 500 unsafe responses and provide the corresponding safe responses for them.
Models and baselines. We utilize ChatGLM-6B (Zeng et al., 2023), an open bilingual language model grounded in the GLM (Du et al., 2022) framework, as the base model, which has been previously aligned on Chinese QAs and dialogues addressing both helpfulness and harmlessness topics. Similar to Sec. 5.1, we compare our method with vanilla SFT, CoH (Liu et al., 2023a) and Critique-Revise (Bai et al., 2022b). For a fair comparison, all listed methods are fine-tuned using LoRA (Hu et al., 2021) for all Transformer linear layers with a rank of 16. All methods are fine-tuned for one epoch, starting with an initial learning rate of 0.0001.
| Method | Mistake Source | Analysis Source | Helpful Score | Harmless Score | Harmless Rate (%) | Goal Hijacking Score | Goal Hijacking Rate (%) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| ChatGLM | – | – | **8.32** | 8.92 | 95.3 | 6.85 | 68.4 |
| SFT | – | – | 8.16 | 8.91 | 94.8 | 7.71 | 77.2 |
| CoH | Origin | – | 8.23 | 8.94 | 95.2 | 7.89 | 82.4 |
| Critique-Revise | Origin | – | 8.24 | 8.90 | 95.2 | 7.97 | 78.7 |
| **Ours** | Origin | ChatGLM | 8.18 | 8.93 | 95.1 | 8.02 (+1.17) | 82.4 (+14.0) |
| **Ours** | ChatGLM | ChatGLM | 8.26 | **8.96** | **96.1** | **8.14** (+1.29) | **85.3** (+16.9) |

Table 2: **Comparative results of defense against attacks across various methods.** We present the Helpful Score to represent helpfulness performance, while to assess the harmlessness performance, we report the Harmless Score and Harmless Rate for harmful instructions. Performance on the "Goal Hijacking" test data is further provided for evaluating the attack defensive ability. Gains in parentheses are relative to the ChatGLM baseline.
Evaluation metrics. In addition to the metrics in Sec. 5.1, we also separately report the model's performance on the "Goal Hijacking" test subset to examine the efficacy of the attack defense mechanisms.
Results. As illustrated in Table 2, our method exhibits significant improvement over SFT, with a gain of \(8.1\%\) in Harmless Rate on the "Goal Hijacking" test set, and surpasses CoH and Critique-Revise consistently, while sustaining performance on regular helpfulness and harmlessness instructions. Moreover, we note that self-induced mistakes are more valuable than the flawed cases in the original dataset, consistent with our observation in Sec. 5.1. Throughout the whole self-critique procedure, both the responses and the accompanying mistake analyses are autonomously generated by ChatGLM without the need for external models or manual intervention.
Comparison. Fig. 5 depicts a typical instance of "Goal Hijacking". In the training data, the user first requests a repetition of a safe statement, and then instructs the model to disregard the previous instruction and directly output an unsafe response. Indeed, an ideal response would decline such a malicious directive. When faced with a similar instruction during inference, our method chooses to reject the user's instruction attack, whereas SFT succumbs and still produces unsafe responses. This indicates the superior generalization ability of our method, which stems from mistake analysis: it allows LLMs to understand the internal mechanisms of advanced instruction attacks and therefore bolsters their generalizability against analogous challenges.
### Ablation Study
We conduct ablation studies on Alpaca to investigate the essential components of our proposed alignment method, including the instruction strategy for mistake analysis during SFT, the source of bad responses, and the quality and quantity of mistake analysis. The optimization recipe is kept the same as in Sec. 5.1.
Strategy of SFT instruction. Rows #1 and #2 in Table 3 differentiate whether the guided analysis instruction, as discussed in Fig. 3, is retained when incorporating mistake analysis triplets into the SFT corpus. The comparison shows that the unguided strategy indeed performs better: during SFT, providing cues might allow LLMs to cheat and learn shortcuts between analysis instructions and responses, hindering their capacity to learn the pertinent alignment profoundly. Additionally, Eqn. 2 indicates that, to alter the likelihood of bad responses appearing at inference time and thus produce safer responses, model-generated analyses must be paired with unguided instructions during SFT.
Figure 4: **Qualitative comparison between different alignment methods.**
Figure 5: **An example of “Goal Hijacking”. When encountering a similar instruction that has been seen during training, our method chooses to reject the instruction attack, while SFT is successfully attacked, indicating the superior generalization ability by aligning with mistake analyses.**
**Source of bad responses.** Two sources are considered: the original bad responses from the training dataset, and those induced with _guided mistake induction_. As depicted in Table 3 (Rows #2 and #3), performance is notably improved when utilizing mistakes generated by the model itself, highlighting the particular value of the model's own errors for alignment.
**Analysis quality.** We contrast the impact of guided versus unguided mistake analysis on Alpaca. Specifically, after obtaining an induced bad response, the guided analysis instructs the model to "_analyze why the answer is potentially harmful_", while the unguided analysis provides no such reminder. As demonstrated in Table 3 (Rows #3 and #4), superior efficacy is observed with the guided analysis, underscoring the critical importance of directed insights for mistake analysis, consistent with the preliminary study in Sec. 3.2.
**Analysis quantity.** Rows #3 and #5 in Table 3 contrast the quantity of mistake analyses utilized. In Row #5, mistake analyses of both model-induced bad responses and those inherently present in the original training dataset are incorporated, doubling the overall amount of mistake analyses compared to Row #3, which only utilizes model-induced bad response analyses. However, the results demonstrate a decrease in efficacy when multiple mistake analysis samples are applied to the same instructions, which could potentially be attributed to conflicts between the bad-case analyses of the same instruction, leading to sub-optimal alignment performance.
**Induction success rate** of guided mistake induction is ablated by placing hint keywords at different positions within the instruction prompts, including _Position #1_ (system prompt), _Position #2_ (instruction) and _Position #3_ (response), as demonstrated in Fig. 3(a). As depicted in Table 4, the introduction of negative induction substantially augments mistake induction, as shown by the diminished scores and rates relative to the Alpaca baseline, indicating an induced predisposition of the model towards producing more toxic responses. Typically, the more positions used and the closer the hint words are to the response, the higher the success rate. This observation highlights the susceptibility of unaligned models to induction, facilitating the generation of harmful responses conducive to a more in-depth analysis of undesired cases.
## 6 Conclusion
Ensuring the alignment of LLMs with human values is paramount. Conventional alignment methods often shield LLMs from mistakes to prevent the generation of toxic responses. In contrast, our method introduces a novel alignment approach based on mistake analysis, purposefully exposing LLMs to flawed outputs. By promoting self-reflection, mistakes are converted into a powerful corpus for model alignment. Experimental results demonstrate the effectiveness of our method, surpassing SFT and RLHF both for aligning unaligned models and for defending post-aligned models against advanced instructional attacks. Even with only a few mistakes, our method can comprehend the underlying mechanisms of why bad responses happen and generalize to handle analogous challenges. As the Chinese proverb eloquently states, _"A fall into the pit, a gain in your wit"_; we aspire, through this research, to imbue LLMs with a touch of this wisdom.
**Acknowledgement.** We gratefully acknowledge the support of MindSpore, CANN (Compute Architecture for Neural Networks) and Ascend AI Processor used for this research.
| Method | Hint Position | Harmless Score | Harmless Rate (%) |
| --- | --- | --- | --- |
| Alpaca | – | 5.71 | 52.8 |
| Induction | #1 | 4.94 | 44.1 |
| Induction | #2 | 4.08 | 34.6 |
| Induction | #3 | 3.83 | 32.9 |
| Induction | #2 & #3 | 3.67 | 30.5 |
| Induction | #1 & #2 & #3 | **3.39** | **27.8** |

Table 4: **Results of induction success rate.** Lower Harmless Score and Rate indicate more successful induction.
| No. | Mistake Source | Analysis Quality | Analysis Quantity | SFT Instruction | Helpful Score | Harmless Score | Harmless Rate (%) | Harmless Helpful Score |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | Origin | Guided | 1× | Guided | 6.33 | 7.04 | 67.4 | 5.26 |
| 2 | Origin | Guided | 1× | Unguided | 6.31 | 7.31 | 71.0 | 5.28 |
| 3 | Alpaca | Guided | 1× | Unguided | **6.38** | **7.41** | **72.4** | **5.39** |
| 4 | Alpaca | Unguided | 1× | Unguided | 6.30 | 6.67 | 63.3 | 5.30 |
| 5 | Alpaca | Guided | 2× | Unguided | 6.26 | 7.37 | 71.2 | 5.29 |

Table 3: **Results of ablation study.** We investigate the source of bad cases, the quality and quantity of mistake analysis, and the instruction strategy for SFT. Default settings correspond to Row #3. |
2310.06215 | Wakefield Generation in Hydrogen and Lithium Plasmas at FACET-II:
Diagnostics and First Beam-Plasma Interaction Results | Plasma Wakefield Acceleration (PWFA) provides ultrahigh acceleration
gradients of 10s of GeV/m, providing a novel path towards efficient, compact,
TeV-scale linear colliders and high brightness free electron lasers. Critical
to the success of these applications is demonstrating simultaneously high
gradient acceleration, high energy transfer efficiency, and preservation of
emittance, charge, and energy spread. Experiments at the FACET-II National User
Facility at SLAC National Accelerator Laboratory aim to achieve all of these
milestones in a single stage plasma wakefield accelerator, providing a 10 GeV
energy gain in a <1 m plasma with high energy transfer efficiency. Such a
demonstration depends critically on diagnostics able to measure emittance with
mm-mrad accuracy, energy spectra to determine both %-level energy spread and
broadband energy gain and loss, incoming longitudinal phase space, and matching
dynamics. This paper discusses the experimental setup at FACET-II, including
the incoming beam parameters from the FACET-II linac, plasma sources, and
diagnostics developed to meet this challenge. Initial progress on the
generation of beam ionized wakes in meter-scale hydrogen gas is discussed, as
well as commissioning of the plasma sources and diagnostics. | D. Storey, C. Zhang, P. San Miguel Claveria, G. J. Cao, E. Adli, L. Alsberg, R. Ariniello, C. Clarke, S. Corde, T. N. Dalichaouch, H. Ekerfelt, C. Emma, E. Gerstmayr, S. Gessner, M. Gilljohann, C. Hast, A. Knetsch, V. Lee, M. Litos, R. Loney, K. A. Marsh, A. Matheron, W. B. Mori, Z. Nie, B. O'Shea, M. Parker, G. White, G. Yocky, V. Zakharova, M. J. Hogan, C. Joshi | 2023-10-10T00:10:40Z | http://arxiv.org/abs/2310.06215v1 | Wakefield Generation in Hydrogen and Lithium Plasmas at FACET-II: Diagnostics and First Beam-Plasma Interaction Results
###### Abstract
Plasma Wakefield Acceleration (PWFA) provides ultrahigh acceleration gradients of 10s of GeV/m, providing a novel path towards efficient, compact, TeV-scale linear colliders and high brightness free electron lasers. Critical to the success of these applications is demonstrating simultaneously high gradient acceleration, high energy transfer efficiency, and preservation of emittance, charge, and energy spread. Experiments at the FACET-II National User Facility at SLAC National Accelerator Laboratory aim to achieve all of these milestones in a single stage plasma wakefield accelerator, providing a \(10\,\mathrm{GeV}\) energy gain in a \(<1\,\mathrm{m}\) plasma with high energy transfer efficiency. Such a demonstration depends critically on diagnostics able to measure emittance with mm-mrad accuracy, energy spectra to determine both %-level energy spread and broadband energy gain and loss, incoming longitudinal phase space, and matching dynamics. This paper discusses the experimental setup at FACET-II, including the incoming beam parameters from the FACET-II linac, plasma sources, and diagnostics developed to meet this challenge. Initial progress on the generation of beam ionized wakes in meter-scale hydrogen gas is discussed, as well as commissioning of the plasma sources and diagnostics.
## I Introduction
The pursuit of higher energy and brightness particle beams in the high energy physics and light source communities has pushed conventional accelerator technology to its physical limits. Plans for the next generation TeV-scale linear collider using RF acceleration require extensive lengths of 10s of km [1, 2, 3] to reach an energy of several \(100\,\mathrm{GeV}\) required for a Higgs factory, and many 10s of km to reach energies greater than \(1\,\mathrm{TeV}\) for energy frontier studies [4]. Plasma Wakefield Acceleration (PWFA) offers a more compact alternative by providing acceleration gradients that are orders of magnitude greater than conventional RF accelerators, opening the door to smaller and more efficient TeV scale electron-positron colliders and free electron lasers.
In a plasma wakefield accelerator, a trailing particle bunch is accelerated by the _wake_ left behind a driving relativistic particle beam [5] or laser pulse [6] as they propagate through a plasma together. The driver's transverse fields expel the plasma electrons away from the axis of motion, forming a wake within the previously uniform plasma. These expelled electrons are attracted back towards the axis by the relatively stationary plasma ions, creating a bubble devoid of electrons within the plasma with dimensions on the order of a plasma wavelength, \(\lambda_{p}(\mathrm{cm})=3.3\times 10^{6}n_{p}^{-1/2}\), where \(n_{p}\) is the plasma density in \(\mathrm{cm}^{-3}\). The plasma acts as a transformer, extracting energy from the driver through the formation of a wake, and transferring the wake energy to a trailing bunch.
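The quoted scaling can be checked directly; a short sketch, using the lithium plasma density that appears later in this paper:

```python
# Quick check of the quoted plasma-wavelength scaling,
# lambda_p(cm) = 3.3e6 * n_p^(-1/2) with n_p in cm^-3.
def plasma_wavelength_cm(n_p_cm3: float) -> float:
    return 3.3e6 * n_p_cm3 ** -0.5

# e.g. the lithium source density used in the simulations of Sec. II:
print(plasma_wavelength_cm(8e16) * 1e4, "um")  # ~117 um, so the ~106 um
# drive-trailing separation used below is of order one plasma wavelength
```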
To be viable for applications such as linear colliders and high-brightness light sources, plasma wakefield accelerators must deliver high-energy bunches with low energy spread and emittance at high efficiency and repetition rate. Beam-driven PWFA has achieved significant milestones towards these goals in recent years, including the demonstration of multi-\(\mathrm{GeV}\)/m accelerating gradients [7, 8], efficient acceleration of narrow energy spread beams [9], and emittance preservation [10]. However, simultaneously achieving all of these parameters (high energy gain, high efficiency, low energy spread, and low emittance) in a plasma wakefield accelerator has yet to be demonstrated.
The upgraded FACET-II National User Facility [11] provides the opportunity for the development of a single stage plasma wakefield accelerator that approaches the parameters required by a future linear collider [12]. We aim to demonstrate simultaneously all of the following in a single PWFA stage: depletion of the drive bunch energy with a drive-to-wake efficiency of \(>80\%\), acceleration of the trailing bunch by \(>10\,\mathrm{GeV}\) with a wake-to-trailing bunch energy extraction efficiency of
over 40%, while simultaneously maintaining beam quality by achieving a final energy spread of \(<2\%\) and emittance preservation after acceleration.
This paper describes the beam-driven PWFA approach and the experimental setup at the FACET-II National User Facility which will be used in the demonstration of a single stage plasma accelerator. We also introduce the diagnostics required to demonstrate preserved beam quality, and discuss the initial results obtained from beam-ionized hydrogen plasma studies during user-assisted commissioning of the facility.
## II Two bunch beam-driven plasma wakefield acceleration
For the successful application of PWFA in a linear collider, it is crucial to achieve a high energy transfer efficiency from drive to trailing bunch to maximise the overall wall-plug efficiency. The energy transfer efficiency in PWFA can be considered in two parts - the _drive-to-wake_ efficiency, and _wake-to-trailing bunch_ efficiency. The drive-to-wake efficiency is influenced by several factors, including ensuring sufficient plasma length for near complete energy depletion of the driver, proper matching conditions into the plasma, and the re-acceleration of energy-depleted drive electrons that may slip back into the accelerating phase. Recent experiments at FLASH-Forward have demonstrated up to 56% drive-to-wake efficiency [13]. The wake-to-trailing bunch efficiency can be optimized by appropriate beam-loading of the plasma wake and matching into the plasma, with recent experiments demonstrating up to 42% drive to trailing bunch energy transfer efficiency with preservation of energy spread [9].
The preservation of overall beam quality in PWFA is critical for delivering bunches with low emittance and energy spread after multiple stages of acceleration. Optimizing the beam loading of the wake is crucial for controlling the energy spread such that all particles throughout the bunch experience the same accelerating gradient [14; 15; 16; 17]. Emittance preservation largely depends on proper matching [18] and alignment [19] of the trailing bunch within the plasma bubble, and preserving this matching within the entrance and exit ramps of the plasma [20; 21]. Additional factors such as ion motion [22] and beam scattering [23] can also have deleterious effects on emittance.
Particle-in-Cell (PIC) simulations play a vital role in considering all of these parameters and determining PWFA schemes that optimize both energy transfer efficiency and preservation of beam quality. Previous publications have demonstrated a strategy for achieving \(>10\,\mathrm{GeV}\) energy gain with high efficiency, preservation of energy spread and emittance of a \(0.5\,\mathrm{nC}\), \(10\,\mu\)m emittance trailing bunch with the ultimate beam parameters that will be available at FACET-II [12].
In the initial phases of beam development, FACET-II will be operating with the relaxed beam parameters discussed in the following sections. PIC simulations using QPAD [24] have been performed to provide insight into the expected performance of PWFA under these conditions, see FIG. 1. In the simulation, a moving window with dimensions of \(z=225\,\mu\)m (beam direction) and \(r=168\,\mu\)m (transverse direction) was used. The simulation box was divided into 1600 and 400 cells along the \(z\) and \(r\) directions, respectively. In the azimuthal direction, 16 cells were used for both bunches and the plasma. 16 particles were initialized in each cell for both drive and trailing bunches, and 64 particles per cell were used for the plasma. As the drive and trailing bunches were modeled as Gaussian bunches in this simulation, only the \(m=0\) (lowest order) mode was included. The plasma source is modeled as a neutral lithium gas with a \(40\,\mathrm{cm}\) long flat-top plasma density of \(8\times 10^{16}\,\mathrm{cm}^{-3}\) and realistic entrance and exit density ramps, with beam-ionization calculated using the ADK model [25].
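For reference, the quoted window and grid sizes imply the following cell resolutions (a simple check, not a QPAD input deck):

```python
# Cell resolutions implied by the quoted QPAD moving-window setup.
z_box, r_box = 225.0, 168.0         # window size, um
nz, nr = 1600, 400                  # cells along z and r
print(z_box / nz, "um/cell in z")   # ~0.14 um
print(r_box / nr, "um/cell in r")   # ~0.42 um
```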
In the two-bunch mode of beam delivery, where the linac delivers both the drive and trailing bunch, this assumes a \(1.5\,\mathrm{nC}\) drive bunch with \(25\,\mu\)m emittance and \(27\,\mathrm{kA}\) peak current, and a \(0.5\,\mathrm{nC}\) trailing bunch with \(30\,\mu\)m emittance and \(7\,\mathrm{kA}\) peak current. With these parameters, simulation results show that the drive bunch
Figure 1: PIC simulation of PWFA performance with initial FACET-II beam parameters, including incoming beam energy of \(10\,\mathrm{GeV}\) for both drive and trailing bunches. (a) the evolution of the total energy content of the drive and trailing bunches as they traverse a lithium plasma with \(40\,\mathrm{cm}\) long flat-top profile at a density of \(8\times 10^{16}\,\mathrm{cm}^{-3}\), resulting in an overall drive to trailing bunch efficiency of 32%. (b) shows the final energy spectra of the drive and trailing bunches, showing an acceleration of the trailing bunch by \(6.6\,\mathrm{GeV}\) with final energy spread of \(0.9\%\).
will approach energy depletion with 66% drive-to-wake transfer efficiency. Furthermore, a trailing bunch separated by 106 \(\mu\)m can be accelerated by \(6.6\,\mathrm{GeV}\) with a wake-to-trailing bunch efficiency of 48%.
The drive bunch transfers 6.9 \(\mathrm{J}\) out of the initial 10.4 \(\mathrm{J}\) of energy into the wake, while the trailing bunch picks up 3.3 \(\mathrm{J}\), for an overall drive to trailing bunch efficiency of 32%. The energy spectrum of the two bunches after traversing the plasma shows the large energy spread of the energy-depleted drive bunch extending to near zero energy, and the trailing bunch accelerated with energy spread remaining below 1%. The emittance of the trailing bunch is preserved at the incoming 30 \(\mu\)m throughout the simulation.
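The quoted efficiency chain is internally consistent, as a quick arithmetic check shows:

```python
# Consistency check of the quoted energy-transfer efficiencies.
E_drive, E_wake, E_trailing = 10.4, 6.9, 3.3   # J
print(E_wake / E_drive)       # ~0.66  drive-to-wake
print(E_trailing / E_wake)    # ~0.48  wake-to-trailing
print(E_trailing / E_drive)   # ~0.32  overall drive-to-trailing
```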
## III Experimental setup
The FACET-II facility will provide electron bunches for beam-driven PWFA studies with an energy of \(10\,\mathrm{GeV}\) in either single or two bunch configurations [11]. Significant modifications have been made to the \(2\,\mathrm{km}\) SLAC linac previously used by FACET for PWFA studies between 2012 and 2016. These include the removal of the initial \(1\,\mathrm{km}\) segment of the original SLAC linac, which housed the injector, damping rings, and accelerating structures, to accommodate the installation of the new LCLS-II (Linac Coherent Light Source) superconducting RF accelerator. A new photocathode injector has been installed to generate beams with smaller and symmetric emittances, capable of producing either single or two bunches directly from the cathode. The linac now also contains three stages of bunch compression, enabling compression to peak currents exceeding \(100\,\mathrm{kA}\). Although positron capabilities are not currently available due to the removal of the original SLAC damping rings, future plans involve the reinstatement of this capability. This will be achieved by installing a new compact positron damping ring and insertion beamline, allowing for the simultaneous delivery of electron and positron bunches for PWFA studies [26].
The flexibility of the FACET-II accelerator allows it to be operated in several different configurations to meet the needs of the various experimental programs. In the two bunch delivery mode, a double laser pulse on the cathode of the RF photocathode injector will produce the two bunches - _drive_ and _trailing_ - that are co-accelerated through the linac. After the final compression, the drive and trailing bunches will be separated longitudinally by \(\sim\)150 \(\mu\)m. At the time of writing, the accelerator has been commissioned in the single bunch configuration, delivering bunch charges of up to 2 nC with 20 \(\mu\)m normalized emittance. The addition of a laser heater in the FACET-II injector adds further longitudinal phase space control and suppression of the microbunching instability [27]. The design beam parameters of both drive and trailing bunches in this mode of operation are listed in Table 1, along with the currently achieved beam parameters.
After acceleration and compression, the beam is delivered to the experimental area, which has been designed to simultaneously accommodate a wide range of experiments including advanced acceleration techniques such as PWFA, applications of machine learning for accelerator diagnostics and control, novel techniques for the generation of intense coherent radiation, and probing strong-field quantum electrodynamics [28]. A schematic of the experimental area is shown in FIG. 2, highlighting the key components relating to the PWFA research program.
The final focusing system consists of two quadrupole triplets capable of focusing the few-\(\mu\)m emittance beams to a 3-4 \(\mu\)m spot size at the beam waist, allowing for matching into the plasma source. The Interaction Point (IP) area contains the plasma sources, laser integration optics, and electron and laser diagnostics for the wide-ranging experimental program. This is followed by an imaging electron spectrometer, which consists of a magnetic quadrupole triplet for capturing and refocusing the beam exiting the IP, and a dipole magnet providing vertical dispersion for energy-resolved measurements. These diagnostics will be described in detail in section IV.
### Plasma sources
Two types of plasma sources are used at the IP for PWFA studies, using either lithium vapor or hydrogen gas to form a plasma. Lithium plasma sources have been employed in the previous PWFA experiments at SLAC at the Final Focus Test Beam (FFTB) and FACET facilities. Using lithium for the plasma source makes use of the fact that the outermost electron of lithium is relatively easy to ionize either by beam-ionization or by preionization with a laser. Hydrogen gas may be used in either a static fill or gas jet, and has a higher ionization threshold which allows for plasma ramp shaping via laser ionization [20].
The lithium plasma is generated in a heat pipe oven where a uniform column of neutral lithium vapor is produced by heating a section of beam pipe containing solid lithium to temperatures of up to \(1000\,^{\circ}\mathrm{C}\). The uniform
\begin{table}
\begin{tabular}{l c c} Electron beam parameter & Current1 & Design \\ \hline Bunch configuration & Single & Two-bunch \\ Delivered beam energy, (GeV) & 10 & 10.1 / 9.9 \\ Norm. emittance, (mm-mrad) & \(\sim\) 20 & \(>\) 50 / 5 \\ Charge per bunch, (nC) & 2 & 1.5 / 0.5 \\ Peak current, (kA) & – & 30 / 15 \\ RMS energy spread, (\%) & \(\sim\) 1 & 0.8 / 0.3 \\ Repetition rate, (Hz) & 1 – 30 & 1 – 30 \\ IP \(\beta^{*}\), (cm) & 50 & 5 – 50 \\ \end{tabular}
\end{table}
Table 1: FACET-II beam parameters for two-bunch PWFA. The two-bunch parameters are listed as drive/trailing.
density region of this column is contained to a length of approximately 40 cm by a helium buffer gas exerting several Torr of pressure. This results in (10%-90%) lithium density ramps at the start and end of the heated section over a length of approximately 10 cm [29]. Although the vapor pressure of lithium is very sensitive to the oven temperature, we have achieved near-uniform, flat-top lithium vapor column densities of up to \(8\times 10^{16}\) cm\({}^{-3}\) with helium buffer pressures of up to 10 Torr. The low ionization energy of lithium compared to helium allows for the formation of a pure lithium plasma with number density matching the gas density of lithium when field-induced beam ionization is used. For small incoming emittance, partial beam ionization of the helium buffer gas can occur if the beam density reaches the helium ionization threshold, disrupting matching into the plasma ramps and injecting dark current into the plasma [30].
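As a cross-check of these operating parameters: in a heat-pipe oven the lithium vapor pressure in the uniform column equilibrates with the buffer gas pressure, so the vapor density follows directly from the ideal gas law. A minimal sketch (using the \(\sim\)1000\(^{\circ}\)C oven temperature quoted above) reproduces the quoted density scale:

```python
# Minimal sketch: in a heat-pipe oven the lithium vapor pressure in the
# uniform column matches the helium buffer pressure, so the vapor density
# follows from the ideal gas law n = P/(k_B*T).
K_B = 1.380649e-23      # Boltzmann constant [J/K]
TORR_TO_PA = 133.322

def vapor_density_cm3(pressure_torr: float, temperature_c: float) -> float:
    """Ideal-gas number density of the vapor column [cm^-3]."""
    p_pa = pressure_torr * TORR_TO_PA
    t_k = temperature_c + 273.15
    return p_pa / (K_B * t_k) * 1e-6         # m^-3 -> cm^-3

print(f"{vapor_density_cm3(10.0, 1000.0):.1e} cm^-3")
# -> ~7.6e16, close to the 8e16 cm^-3 quoted for a 10 Torr buffer.
```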
A parallel beamline is positioned adjacent to the lithium plasma oven to allow for beam tuning without passing the beam through the lithium oven when it is at operational temperature. The oven and this bypass beamline are installed on an actuated table to allow for remotely switching between the two. The bypass beamline also allows for the beam to be passed through a static fill of gas, such as hydrogen, that can be ionized by either the beam or a laser.
The maximum repetition rate at which a plasma accelerator may be operated is determined by the time interval required for the plasma to _reset_ between shots, through recombination and diffusion of the ionized atoms and the return to thermal equilibrium. The energy deposited into the plasma by the drive beam goes into increasing the overall thermal-kinetic energy of the system. For the lithium vapor plasma oven, this increased energy acts to lengthen the gas column as the more energetic atoms push outwards on the buffer gas at constant pressure. This moves the locations of the lithium entrance and exit plasma ramps outwards, affecting both the matching conditions and the length of the plasma. This places a limit on the repetition rate on the order of 1 Hz when operating continuously, or 10 Hz for shorter bursts. In a steady-state system, this effect may be counteracted by a feed-forward system that adjusts the oven heater power to match the energy deposited into the plasma by the beam. However, this can be difficult to achieve in a research setting where the energy deposited into the plasma is difficult to predict due to varying input parameters. Alternatively, a meter-long column of hydrogen gas flowing out of multiple overlapping supersonic nozzles may allow much higher repetition rate operation, at least in a burst mode, through the complete replacement of the gas in the plasma interaction region between shots.
### Differential Pumping System
In the prior implementation at FACET, the experimental area was separated from the rest of the linac by a 50-75 \(\mu\)m thick beryllium window to contain the gas associated with the plasma sources. However, at the increased beam intensities of FACET-II, any solid material in the beam path near the IP will be destroyed by beam heating [31]; at the ultimate FACET-II beam parameters, such windows would be destroyed within a single shot. A differential pumping system (DPS) has therefore been implemented to reduce the vacuum
Figure 2: The FACET-II experimental area beamline showing the key hardware in the plasma research program. The electron beam travels from left to right in the schematic. The final focus system is on the left, containing the X-band transverse deflecting cavity and upstream differential pumping system (US-DPS). This is followed by the Interaction Point (IP) area containing the plasma sources, laser integration optics, and diagnostics. The lithium oven and hydrogen plasma beamline may be remotely actuated to switch the beam path between plasma sources. The spectrometer beamline transmits the beam to the beam dump and contains the downstream differential pumping system (DS-DPS) and electron and betatron diagnostics.
pressure along the beamline without the need for solid windows in the beam path. The DPS has the added benefit of allowing the brightest (lowest emittance, highest charge) beams that the photocathode-gun-enabled linac can produce to be delivered to the IP.
The DPS comprises upstream and downstream systems (US-DPS and DS-DPS), located on either side of the IP area as depicted in FIG. 2. Each side contains a series of pumping stages interleaved with the final focus and spectrometer quadrupoles along the beamline, separated by conductance-limiting beam pipes, which decrease the pressure by several orders of magnitude at each stage.
On the upstream side, four stages of differential pumping decrease the beamline pressure from up to 10 Torr at the IP down to approximately 1 nTorr at the location of the X-band RF transverse deflecting cavity (XTCAV), located approximately 6 m upstream of the lithium oven. This is required to avoid RF breakdown and structure damage during operation of the XTCAV. On the downstream side of the IP, the beamline pressure must be reduced to \(<1\) mTorr to limit gas scattering in the spectrometer beamline, which can contribute to emittance growth and degradation of beam measurement quality. This is accomplished by two stages of differential pumping. Each stage of pumping is provided by a magnetically levitated turbomolecular pump mounted directly to the beamline. The turbopumps were chosen for their high pumping speeds for both helium and hydrogen, and for their use of magnetic levitation bearings to limit vibration transfer to the beamline.
Initial conductance-restricting apertures are inserted into the beamline on either side of the IP to achieve the first pressure drop from Torr-level pressures down to several mTorr in the first stage of differential pumping. Currently, this first aperture restriction is simply the previously installed beryllium windows, which have had \(\sim 200\)\(\mu\)m diameter holes drilled through in-situ by the high intensity electron beam. This has ensured that the holes are self-aligned to the nominal electron beam path without further alignment intervention. While this small orifice-type aperture is sufficient for initial commissioning, these will eventually be replaced with straw-type apertures of 5 mm diameter and 100 mm length to limit background and emittance growth from beam halo scattering on the edges of the holes. The apertures on either side of the IP are separated by a distance of approximately 4 m. The beam pipes within the quadrupole magnets provide sufficient conductance limitation between the later stages in the US-DPS and DS-DPS to reach the required reduction in beamline pressure.
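The stage-to-stage pressure drops in Table 2 can be anticipated from a simple molecular-flow estimate. The sketch below assumes the straw-type aperture geometry described above and a placeholder turbopump speed of 2000 L/s (not an as-built FACET-II value):

```python
# Illustrative molecular-flow estimate for one DPS stage, assuming the
# 5mm diameter x 100mm long straw aperture described above. The
# transmission-probability approximation and the 2000 L/s turbopump
# speed are assumptions, not as-built FACET-II values.
import math

K_B = 1.380649e-23

def tube_conductance_l_s(d_m, l_m, mass_kg, t_k=293.0):
    """Molecular-flow conductance of a short tube [L/s]."""
    v_mean = math.sqrt(8 * K_B * t_k / (math.pi * mass_kg))   # mean speed [m/s]
    area = math.pi * (d_m / 2) ** 2
    transmission = 1.0 / (1.0 + 3.0 * l_m / (4.0 * d_m))      # Dushman approximation
    return 0.25 * v_mean * area * transmission * 1e3          # m^3/s -> L/s

m_h2 = 2 * 1.6735e-27                        # H2 molecular mass [kg]
c_straw = tube_conductance_l_s(5e-3, 100e-3, m_h2)
q = c_straw * 5.0                            # throughput for a 5 Torr IP fill [Torr L/s]
p_stage1 = q / 2000.0                        # assumed pump speed of 2000 L/s
print(f"C = {c_straw:.2f} L/s, first-stage pressure ~ {p_stage1:.1e} Torr")
# -> ~1e-3 Torr, within an order of magnitude of the first-stage
#    pressures listed in Table 2.
```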
The DPS has been commissioned in various modes of operation including _static_ fill of either helium or argon up to IP pressures of 10 Torr, or hydrogen gas up to 5 Torr pressure at the IP. The system has also been used with high pressure hydrogen and helium gas-jet plasma sources operating at the IP with up to 10 Hz repetition rate. The vacuum pressures at each stage of the DPS in static fill operations are listed in Table 2, showing the pressure dropping to the mTorr pressure level on the first stages on either side of the IP, and reaching down to \(\sim\)nTorr pressures at the location of the upstream XTCAV, and \(<1\times 10^{-6}\) Torr in the spectrometer beamline.
A full demonstration of the first lithium oven operation with the differential pumping system has been performed, with the oven operating with 5 Torr helium buffer gas pressure for \(>24\) hours. The buffer gas pressure was maintained to within \(\pm 2\%\) for the duration of the test, with a further upgrade reducing the diurnal pressure variation to \(<0.5\%\).
## IV PWFA Diagnostics
The experimental area is equipped with a variety of diagnostics that are used to quantify the performance of the plasma wakefield acceleration. The primary electron diagnostic is an imaging spectrometer beamline that captures, refocuses, and vertically disperses the electron beam after the plasma. This allows for energy-resolved measurements on profile monitors located approximately 20 m downstream of the IP, just before the beam dump.
A set of multipurpose spectrometer diagnostics has been designed to meet the needs of the various user programs at FACET-II. Electron diagnostics include
\begin{table}
\begin{tabular}{l c c c c c c c c c} & & \multicolumn{5}{c}{Upstream (US) stages:} & \multicolumn{3}{c}{Downstream (DS) stages:} \\ Mode & IP & US1 & US2 & US3 & US4 & XTCAV & DS1 & DS2 & Spectrometer \\ \hline Baseline & 0 & \(6\times 10^{-10}\) & \(1\times 10^{-8}\) & \(2\times 10^{-9}\) & \(1\times 10^{-10}\) & \(3\times 10^{-9}\) & \(5\times 10^{-9}\) & \(1\times 10^{-9}\) & \(2\times 10^{-8}\) \\
5 Torr He & 5.0 & \(3\times 10^{-3}\) & \(5\times 10^{-6}\) & \(2\times 10^{-8}\) & \(1\times 10^{-9}\) & \(3\times 10^{-9}\) & \(2\times 10^{-4}\) & \(1\times 10^{-6}\) & \(5\times 10^{-7}\) \\
10 Torr He & 10.0 & \(7\times 10^{-3}\) & \(1\times 10^{-5}\) & – & \(1\times 10^{-9}\) & \(3\times 10^{-9}\) & \(5\times 10^{-4}\) & \(7\times 10^{-6}\) & \(8\times 10^{-7}\) \\
5 Torr H\({}_{2}\) & 5.0 & \(1\times 10^{-2}\) & \(2\times 10^{-5}\) & \(2\times 10^{-8}\) & \(3\times 10^{-9}\) & – & \(1\times 10^{-3}\) & \(5\times 10^{-6}\) & \(1\times 10^{-7}\) \\
5 Torr Ar & 5.0 & \(1\times 10^{-3}\) & \(5\times 10^{-6}\) & \(5\times 10^{-9}\) & \(2\times 10^{-10}\) & \(3\times 10^{-9}\) & \(6\times 10^{-5}\) & \(5\times 10^{-7}\) & \(4\times 10^{-8}\) \\ \end{tabular}
\end{table}
Table 2: Beamline pressures in Torr under several operating modes of the differential pumping system. The numbering of the stages increases with distance from the IP. The beamline pressure is reduced to the baseline vacuum pressure at the XTCAV location and to well below the spectrometer vacuum requirement in all modes of operation.
a high resolution in-vacuum profile monitor located immediately prior to the vacuum exit window, a large field of view camera that images a gadolinium-oxysulfide (GOS) scintillator screen just after the exit window, and a Cherenkov light spectrometer [32] that images the dispersed electron beam from full energy down to \(\sim 1\,\)GeV. A series of photon imaging screens on the zero-dispersion axis image the X-rays and gamma rays generated by the electron beam's betatron motion in the plasma, providing intensity, transverse, and spectral information.
FACET-II is capable of producing beams with exceptionally high current and beam density, resulting in extremely challenging conditions for intercepting diagnostics near the IP where the beam density and strong fields can destroy any solid material within a single shot [33]. While standard optical transition radiation (OTR) screens and wire scanners are used in the IP area, their use is limited by beam conditions. Non-intercepting diagnostics, which will be described in the following sections, are employed whenever possible.
An X-band transverse deflecting cavity is located before the plasma source to measure the incoming longitudinal profile of the beam, using an imaging screen in the spectrometer beamline. A probe laser system allows for non-invasive electro-optical sampling (EOS) for both longitudinal and transverse measurements of the incoming beam. The OTR and laser diagnostic cameras that view the beamline through viewports along the oven bypass line may also be repurposed for the direct visualization of the plasma emission light from hydrogen plasma at discrete locations. In addition, a series of thermocouples monitor the temperature distribution of the lithium oven to ensure the reproducibility of the lithium vapor column.
The measurements enabled by these diagnostics are summarized in the following sections.
### Emittance Measurements
Precision measurements of beam emittance both before and after the beam undergoes acceleration are essential for demonstrating the preservation of beam quality in PWFA. Standard methods of measuring emittance involve measuring the beam size either at multiple locations separated by a set of known optical elements in the multi-screen method, or at a single location in a multi-shot quadrupole scan measurement. The former method requires a significant length of beamline to implement multiple sets of transverse diagnostics and magnetic elements, while the latter requires multiple shots to be acquired over at least tens of seconds while quadrupole strengths are changed. Both methods are invasive and cannot be carried out simultaneously with beam energy measurements using the spectrometer. A single-shot measurement of the horizontal emittance can be made in a single plane by analyzing the transverse profile of the beam in a magnetic spectrometer beamline [34; 35]. This type of measurement was previously used at FACET to provide an upper limit on emittance measurements of beam-driven PWFA accelerated beams [36].
The transverse beam size at the _image plane_ in the spectrometer, \(\sigma_{x}\), can be related to the normalized emittance, \(\epsilon_{n}\), and Twiss parameters \(\beta_{0}\) and \(\alpha_{0}\) at the _object plane_, i.e. at the exit of the plasma. Using the known transport matrix, \(M_{ij}\), through the spectrometer beamline from the object plane to image plane, the horizontal spot size for particles of a particular energy (and Lorentz factor \(\gamma\)) is given by the formula
\[\sigma_{x}(E)^{2}=\frac{\epsilon_{n}}{\gamma}\left[M_{11}^{2}\beta_{0}-2M_{11}M_{12}\alpha_{0}+M_{12}^{2}\left(\frac{1+\alpha_{0}^{2}}{\beta_{0}}\right)\right] \tag{1}\]
where \(M_{11}\) is the transport matrix element relating position at the object plane to position at the image plane, and \(M_{12}\) relates angle at the object plane to position at the image plane.
In the imaging condition, the spectrometer quadrupoles are set such that the transport matrix provides point-to-point imaging between the object and image planes, with the matrix elements \(M_{12}=M_{34}=0\) for particles at the energy set-point of the spectrometer. The value of \(M_{12}\) becomes non-zero for electron energies away from the energy set-point due to the chromaticity of the spectrometer. Therefore, if the beam is at a waist at the object plane, the beam size will be re-imaged to the smallest spot size for particles at the set-point energy, where the imaging condition is perfectly met, and will increase for electrons of larger or smaller energy. With the spectrometer dipole deflecting the beam vertically, the dispersion at the image plane results in an hourglass- or butterfly-shaped transverse beam profile, allowing the beam width to be extracted as a function of energy. Operating with \(M_{34}=0\) ensures the highest possible energy resolution.
By fitting the measured horizontal beam size as a function of energy to equation (1), using the known dependence of the transport matrix elements on energy, the horizontal projected emittance and Twiss parameters at the object plane can be extracted. This analysis holds under the condition that these parameters do not vary with energy, and is therefore limited by chromatic correlations (i.e. \(x\)-\(E\)) and phase mismatch within the plasma, which cause energy-dependent variations in the emittance and Twiss parameters. The technique does not, however, require perfect knowledge of the plasma exit position, as this may be extracted via the fitted Twiss parameters, providing useful information on the true plasma exit location.
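The fitting procedure can be sketched as follows: synthetic \(\sigma_{x}(E)\) data are generated from equation (1) using a toy chromatic transport model and then fitted by least squares. The magnification and chromatic coefficient below are illustrative placeholders, not the actual spectrometer optics:

```python
# Sketch of the single-shot emittance fit. Synthetic sigma_x(E) data are
# generated from Eq. (1) with a toy linear chromatic model for M12(E),
# then emittance and Twiss parameters are recovered by least squares.
# M11 and the chromatic coefficient XI are illustrative assumptions.
import numpy as np
from scipy.optimize import curve_fit

E0, M11, XI = 10.0, 5.2, 60.0     # imaging energy [GeV], magnification, m per unit delta
MC2 = 0.511e-3                    # electron rest energy [GeV]

def sigma_x(E, eps_n, beta0, alpha0):
    """Horizontal beam size at the image plane, Eq. (1)."""
    gamma = E / MC2
    M12 = XI * (E - E0) / E0      # vanishes at the imaging energy E0
    return np.sqrt(eps_n / gamma * (M11**2 * beta0
                                    - 2 * M11 * M12 * alpha0
                                    + M12**2 * (1 + alpha0**2) / beta0))

rng = np.random.default_rng(0)
E = np.linspace(9.7, 10.3, 60)                        # GeV
truth = (40e-6, 0.30, 0.0)                            # eps_n [m rad], beta0 [m], alpha0
data = sigma_x(E, *truth) * (1 + 0.03 * rng.standard_normal(E.size))
popt, _ = curve_fit(sigma_x, E, data, p0=(10e-6, 0.5, 0.1))
print(f"eps_n = {popt[0]*1e6:.0f} um, beta0 = {popt[1]*1e2:.0f} cm, alpha0 = {popt[2]:.2f}")
```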
For the FACET-II beam parameters, the high energy of the accelerated trailing bunch and low transverse emittance result in transverse beam profiles with \(\mu\)m-size features. FIG. 3 shows the horizontal beam size at the spectrometer image plane as a function of energy for several emittances for a fixed vacuum \(\beta_{0}=5\,\)cm at the exit of the
plasma oven, showing a minimum spot size on the order of \(10\,\mu\)m. The magnification of the spectrometer when imaging the plasma exit plane at \(20\,\mathrm{GeV}\) is \(M_{11}=5.2\). By employing a high resolution in-vacuum OTR beam profile monitor with optical imaging resolution of \(4.5\,\mu\)m, the emittance can be extracted from single-shot images of matched beams at the plasma exit with several-percent measurement uncertainty for beam emittances greater than \(\simeq 10\,\mu\)m. This uncertainty estimate accounts for the imaging resolution, uncertainty in the transport matrix elements, and signal noise in the imaging system.
Alternatively, if the conditions are stable enough to allow for a multi-shot measurement, a dispersive quadrupole scan can instead be made to extract the emittance for multiple energy slices across the beam. In this measurement, the quadrupole strengths are scanned over a range to vary the transport matrix while acquiring images of the energy-dispersed transverse beam profile at each setting. Fitting the horizontal spot size at a given energy over the range of \(M_{11}\) and \(M_{12}\) using equation (1) provides the emittance and Twiss parameters at the object plane as a function of energy. This type of measurement allows for an assessment of the variation in beam parameters across the bunch, and is crucial for measuring the incoming trailing beam parameters when the energy spread is too small to provide enough statistics for the single-shot emittance measurement. It also allows for accurate emittance measurements of mismatched beams, where the assumption that the emittance and Twiss parameters are constant across the bunch does not hold. The main limitation of this measurement is that it requires stable beam conditions over the time it takes to vary the quadrupole strengths and acquire imaging data at each point, typically on the order of several minutes for a single scan.
Commissioning of the emittance diagnostics has been performed using the single-bunch configuration with emittance on the order of \(20\)-\(40\,\mu\)m and \(1\%\) energy spread. The final focus and spectrometer optics were configured to deliver the beam with \(\beta_{0}=50\,\mathrm{cm}\) at the location of a wire scanner, just prior to the nominal plasma entrance location, and re-imaged to the high resolution spectrometer beam profile monitor. FIG. 4(a) demonstrates a dispersive quadrupole scan measurement, indicating an emittance of approximately \(40\,\mu\)m measured across the core of the bunch and a waist \(\beta\) of \(30\,\mathrm{cm}\). This tool will be instrumental in diagnosing the matching conditions of the drive and trailing bunches incoming to the plasma.
In the present beam configuration, the Twiss parameters vary too significantly across the bunch to allow for a single-shot emittance measurement of the incoming single-bunch beam. In the two-bunch configuration, the smaller energy spread of the trailing bunch leads to reduced parameter variation across the bunch, enabling the single-shot measurements to be applied more effectively. Nevertheless, this measurement has already been applied to data acquired during the initial PWFA commissioning, and its findings will be discussed in Section V.
Figure 3: (a) The electron beam size that would be measured at the spectrometer image plane for several different values of emittance, with \(\beta_{0}=5\,\mathrm{cm}\) and the spectrometer set to image \(20\,\mathrm{GeV}\) electrons. (b) An estimate of the emittance measurement error for an energy-doubled trailing bunch with \(0.5\%\) energy spread, accounting for the beam transport properties and the optical imaging resolution.
Figure 4: The measured dispersive quadrupole scan of the beam in the single-bunch configuration. The plot in (a) shows the emittance extracted as a function of energy, with a value of approximately \(40\,\mu\)m across the core of the bunch, and the waist \(\beta\) with a minimum value of \(30\,\mathrm{cm}\) in this range. The beam profile becomes non-Gaussian at energies above \(10.1\,\mathrm{GeV}\), preventing us from accurately reconstructing the emittance and \(\beta\) values in this range. (b) shows the current profile of the bunch, and (c) shows a single-shot image of the beam in the middle of the quadrupole scan when \(M_{12}\) is nominally set to \(0\).
### Electron Energy Spectra
The beam exiting the plasma ranges in energy from less than 1 GeV in the energy-depleted drive bunch to more than 20 GeV in an energy-doubled trailing bunch. Measurement of the full energy range of the drive bunch is important to understand the driver energy depletion and drive-to-wake transfer efficiency. On the other hand, high resolution measurements of the trailing bunch are required to demonstrate preservation of the energy spread at the level of \(<1\%\). The electron spectrometer beamline therefore uses different screens to meet these differing needs.
The trailing bunch will be measured by the high resolution transverse beam profile monitor used for the emittance measurements. The energy resolution is determined by adding in quadrature the imaging resolution, \(\sigma_{im}\), and the transverse beam size set by the vertical emittance, \(\epsilon_{n}\), and \(\beta\)-function: \(\sigma_{E,res}/E=\sqrt{\sigma_{im}^{2}+\beta_{y}\epsilon_{n}\gamma^{-1}}/\eta\). The vertical dispersion, \(\eta\), at the location of the screen is nominally 60 mm, and the imaging resolution was measured to be 4.5 \(\mu\)m. Here \(\gamma\) again refers to the Lorentz factor at the measurement energy \(E\).
In the nominal PWFA configuration, the transverse beam size at the imaging plane, \(\sqrt{\beta_{y}\epsilon_{n}\gamma^{-1}}\), will be \(<10\)\(\mu\)m for an emittance-preserved beam, leading to an overall energy resolution of \(<0.02\%\) (\(\sim 2\) MeV). For the initial run parameters, with normalized emittance on the order of 40 \(\mu\)m and an IP beta function of 50 cm, the energy resolution at 10 GeV is dominated by the transverse beam size of \(\sim\)50 \(\mu\)m, leading to a resolution of 0.1% (10 MeV). The energy profile of the current single bunch beam without plasma interaction was shown in FIG. 4(b and c), with a measured FWHM energy spread of 3%.
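These resolution figures follow directly from the quadrature sum above. A short sketch confirms them, with the \(\beta_{y}\) values chosen to reproduce the quoted spot sizes (the screen-plane optics are not specified here, so these are assumptions):

```python
# Check of the quoted energy resolutions from the quadrature sum
# sigma_E/E = sqrt(sigma_im^2 + beta_y*eps_n/gamma) / eta, with
# eta = 60 mm and sigma_im = 4.5 um from the text. The beta_y values
# are assumptions chosen to reproduce the quoted spot sizes.
import math

def energy_resolution(sigma_im_m, eps_n_m, beta_y_m, energy_gev, eta_m=60e-3):
    gamma = energy_gev / 0.511e-3
    sigma_beam = math.sqrt(beta_y_m * eps_n_m / gamma)        # beam size [m]
    return math.sqrt(sigma_im_m**2 + sigma_beam**2) / eta_m   # relative resolution

# Nominal PWFA case: eps_n = 5 um preserved, ~10 um spot:
print(f"{energy_resolution(4.5e-6, 5e-6, 0.4, 10):.1e}")   # ~1.8e-4, i.e. <0.02%
# Initial run: eps_n ~ 40 um, ~50 um spot:
print(f"{energy_resolution(4.5e-6, 40e-6, 1.25, 10):.1e}") # ~8.5e-4, i.e. ~0.1%
```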
To measure large electron energy ranges, either of two large field of view profile monitors is used. The first employs the GOS-based scintillator DRZ-FINE [37], which stretches from above the zero-dispersion axis down to a dispersion level of 120 mm. At the nominal setting of the spectrometer dipole, the field of view extends down to an energy of approximately 5 GeV. The spectrometer dipole strength may be lowered to 25% of the nominal value, decreasing the lower extent visible on this diagnostic to \(\sim 1\) GeV. The relative energy resolution of this diagnostic is dominated by the pixel size of the imaging system, resulting in an energy resolution of 0.15%. A second screen located several meters upstream can be used to extend the low energy portion of the spectrum to \(\sim 0.25\) GeV. Charge below 250 MeV is undetectable by direct observation using the magnetic spectrometer, as these low energy electrons are deflected into the wall of the spectrometer dipole chamber before they can be measured. The dipole strength may also be increased to allow the high energy portion of the electron spectrum to be imaged with higher energy resolution as an energy gain diagnostic.
The main limitations of the scintillation screen spectrometer diagnostic are saturation of the scintillating centers and damage to the screen material, leading to permanent loss of light output at locations of high beam intensity. To overcome these challenges, the second large field of view diagnostic is a Cherenkov light-based electron spectrometer, described in detail in [32]. This transverse beam profile monitor images the Cherenkov light emitted by beam electrons as they pass through a small air gap before the beam dump. The Cherenkov light is reflected out of the beam path by a beam-intersecting polished silicon wafer. Since Cherenkov light is emitted with high linearity with charge density, and the silicon reflecting surface has a relatively high damage threshold, this diagnostic provides higher dynamic range and robustness than the scintillator screen. The spatial resolution at 10 GeV is 250 \(\mu\)m, limited mainly by multiple scattering of the beam as it passes through the 5 mm aluminum vacuum exit window. This translates to an energy resolution of 0.4% at 10 GeV.
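The quoted scattering limit is easy to verify with the standard Highland multiple-scattering formula; the \(\sim\)0.9 m propagation distance from the exit window to the silicon reflector assumed below is illustrative, not a quoted FACET-II dimension:

```python
# Check of the scattering-limited Cherenkov spectrometer resolution:
# Highland/PDG multiple-scattering angle for 10 GeV electrons through
# the 5mm aluminum exit window, projected over an assumed ~0.9 m drift
# to the silicon reflector (the drift length is an assumption).
import math

X0_AL = 0.08897                               # radiation length of Al [m]

def highland_angle_rad(p_mev: float, x_m: float, x0_m: float) -> float:
    t = x_m / x0_m
    return 13.6 / p_mev * math.sqrt(t) * (1 + 0.038 * math.log(t))

theta0 = highland_angle_rad(10e3, 5e-3, X0_AL)
drift_m = 0.9                                 # assumed propagation distance
print(f"theta0 = {theta0*1e3:.2f} mrad, blur ~ {theta0*drift_m*1e6:.0f} um")
# -> ~0.29 mrad and ~260 um, consistent with the quoted 250 um at 10 GeV.
```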
Additional non-invasive measurements of the energy spectrum of the incoming beam are performed using a synchrotron light diagnostic (SYAG) prior to the IP. This device is located within the final bunch compressor, at a location with large horizontal dispersion, and comprises a short, three-magnet vertical chicane that generates synchrotron photons from the beam electrons, which are intercepted by a cerium-doped yttrium aluminum garnet (YAG) scintillator screen for detection. Due to the horizontal dispersion of the beam at this location, the horizontal profile of the X-rays represents the energy distribution of the electron beam.
During the initial phases of beam development, the presence of coherent OTR (COTR) detected on OTR screens near the IP and spectrometer diagnostics indicates that the electron bunches can contain high current structure on top of the bunch profile visible on the present diagnostics. The source of this high current structure is possibly unmitigated microbunching occurring early in the linac. While the microbunching itself will ultimately be suppressed by the use of the laser heater, the SYAG diagnostic can provide some non-intercepting information about the presence of this longitudinal structure due to the energy chirp on the beam.
FIG. 5 shows the profile measured with the SYAG diagnostic for a single shot that resulted in significant COTR emission, and the corresponding energy spectrum measured with the in-vacuum electron spectrometer using two cameras viewing the same YAG screen. One camera views the back surface of the YAG crystal, oriented at 45\({}^{\circ}\) to the beam. In this orientation, the camera collects only the scintillation light, and not the forward-emitted COTR. While the resolution of this camera is limited to tens of \(\mu\)m due to the thickness of the YAG crystal, the total intensity measured is consistent with the
bunch charge, and the profile matches the SYAG profile, within the resolution limits. A second higher resolution camera views the front surface of the same YAG screen. In this orientation, the camera collects both scintillation light and the backwards OTR emission. In this shot, this camera sees a large non-linear intensity spike at the head of the bunch due to the emission of COTR. While the SYAG diagnostic does not have the resolution to fully resolve the fine structure that leads to coherence in OTR in the optical range, it can be used to monitor for shot-to-shot longitudinal variations that often accompany the microbunching instability.
### Incoming Longitudinal Phase Space
The X-band transverse deflecting cavity [38] (XTCAV) allows for longitudinal profile measurements with resolution down to \(1\,\mu\)m (\(\sim 3\,\mathrm{fs}\)). In the reconfiguration of the experimental area for FACET-II, this structure has been relocated to within the final focus, several meters before the interaction point, and rotated to kick in the horizontal plane. When used in combination with the magnetic spectrometer with vertical dispersion, the XTCAV allows for single-shot measurements of the longitudinal phase space. FIG. 6 shows a simulated XTCAV measurement and the extracted current and momentum profiles for the present XTCAV implementation and the two-bunch beam configuration. The energy difference between the drive and trailing bunches impacts the longitudinal resolution due to the differences in chromatic focusing. However, improved resolution can be achieved for either the trailing or drive bunch individually by setting the spectrometer optics to focus at either bunch energy independently. With the presently achieved emittance within the experimental area, the longitudinal resolution is limited to approximately \(10\,\mu\)m, making it unable to resolve the sub-\(\mu\)m longitudinal modulations from microbunching on the current profile.
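The resolution scaling can be sketched from the streak calibration: the cavity maps longitudinal position \(z\) to a horizontal screen offset \(x=Sz\), with \(S=eV_{0}kR_{12}/(pc)\) in the small-angle limit, and the smallest resolvable \(z\) is the unstreaked spot size divided by \(S\). In the sketch below, only the X-band frequency and beam energy are taken from the text; the deflecting voltage, \(R_{12}\), and spot size are placeholders:

```python
# Sketch of the XTCAV streak calibration: x = S*z at the screen, with
# S = e*V0*k*R12/(p*c) in the small-angle limit. V0, R12, and the
# unstreaked spot size are placeholders; the X-band frequency and the
# 10 GeV beam energy are taken from the text.
import math

C = 299792458.0
k = 2 * math.pi * 11.424e9 / C        # X-band RF wavenumber [rad/m]

def streak_parameter(v0_volts, r12_m, pc_ev):
    return v0_volts * k * r12_m / pc_ev   # dimensionless dx/dz

S = streak_parameter(20e6, 10.0, 10e9)    # assumed 20 MV and R12 = 10 m
sigma_x = 25e-6                           # assumed unstreaked spot size [m]
sigma_z = sigma_x / S
print(f"S = {S:.1f}, sigma_z ~ {sigma_z*1e6:.1f} um (~{sigma_z/C*1e15:.0f} fs)")
```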
As use of the XTCAV is invasive to the beam incoming to the plasma, it may not be used simultaneously with plasma studies. Non-invasive tools are therefore required to provide single-shot longitudinal information on the incoming beam, such as the separation of the drive and trailing bunches, that may be calibrated with the XTCAV diagnostic. A machine learning based virtual diagnostic has been developed to predict the shot-to-shot incoming longitudinal phase space and current profiles based on only non-invasive measurements using training
Figure 5: The intensity profile measured on the SYAG screen in (a), and the energy spectrum measured on the in-vacuum electron spectrometer in (b) using two cameras simultaneously viewing a transparent YAG crystal oriented at \(45^{\circ}\) to the beam direction. The camera viewing the back side of the YAG sees only scintillation light, and provides a consistent measurement of total charge. Meanwhile, the camera viewing at the angle of OTR emission images a strong coherent OTR emission at the head of the bunch for the same shot.
Figure 6: Simulated measurement of the longitudinal phase space measured with the XTCAV and the high resolution beam profile monitor in the magnetic spectrometer. (a) The transverse profile imaged on the spectrometer high resolution beam profile monitor. The horizontal position axis provides the longitudinal profile due to the horizontal streak applied by the XTCAV and the vertical position axis provides the energy due to the magnetic spectrometer dipole. In (b) and (c), the beam energy and current profiles are shown as the shaded areas and the measured profiles from the calibrated XTCAV image are represented by the solid lines. With the beamline optics tuned to the trailing bunch energy, the drive bunch reconstruction suffers slightly due to the energy difference.
data sets acquired with the XTCAV [39]. This diagnostic has been demonstrated at LCLS, and work is underway to implement this technique at FACET-II.
The SYAG beam energy spectrometer and EOS provide direct measurements of the incoming energy and longitudinal profiles that are non-invasive to plasma studies. In addition, the EOS system can be configured with a pair of crystals on either side of the beam path to provide time-resolved transverse beam positions on a shot-by-shot basis [40]. This system has an ultimate timing resolution of 10 fs and a transverse resolution of \(<5\,\mu\)m, and is capable of resolving the properties of the drive and trailing bunches independently.
Additional non-intercepting measurements of the incoming bunch length are also acquired on a shot-to-shot basis from a pyroelectric detector that measures the total intensity of coherent terahertz diffraction radiation after the final bunch compression. The intensity of this radiation can be correlated with relative bunch lengths within the range of 10 to 100 \(\mu\)m [41].
### Betatron Radiation
As the drive and trailing bunches transit the plasma cell, electrons in both bunches experience the large transverse forces present within the plasma bubble. This force induces betatron oscillations in the trajectories of the beam electrons, leading to the emission of betatron radiation in the direction of propagation, similar to the synchrotron radiation generated in a high-\(K\) wiggler [42]. This betatron radiation can be used to diagnose the transverse dynamics of the drive and trailing bunches within the plasma, providing information on matching into the plasma and on the transverse hosing instability [43]. For the FACET-II beam parameters, the betatron radiation is emitted with several milliradian divergence and with a photon energy spectrum in the range of keV up to \(\sim 1\,\mathrm{MeV}\).
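This spectral range can be estimated by treating the ion column as a wiggler, with \(\omega_{\beta}=\omega_{p}/\sqrt{2\gamma}\) and critical energy \(E_{c}=(3/4)\hbar\gamma^{2}\omega_{p}^{2}r_{\beta}/c\) (see, e.g., Corde et al., Rev. Mod. Phys. 85, 1 (2013)). A sketch using the 2 Torr hydrogen plasma density of Section V and assumed oscillation amplitudes:

```python
# Order-of-magnitude estimate of the betatron critical photon energy,
# E_crit = (3/4)*hbar*gamma^2*omega_p^2*r_beta/c (ion-column wiggler).
# The plasma density is that of the 2 Torr H2 fill in Sec. V; the
# oscillation amplitudes r_beta are assumptions.
HBAR, C, QE = 1.0545718e-34, 299792458.0, 1.602176634e-19

def betatron_ecrit_ev(gamma, n_cm3, r_beta_m):
    omega_p_sq = 3.18e9 * n_cm3            # omega_p^2 [(rad/s)^2], n in cm^-3
    return 0.75 * HBAR * gamma**2 * omega_p_sq * r_beta_m / C / QE

gamma = 10e9 / 0.511e6                     # 10 GeV electrons
for r_um in (1.0, 10.0):
    e_c = betatron_ecrit_ev(gamma, 6e16, r_um * 1e-6)
    print(f"r_beta = {r_um:.0f} um -> E_crit ~ {e_c/1e3:.0f} keV")
# -> ~120 keV to ~1.2 MeV, matching the keV to ~1 MeV range quoted above.
```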
A set of scintillation-based detectors are employed in air in the spectrometer beamline to retrieve angular and spectral information of X-ray and \(\gamma\)-ray energy photons produced at the IP. Photons with energy below 10-20 keV are stopped by the 5 mm aluminum vacuum exit window, but higher energy photons are transmitted through for detection. Their transverse profile is measured by a CCD camera imaging either a uniform scintillator screen for high resolution measurements, or a CsI pixellated array with \(0.5\times 0.5\,\mathrm{mm}\) pixel size [44] for increased sensitivity.
A second scintillation screen provides spectral information by recording the intensity immediately behind a set of filter materials arranged in a pie shape around the photon axis. Two of these filter materials act as a pair of Ross filters [45], with material and thickness chosen for sensitivity to photons of energy \(<100\,\mathrm{keV}\). The remaining 10 filters consist of various thicknesses of copper up to 8 mm and tungsten up to 3 mm, plus one segment with no filter material for reference. The scintillator response behind each filter is impacted by the photon-energy dependent conversion and transmission rates through each material. By measuring the intensity behind each filter and comparing to responses simulated with GEANT4 [46], information about the photon energy distribution, such as the critical energy of a synchrotron-like spectrum, can be extracted. A separate publication summarizes these photon diagnostics and the first results acquired for FACET-II [47].
Additionally, a Compton spectrometer is being developed to provide energy-angular double differential measurements of the betatron radiation in the range of 180 keV to 28 MeV [48]. This device will perform the measurement in vacuum, \(\sim 2\,\mathrm{m}\) prior to the spectrometer diagnostics table.
## V Initial Beam Plasma Interaction Studies
Two decades of beam-plasma interaction experiments have clearly indicated that, for a 10 GeV-class electron or positron bunch carrying nC-level charge, the most sensitive diagnostic of the beam brightness is the plasma itself [49]. A beam with sufficient field intensity can ionize a gas and generate wakefields in the resulting plasma through interaction with plasma particles. These interactions change the conditions of both the beam and the plasma directly, independent of external diagnostics, which have their own limitations such as finite resolution and sensitivity. Therefore, the beam-plasma interaction can provide a good diagnostic of the incoming beam conditions through changes to the beam spot size, the energy loss or gain of different slices of the beam, or the radiation emitted by the charged particles as they traverse a column of gas.
Beam delivery to users at FACET-II started in 2022 for user-assisted commissioning of beam delivery and experimental systems. Delivery to users was interleaved with beam commissioning to allow users to exercise equipment, develop data acquisition techniques, and gain first insights into their experimental programs. During this phase, the beam was delivered in single bunch mode with a nominal bunch charge of 1.6 nC, transverse spot sizes down to \(\sim\)20\(\times\)20 \(\mu\)m\({}^{2}\), and bunch lengths of \(\sim\)20 \(\mu\)m.
As part of the initial PWFA studies, the single-bunch beam was passed through several meters of gas at the IP to investigate beam ionization and for commissioning of the electron spectrometer and betatron radiation diagnostics. The differential pumping system was used to maintain a static gas pressure of either hydrogen gas up to 2 Torr or helium gas up to 5 Torr in the 4 m of beamline between the upstream and downstream beryllium windows. The beam was focused to a vacuum waist of \(\beta=50\,\mathrm{cm}\) in both x and y planes at a location approximately 0.5 m into the gas column. No laser preionization was employed in the studies presented here.
Despite the measured beam parameters suggesting insufficient beam density for field ionization according to ADK theory [25], we have observed that the present beam conditions are capable of driving a strong wake in both hydrogen and helium gases. Evidently, beam diagnostic techniques such as the XTCAV cannot presently resolve the full longitudinal characteristics of the beam. The field ionization observed in the experiment can be explained by the beam's temporal structure exhibiting one or more strong, short peaks on top of a low current background [50].
The existence of a plasma interaction however is indisputably confirmed by concurrent observations of the plasma emission light measured by cameras viewing the IP, the deceleration of electrons imaged by the electron spectrometer to energies below \(2\,\mathrm{GeV}\), and the substantial increase in X-ray photons from betatron oscillations within the plasma channel. With the present beam conditions, the wake intensity was observed to vary substantially with the incoming beam parameters. FIG. 7 shows the energy spectra as observed using the large field of view energy spectrometer and the analysis of a series of 200 sequential shots recorded at \(10\,\mathrm{Hz}\) of the beam passing through the \(4\,\mathrm{m}\) hydrogen gas column at \(2\,\mathrm{Torr}\) pressure (plasma density \(\sim 6\times 10^{16}\,\mathrm{cm}^{-3}\)), sorted by the estimated energy transferred to the wake.
The energy loss of electrons due to the interaction with the wake is shown to vary significantly due to the jitter on the incoming beam parameters. This varies from no significant plasma interaction in several shots, to the deceleration of electrons to below \(5\,\mathrm{GeV}\), the lower limit of the field of view of the diagnostic at this spectrometer setting.
The single-shot electron energy spectrum enables a distinction between two categories of electrons: those within the beam that remain virtually unaffected by the plasma interaction, residing in the peak centered at \(10\,\mathrm{GeV}\), and those that undergo deceleration. This categorization is illustrated in FIG. 7(b), indicating that at this pressure over 50% of the charge can be actively participating in the plasma interaction. In shots experiencing the most substantial deceleration, electrons with energies below approximately \(5\,\mathrm{GeV}\) fall below the lower boundary of the field of view, resulting in a loss of the total measured charge for these instances. Another factor contributing to the missing charge is the presence of electrons in less dense regions of the spectra that are not visible above the image background, comprising up to 5% of the total charge in shots where the spectrum does not extend off the imaging screen.
The energy deposited into the plasma wake is estimated as the difference between the initial energy content of the incoming beam, approximately \(16\,\mathrm{J}\) at \(10\,\mathrm{GeV}\), and the total energy of the electrons after they exit the plasma. However, for shots experiencing the greatest energy loss, some electrons fall below the camera's field of view and cannot be accounted for directly in this estimate. We can therefore establish only a lower bound for the energy transferred to the wake by assuming the missing charge on the spectrometer screen possesses a maximum energy equivalent to the lower cutoff of the field of view, which is \(4.9\,\mathrm{GeV}\). By factoring in the missing charge using this approach, we can establish a conservative estimate that at least \(5\,\mathrm{J}\) of energy is transferred to the wake for the shots that experience the largest energy loss. This corresponds to a minimum effective beam-to-wake
Figure 7: (a) A “waterfall” plot of the electron energy spectra after traversing the plasma. The series of 200 sequential shots acquired at a pressure of \(2\,\mathrm{Torr}\) are sorted by the estimated energy transferred to the plasma wake. (b) Charge breakdown between decelerated and unaffected portions of the spectra. For later shots in the series, the electron spectra extend beyond the lower limit of the image, as reflected by the loss of total charge measured. The total incoming charge is measured by a toroidal charge monitor just upstream of the IP. (c) The energy estimated to be transferred from the electron beam to the wake, determined from the measured energy spectra. The solid line accounts for only those electrons that are visible on the screen, while the shaded line estimates a lower bound, accounting for missing charge. Also overlaid on the plot is the intensity of the betatron radiation measured for each shot, showing strong correlation with wake energy for shots where the total charge is visible.
transfer efficiency of approximately 50% from the 1 nC of charge that participates in the interaction.
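This bookkeeping is straightforward to reproduce: since 1 nC \(\times\) 1 GeV = 1 J numerically, the 1.6 nC incoming bunch at 10 GeV carries the quoted 16 J, and a lower bound follows by assigning any off-screen charge the 4.9 GeV field-of-view cutoff. A sketch with a synthetic spectrum (illustrative numbers only):

```python
# Sketch of the wake-energy bookkeeping: incoming energy minus the
# energy remaining in the measured spectrum, with off-screen charge
# assigned the 4.9 GeV field-of-view cutoff to form a lower bound.
# The example spectrum is synthetic, for illustration only.
import numpy as np

E_IN_GEV, Q_IN_NC, E_CUT_GEV = 10.0, 1.6, 4.9   # values from the text

def wake_energy_lower_bound_j(energies_gev, charges_nc):
    """Lower bound on energy transferred to the wake [J].
    Note that 1 nC x 1 GeV = 1 J, so nC*GeV products are already joules."""
    q_missing = max(Q_IN_NC - charges_nc.sum(), 0.0)        # off-screen charge [nC]
    e_out = np.sum(energies_gev * charges_nc) + q_missing * E_CUT_GEV
    return Q_IN_NC * E_IN_GEV - e_out

# Synthetic binned spectrum: 0.6 nC unaffected at 10 GeV, 0.6 nC
# decelerated to ~7 GeV on average, 0.4 nC lost below the screen:
E = np.array([10.0, 7.0])
Q = np.array([0.6, 0.6])
print(f"lower bound ~ {wake_energy_lower_bound_j(E, Q):.1f} J")   # ~3.8 J
```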
The intensity of the betatron radiation is superimposed on the same plot as the estimated wake energy, revealing a strong correlation with the energy transferred to the wake in shots where the majority of the charge is visible on the screen. While there is no conclusive argument that this correlation should always hold, it extends to follow the estimated lower limit for shots that experience significant energy loss. This trend suggests that the energy transferred to the wake is indeed higher than the lower bound estimated from visible charge alone.
Evidence of acceleration by PWFA was measured by imaging the electron spectra at energies above 10 GeV using the large field of view electron spectrometer. FIG. 8 shows the electron spectrum acquired with the spectrometer quadrupoles set to image 12.5 GeV electrons from the end of the gas column. This spectrum shows decelerated electrons with energy \(<10\) GeV, indicating strong wake generation, and charge extending beyond 13 GeV in this shot. We infer that this accelerated charge originated from the small fraction of electrons far within the tail of the single bunch that experiences the accelerating phase of the plasma wakefields.
Performing a single-shot emittance measurement on the charge around 12.5 GeV, as described in the prior section, provides a normalized emittance value of approximately 1500 \(\mu\)m, and places the exit from the plasma precisely at the location of the end of the gas column, with a waist \(\beta\) of 17 cm. This measurement is not resolution limited, as the imaging resolution of this profile monitor is \(<100\,\mu\)m, far below the minimum spot size. The emittance measured here is extremely large, in part due to the presumed large emittance of the electrons far within the tail of the bunch, and also because no effort was made to match these electrons to the plasma. This analysis nevertheless serves as a first implementation of this single-shot emittance measurement in the new FACET-II beamline, and provides useful information about the length of the plasma. The overall length of the plasma can be determined to extend at least 3 m, from the first IP camera that detects plasma light near the incoming beam waist to the location of the measured waist position at the plasma exit.
## VI Conclusion
We have described the experimental setup, accelerator parameters, and beam diagnostics required to demonstrate a single stage plasma wakefield accelerator at FACET-II, which will approach the parameters required for linear colliders and high brightness light sources. In our preliminary investigations of the beam-plasma interaction with single bunches, we have observed the formation of beam-ionized plasmas extending for several meters in length within hydrogen gas. Measurements of the resulting electron spectrum, plasma emission light, and betatron radiation indicate significant energy transfer from the drive beam to the wake with the present beam conditions.
Ongoing efforts focus on achieving efficient and stable energy transfer from the drive beam to the wake within both lithium and hydrogen plasmas as the electron beam conditions continue to improve with further beam development. Once the two-bunch configuration is operational, the ultimate goal is to demonstrate multi-GeV energy doubling of the trailing bunch, preserving both emittance and energy spread as required for accelerator applications.
###### Acknowledgements.
FACET-II is supported in part by the U.S. Department of Energy under contract number DE-AC02-76SF00515. This work is supported by the U.S. Department of Energy grant DE-SC0010064:0011 and U.S. National Science Foundation grant 2108970 at UCLA. E. Gerstmayr was supported by the U.S. Department of Energy, Office of Science,
Figure 8: (a) The electron energy spectrum with the spectrometer set to reimage at an energy of 12.5 GeV. Energy-depleted electrons are visible at energies below 10 GeV, while some tens of pC of charge are accelerated up to \(\sim 13.5\) GeV in this shot. An emittance analysis was performed for the charge indicated by the box, with the charge distribution shown in (b). (c) The beam width as a function of energy, with the emittance fit function overlaid, which provides a normalized emittance of approximately 1500 \(\mu\)m. The Twiss parameters determined from the fit indicate that the beam waist (and hence the exit from the plasma) was located at the position of the beryllium window.
Fusion Energy Sciences under Award DE-SC0020076. A. Knetsch and P. San Miguel Claveria were supported by the France-Stanford Center for Interdisciplinary Studies for their travels to SLAC National Accelerator Laboratory. H. Ekerfelt was supported by the Knut and Alice Wallenberg Foundation (KAW 2018.0450).
|
2306.13698 | 4K$\times$4K CCD Imager for the 3.6m DOT: Recent up-gradations and
results | The 4K$\times$4K CCD Imager is the first light instrument for the 3.6m
Devasthal Optical Telescope and is producing broad-band imaging observations of
many Galactic and extra-galactic sources since 2015-2016. Capabilities of the
CCD Imager are demonstrated recently through several publications using the
well-calibrated multi-band deep photometric results as expected from other
similar facilities globally. In this article, we summarize some of the recent
up-gradations made to improve the Imager, i.e., mounting the new filter wheel
casing, replacing stray light baffles and discussing the fringe pattern
corrections in redder filters. Some of the new science initiatives like
galaxy-embedded faint point sources including WR stars and the observations of
low surface brightness galaxy clusters are also discussed. | S. B. Pandey, Amit Kumar, B. K. Reddy, S. Yadav, N. Nanjappa, Amar Aryan, Rahul Gupta, Neelam Panwar, R. K. S. Yadav | 2023-06-23T14:10:55Z | http://arxiv.org/abs/2306.13698v1 | # 4K\(\times\)4K CCD Imager for the 3.6m DOT: Recent up-gradations and results
###### Abstract
The 4K\(\times\)4K CCD Imager is the first light instrument for the 3.6m Devasthal Optical Telescope and has been producing broad-band imaging observations of many Galactic and extra-galactic sources since 2015-2016. Capabilities of the CCD Imager have been demonstrated recently through several publications using well-calibrated, multi-band, deep photometric results, as expected from other similar facilities globally. In this article, we summarize some of the recent up-gradations made to improve the Imager, i.e., mounting the new filter wheel casing, replacing stray light baffles, and discussing the fringe pattern corrections in redder filters. Some of the new science initiatives, like galaxy-embedded faint point sources including WR stars and observations of low surface brightness galaxy clusters, are also discussed.
Instrumentation: CCD photometry; Methods: Optical observations; Data analysis.
## 1 Introduction
The location of the modern, active-optics-based 3.6m Devasthal Optical Telescope (DOT) is well-suited for time-critical optical-near infrared (NIR) observations (Pandey, 2016; Kumar _et al._, 2018). Devasthal, the observing station of the Aryabhatta Research Institute of Observational Sciences (ARIES), Nainital, at an altitude of \(\sim 2450\)m (longitude 79.7E, latitude 29.4N) (Sagar _et al._, 2000), offers advantages such as dark skies and sub-arcsec seeing conditions, and has been established as a world-class astronomical site hosting other observational facilities, including the 1.3m Devasthal Fast Optical Telescope and the recently commissioned 4m International Liquid Mirror Telescope. Among a suite of many other back-end instruments, the 4K\(\times\)4K CCD Imager is the first-light axial-port imaging instrument for the 3.6m DOT and was made functional in late 2015, with its very first image, of the Crab Nebula, observed on 11\({}^{th}\) December 2015, as reported by Pandey (2016). The 4K\(\times\)4K CCD Imager was designed and developed in-house to be mounted at the main/axial port of the 3.6m telescope as one of the first-light instruments. The beam of the telescope is f/9, used without any focal reducer, and has a plate-scale of 6.4\({}^{\prime\prime}\)/mm. Details of the CCD characterization and photometric calibration, along with the site characterization (the calculation of extinction coefficients and sky brightness) using the instrument mounted at the axial port of the 3.6m DOT, have been presented by Pandey _et al._ (2018); Kumar _et al._ (2022b). In recent times, the 4K\(\times\)4K CCD Imager has been utilized as a general multi-band deep imaging instrument to study various kinds of Galactic and extra-galactic astronomical sources, such as star clusters (Lata _et al._, 2019; Panwar _et al._, 2022; Sagar _et al._, 2022), cosmic energetic transients (Pandey _et al._, 2019; Dastidar _et al._, 2019; Kumar _et al._, 2020, 2021; Aryan _et al._, 2021b,a; Gupta _et al._, 2021a,c,b; Pandey _et al._, 2021; Gupta _et al._, 2022b,a; Kumar _et al._, 2022c; Aryan _et al._, 2022a,b; Kumar _et al._, 2022a), active galactic nuclei (Ojha, 2022), and many others.
In this article, we present recent updates on the efforts made towards upgrading the existing CCD Imager to address some of the teething problems encountered over several observing seasons.
Some of the important ones discussed in this study include improvements to the old baffle design, which suffered from partial vignetting; up-gradations to the motorized filter wheels to improve filter repositioning; and attempts to remove possible fringing issues in broad-band near-infrared filters. The results reported in this article are based on experience gathered over several observing seasons. After incorporating the specified improvements, the performance of the first-light instrument has improved considerably, and the exercise has also provided lessons for other back-end instruments of the 3.6m DOT. We have organized this article as follows: In section 2, we present the details of the vignetting and ghost analysis along with the baffle design of the 4K\(\times\)4K CCD Imager, followed by the description of the mechanical design and analysis in section 3. In section 4, we discuss the up-gradations of the motorized filter wheel. In section 5, we present the method for the fringe correction of red-end filters. In section 6, we mention some of the recent science cases studied using the 4K\(\times\)4K CCD Imager and, finally, a summary of the paper is given in section 7.
## 2 Vignetting/Ghost analysis of the 4K\(\times\)4K CCD Imager
The 4K\(\times\)4K CCD Imager was mounted on the axial port of the 3.6m DOT (shown in the left panel of Figure 1). It has a fully assembled Imager set-up along with two automated filter wheels holding Bessel \(UBVRI\) and SDSS \(ugriz\) filters. Each filter is assembled in its respective position with the help of a Teflon cover. The actual size of each filter is 90\(\times\)90mm, and the clear aperture is 85\(\times\)85mm; the remaining 5mm is used for mounting, with a Teflon cover acting as a collar to hold the filter. The shutter was integrated with the CCD flange, and this assembly was mounted with the Imager. The total assembly was mounted to the dummy flange of the telescope with the help of three arm structures. There is a gap of around 300mm between the telescope flange and the Imager (i.e., the filter wheel entrance). Initially, a conical baffle was kept between the telescope flange and the Imager, matching the hole size. Images were taken using this configuration, and it was observed that there was a shadow area at the four corners of the CCD, as shown in the right panel of Figure 1.
Images were taken in various configurations using the Imager to investigate the shadow/ghost feature. The width of the shadow region was smaller when there were no filters, and increased in the Filter-Clear combination. Dark frames were taken by closing the dome slit and switching off the lights inside the dome; the same feature was observed, with a smaller shadow width. The feature is visible in on-sky and flat images as well, and was observed even when the entire Imager assembly was shifted downwards by 7.5mm. The feature was visible in almost all images.
### ZEMAX analysis to verify the vignetting and ghost factors
To analyze possible issues like vignetting and ghost images, a ZEMAX file was made with the as-built specifications of the telescope, with the image plane kept at its best focus, for a field of 6\({}^{\prime}\).52\(\times\)6\({}^{\prime}\).52 covering the full CCD chip size of 61.44\(\times\)61.44mm. The shaded model of the 3.6m telescope with the Imager optics is shown in the top panel of Figure 2. The filter wheel and CCD window are kept in the optical path of the telescope (see the bottom panel of Figure 2 for the optical layout of the Imager). The ghost effect on the CCD chip is analyzed due to the filter and CCD window. A filter (fused silica) of 5mm thickness and a CCD window (fused silica) of 4mm thickness are considered for this analysis. The space between the filter front surface and the CCD image plane is 123mm, and between the CCD window front surface and the image plane is 15.3mm.
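These numbers are internally consistent; a quick sketch (taking the pixel pitch as 61.44mm/4096 = 15\(\mu\)m) recovers the quoted plate-scale and field of view from the f/9 beam on the 3.6m aperture:

```python
# Consistency check of the Imager geometry: the f/9 beam on the 3.6m
# aperture gives the quoted plate-scale, and the 61.44mm chip (4096
# pixels, implying a 15 micron pitch) then covers the simulated field.
ARCSEC_PER_RAD = 206265.0

focal_mm = 3.6 * 9.0 * 1e3                   # focal length = 32400mm
plate_scale = ARCSEC_PER_RAD / focal_mm      # arcsec per mm
chip_mm = 4096 * 0.015                       # 61.44mm
field_arcmin = plate_scale * chip_mm / 60.0
print(f"plate scale = {plate_scale:.2f} arcsec/mm, field = {field_arcmin:.2f} arcmin")
# -> 6.37 arcsec/mm (quoted as 6.4) and 6.52 arcmin, i.e. 6'.52 x 6'.52
```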
The vignetting is verified in ZEMAX sequential mode as per the original dimensions. The maximum beam size on the filter with the theoretical distances is around 75\(\times\)75mm. The 3D layout of the Imager's beam path and the optical layout of the full beam size on the filter surface are shown in the top and bottom panels of Figure 3, respectively. No vignetting due to the filter size is observed for the given field of view, assuming that there is no significant mismatch in the mechanical distances.
The ghost analysis is carried out to identify unwanted ghost images arising from the internal optical components. The ghost images can be generated by two major groups of reflections. One is due to the
reflections inside the optical elements like the filter and CCD window, including the CCD chip plane. Another is due to the reflections between the CCD window/image plane and the filter elements. A schematic diagram presenting internal reflections of the CCD window as a main cause of the ghost image is shown in the top panel of Figure 4. The ZEMAX routine "ghost focus generator" was used for this analysis, and the results were analyzed. Double-bounce (2 reflections) ghosts are generated up to the CCD chip, considering the telescope's primary mirror, secondary mirror, filter, and CCD window. The final results are analyzed in terms of the most critical ghost-focalized point (closest ghost focus). After the analysis, it is observed that the most critical ghosts are due to the CCD window and the filter. The ghost-focalized point due to the CCD window is around 5.14mm before the CCD image plane, and that due to the filter is around 6.51mm before the CCD image plane. A 99% anti-reflection coating (1% residual reflection per surface) was considered for the filter, CCD window, and CCD chip. The ghost image diameter due to the window on the CCD chip is therefore around 570 microns, which is 360 times greater than the characteristic size of the image spot (the size of two pixels). It means that the ratio between the lighting is 360/0.0001, which is 3.6\(\times\)10\({}^{6}\). The same was simulated in the non-sequential mode of ZEMAX (as shown in the bottom panel of Figure 4). The detector viewer figure shows the intensity values (in log-15 scale) of actual star images (red coloured) and their corresponding ghost pupils (green clouded). These ghost images due to the CCD window or filter are insignificant, and they are not visible along with the original image in regular observations, considering the dynamic range of the CCD and the magnitudes of the objects.
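The quoted ghost size follows from simple cone geometry: a double-bounce ghost focused a distance \(\Delta z\) before the chip is spread by the f/9 beam into a blur of diameter \(\Delta z/9\), and two \(\sim\)1% residual reflections leave \(\sim\)10\({}^{-4}\) of the flux in the ghost. A minimal sketch:

```python
# Geometric check of the ghost sizes: a ghost focused dz before the chip
# is spread by the f/9 cone into a blur of diameter dz/9; two ~1%
# residual reflections leave ~1e-4 of the flux in the ghost.
F_RATIO = 9.0

def ghost_blur_um(defocus_mm: float) -> float:
    return defocus_mm / F_RATIO * 1e3        # blur diameter in microns

print(f"CCD window ghost: {ghost_blur_um(5.14):.0f} microns")  # ~571, cf. ~570 quoted
print(f"Filter ghost:     {ghost_blur_um(6.51):.0f} microns")  # ~723
print(f"Relative ghost flux: {0.01**2:.0e}")                   # 1e-04, as in the text
```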
It was concluded from the ZEMAX simulations that the shadow region at the four sides of the CCD was neither due to ghost reflections from the filters and CCD window nor due to vignetting from the slit wheel slot sizes. It was clear that the issue was not due to the instrument itself; rather, it originated outside the instrument setup, in particular at the Adapter Rotator Instrument Support Structure (ARISS) telescope flange. The existing baffle was investigated (see the top panel of Figure 5), and it was found that the existing baffle was much bigger than the beam size at the telescope flange, as shown in the middle panel of Figure 5. The actual beam sizes at the telescope flange and the Imager entrance aperture are shown in the bottom panel of Figure 5. Unwanted light from outside the field was coming through this baffle. To avoid this unwanted light, a new baffle was designed according to the beam size at the telescope flange and Imager entrance window, by simulating the beam in ZEMAX for the full field of 6.5'\(\times\)6.5' (see the right panel of Figure 6). The baffle has been kept slightly larger than the actual beam footprints to allow for the existing Imager setup and its tolerances (an image of the new baffle is shown in the left panel of Figure 6).

Figure 1: Left panel: 4K\(\times\)4K CCD Imager mounted on the axial port of 3.6m DOT. Right panel: vignetting feature at the four sides of CCD Imager.
After finalising the sizes from the simulations, a baffle was made from an existing mild-steel sheet with the help of the mechanical workshop. A dull black paper was glued to the entire surface of the baffle to restrict unwanted stray-light scattering. The old baffle of the 4K\(\times\)4K CCD Imager was replaced with this new one. The 4K\(\times\)4K CCD Imager was then mounted with the new baffle, and images were taken to test the shadow feature at the four sides of the CCD. After integrating the new baffle with the 4K\(\times\)4K CCD Imager, the shadow feature completely disappeared.

Figure 2: Top panel: the shaded model of 3.6m telescope with Imager optics. Bottom panel: optical layout of the 4K\(\times\)4K CCD Imager system.
In all, there is no vignetting due to the filters/wheel if all mechanical distances (telescope flange to filters to CCD chip plane) are assumed accurate. A ghost analysis was carried out to check the ghost effect due to the filters and CCD window; the ghosts due to the filters and CCD window are negligible and not visible in the original image. Finally, it was concluded that the shadow region was due to unwanted light entering through the large opening of the previous baffle. The problem was resolved after mounting the new baffle, which was designed, manufactured, assembled, and tested within the ARIES workshop. The left and right panels of Figure 7 show the setup of the Imager with the new baffle and a shadow-free image, respectively.
Figure 3: Top panel: 3D layout of beam path of the 4K\(\times\)4K CCD Imager system. Bottom panel: full beam size on filter surface as seen by the ZEMAX file for the set-up.
## 3 Mechanical design and analysis of the filter-housing of the 4K\(\times\)4K Imager
The primary mirror of the 3.6m DOT is actively supported by 69 actuators, which correct the surface deformations using feedback from a wavefront camera mounted in the ARISS unit of the telescope. These actuators also leave suitable space at the axial port of the telescope. The 4K\(\times\)4K CCD Imager was mounted in 2015-2016 at the Devasthal astronomical site, India, and has since been successfully operated for deep optical photometric observations of galactic and extra-galactic sources (Pandey _et al._, 2018; Kumar _et al._, 2022b).

Figure 4: Top panel: ghost image due to internal reflections of CCD window in sequential mode. Bottom Panel: ghost image due to filters, CCD window in non-sequential mode.
Figure 5: Top panel: existing baffle view. Middle panel: beam size at telescope flange. Bottom panel: beam size at Imager entrance point.
### Description of Mechanical Design
The Imager consists of a housing structure, support arms, a filter wheel assembly, and the 4K\(\times\)4K CCD. The housing is the main structure of the Imager instrument; it houses the two-filter-wheel assembly and provides stable support to the CCD camera through a precisely machined bottom plate (see the right and left panels of Figure 8). The housing structure is a ribbed aluminium cast body of size 850\(\times\)560\(\times\)140mm, and the bottom plate is a ribbed AL6061-T6 plate machined on a CNC.
The filter wheel assembly has two filter wheels of diameter 482mm each, with drive mechanisms comprising bearing housings, motors, a gearbox, electronic sensors, etc. Each filter wheel has six pockets: five for filters of size 90mm\(\times\)90mm and one _clear_ pocket. The two filter wheels are mounted parallel to each other with a gap of 10mm. The filter wheels are fabricated in Al 6061-T6 alloy to reduce the inertia and black anodized to avoid corrosion.
The three arms of the Imager instrument provide stable fixation support in the 3.6m DOT dummy structure. The arms are kept at 120 degrees to one another in the dummy structure to simplify the housing structure and to provide access for filter replacement without disassembling the instrument. The baffle cover was designed at a later stage to avoid stray light in the telescope dome, and its conical shape was changed to a cylindrical shape to avoid light vignetting during observations.

Figure 6: Left panel: close view of baffle simulated in ZEMAX non-sequential mode. Right panel: new baffle as finally mounted with the CCD Imager.

Figure 7: Left panel: Imager mounted at the axial port of the 3.6m DOT with the new light baffle. Various parts of the CCD Imager are also marked for clarity; the figure is adapted from Kumar _et al._ (2022b). Right panel: vignetting-free flat frame taken after placing new light baffle.
To achieve good image quality with this instrument, the positions of the filters in the filter wheels and of the CCD chip must remain in line with the optical axis of the telescope, within the allowable margins of shift and tilt estimated in the optical design software. Since gravity and thermal effects will disturb these positions, a proper design evaluation is necessary to achieve the tolerance limits estimated in the optical design.
### Finite Element Analysis (FEA) and design optimizations
The performance of the Imager instrument depends strongly on a suitable design of the housing structure and arms, on the positioning of the filter wheels, on providing stable support to the CCD, and on how well these designs account for gravity and thermal distortions. These effects are considered both individually and through system-level analysis in FEA. The system model reflects the preliminary design, incorporating an optimized ribbed housing-structure topology, the filter wheel weight, and a camera dummy. Comprehensive iterations and systematic studies were carried out to select the positions and number of arms required, and the integration and assembly aspects of these arms were also finalized.
Figure 8: Left panel: image representing the Imager housing structure, support arm, filter wheel assembly, and 4K\(\times\)4K CCD. Right panel: image showing the light baffle + filter assembly + Imager mounted at the axial port of the 3.6m DOT.
Figure 9: System level finite element model with CCD, arm and filter wheels.
Following the FEA, the three arms were designed in steel to avoid distortion and to provide stiff support to the housing structure (see Figure 9).
The housing structure was changed to aluminium casting to reduce the manufacturing cost, and the thickness of the body was reduced wherever possible. Ribs were added to increase the strength of the housing structure, and brass inserts were incorporated to avoid wear and tear on threaded holes. Since the housing-structure optimization was performed on a finite element model, it was necessary to re-tune the housing structure to account for the gravity and thermal distortion effects that cause the system to behave slightly differently at different telescope elevation angles. The effect of the CCD weight was incorporated through a frictional-joint coefficient applied to the housing structure, an option available in the Ansys software. The thickness of the housing structure, the arms, the filter wheel weight, etc. were also chosen to keep the distortion within the optical design tolerance limits. The filter position and CCD chip shift were kept within a pixel size.
The tilt values of the filters and the CCD chip positions were kept within the optical tolerance of 10 arc minutes estimated in the optical design of Pandey \(et\)\(al.\) (2018). The stress and strain in the designs were below the safe limits of the material properties. Figure 10 shows the FEA-estimated deflection of the Imager instrument for the telescope pointing at the zenith and at zero elevation angle.
## 4 Upgradations of the motorized filter wheel
The 3.6m DOT Imager instrument consists of two filter wheels, which are used for imaging at different wavelengths. The first filter wheel has six filter positions, namely \(U\), \(B\), \(V\), \(R\), \(I\) and \(C\) (\(clear\)), and similarly the second one also has six positions, namely \(u\), \(g\), \(r\), \(i\), \(z\) and \(c\) (\(clear\)). A motorized filter wheel controller has been developed and implemented for controlling the above-mentioned filter wheel system. The controller is based on a Microchip PIC 18F4431 microcontroller, and the wheels are rotated with the help of stepper motors. The controller communicates with the PC GUI through an Ethernet-to-RS232 converter.
### Filter wheel mechanism
The circular filter wheel has gear teeth machined on its periphery, and it is coupled to a stepper motor through a flexible coupling and gearbox. A combination of one Hall-effect vane sensor and one ferrous vane interrupter (aligned with each other) is mounted on each wheel at a different position for initial referencing. The sensors are mounted on the fixed part of the wheel and the interrupters on the moving part. The optical filters are housed in the wheel mechanism, and the required positioning is obtained by rotating the wheel with the stepper motor.
Figure 10: Finite element analysis (FEA) estimated deflection of the Imager instrument for the telescope pointing at the zenith and at zero elevation angle. The change of tilt values at the chip/filter is within 10 arc minutes.
#### 4.1.1 Controller Architecture
The controller is implemented using a Microchip PIC 18F4431 microcontroller, which has a Serial Communications Interface (SCI) and a Universal Synchronous Asynchronous Receiver Transmitter (USART) module. The microcontroller has an ample number of digital I/Os and supports RS232 full-duplex communication with the help of a TTL-to-RS232 level shifter. The firmware of the microcontroller has been developed in the C language, and a cross compiler has been used to convert the C program into compatible hex code. The RD4 and RD5 pins of PORTD are used for the wheel 1 and 2 home-position sensors. The RB2, RB3, RB4, and RB5 pins of PORTB are used for controlling the stepper motors. The RC6 (TX) and RC7 (RX) pins of PORTC are used for interfacing with the RS232 port of a PC through the MAX232 transceiver. The RD6 and RD7 pins of PORTD are used to request CW and ACW rotation of filter wheel 2 in manual mode, and the RC0, RC1 and RC2 pins of PORTC are used to request homing of both wheels and CW and ACW rotation of filter wheel 1 in manual mode. Manual operations are performed using push-button switches mounted on the controller.
The entire electronics system consists of a single control box containing the microcontroller and interfacing circuit along with a power supply. It also contains an Ethernet-to-RS232 converter and the stepper motor driver units. The controller board has three ports: one for serial programming, a second for the PC interface, and a third for connecting to the sensors and the stepper motor driver unit of the filter wheel. The block diagram, the schematic diagram of the motorized filter wheel controller, and the PC GUI are shown in Figure 11, Figure 12 and Figure 13, respectively.
### Operations
Once the system is set up and powered, each wheel is rotated continuously (one after the other) until its home position is sensed. After homing, the controller keeps track of the number of steps and the direction, so that the exact position of the filter is known. The algorithm follows the shortest path in either direction to reach the desired filter position. The wheels are locked at each filter position, so the motor need not be powered when the wheel is stationary; this ensures minimum heat dissipation from the motor near the focal plane. The angle between one filter and the next is 60 degrees at the wheel shaft, and the gear ratio between the motor and the wheel shaft is 1:540. The stepper motor completes one rotation in 1000 steps, so moving from one filter to the next takes 90000 steps, as illustrated in the sketch below.
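The shortest-path logic and the step budget described above can be summarised in a few lines. The sketch below is purely illustrative (the actual firmware is written in C for the PIC microcontroller); the function name and the position indexing are our own.

```
# Illustrative sketch of the filter wheel shortest-path move (actual firmware is in C).
N_POSITIONS = 6           # six filter pockets per wheel, 60 degrees apart
GEAR_RATIO = 540          # motor revolutions per wheel revolution
STEPS_PER_REV = 1000      # stepper motor steps per motor revolution

# Steps to advance by one filter position: (1/6) wheel turn = 90 motor
# revolutions = 90000 steps.
STEPS_PER_POSITION = GEAR_RATIO * STEPS_PER_REV // N_POSITIONS

def shortest_move(current, target):
    """Return (signed positions to move, motor steps), taking the shortest path.

    Positions are indexed 0..5; a positive result means clockwise rotation.
    """
    delta = (target - current) % N_POSITIONS
    if delta > N_POSITIONS // 2:
        delta -= N_POSITIONS          # go anticlockwise instead
    return delta, abs(delta) * STEPS_PER_POSITION

# Example: moving from position 1 to position 5 is shortest anticlockwise.
print(shortest_move(1, 5))  # (-2, 180000)
```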
Initially, at POWER ON, both wheels are positioned at their \(clear\) positions. Once the PC GUI of the filter wheel is enabled, it sends a POWER ON status request to the controller and gets an acknowledgement. After POWER ON is acknowledged, it sends a current POSITION status request and receives the current position acknowledgement from the controller. The current POSITION status may be requested from the controller at any time once the power is on. The PC GUI has a provision to request individual movement to any filter wheel position. If the \(1^{st}\) filter wheel moves to any requested position, then the \(2^{nd}\) one will be moved to its \(clear\) position. Any request received from the PC GUI is acknowledged to the PC through the Ethernet-to-RS232 converter after the controller has executed it, and the controller is ready for the next request only after the previous one has been executed. During the CCD readout time, the filter wheels may be requested to move to the next required filter position, so that they can reach that position during the readout itself, saving time.
### Proposed modifications
The present system has only one combination of a sensor and an interrupter (aligned with each other), which is used for initial referencing. Filter wheel positioning is obtained by counting the stepper motor pulses. A flexible coupling is used to mechanically couple the filter wheel and the stepper motor. A positioning error is sometimes observed, due to a loose/slipping coupling or missed stepper-motor pulse counts, and this does not get displayed on the PC GUI; the PC GUI only displays that the filter is in the requested position.
The modified system will have two Hall-effect vane sensors and seven ferrous vane interrupters mounted on each wheel at different positions. The sensors will be mounted on the fixed part of the wheel and the interrupters on the moving part. A combination of one sensor and one interrupter (aligned with each other) will be used for initial referencing, while the remaining sensor together with the six interrupters (aligned with each other) will be used for exact position sensing and pulse counting. With this arrangement, an error will be displayed on the PC GUI if a wheel fails to reach its requested position within the certain time limit. Once the error gets displayed on the PC GUI, coupling issues may be rectified accordingly.
## 5 Fringe Correction
Back-illuminated thinned CCD images are often affected by fringe patterns, which result from the interference of monochromatic light within the CCD (Bernstein et al. 2017). Narrowband filters, and broadband filters containing strong sky emission lines, are typically affected by fringes. Lines due to atmospheric OI and OH affect the red-end bandpasses of the CCD (wavelength \(>\) 700 nm), i.e., mainly the i- and z-bands (e.g., Gullixson 1992, Howell 2006, 2012). Though fringes add only a small additional flux to the image, their removal is a necessity for scientific and cosmetic reasons.
Fringe patterns are also observed in red-end bandpass images taken with the Imager. Removal of the fringes requires an extra step, in which a description of the fringe pattern is scaled and subtracted from each image. The fringe patterns in the images are determined by the thickness variations of the CCD; therefore, the fringe patterns on the CCD are globally stable with time. This means that a single high signal-to-noise (S/N) fringe map may specify the fringe pattern. The fringe map can be generated from a series of night-sky frames taken with a significant jitter pattern: we median-combine the jittered night-sky frames so that only the fringe pattern is left. Once a fringe map has been constructed, it is scaled to the intensity of the fringe pattern in each science image and then subtracted, as sketched below.
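A minimal numpy sketch of this median-combine-and-subtract procedure is given below. It assumes the jittered sky frames have already been bias-corrected and flat-fielded; the simple exposure-time scaling and the function names are illustrative, not part of the Imager pipeline.

```
import numpy as np

def make_fringe_map(sky_frames, exptimes):
    """Median-combine jittered night-sky frames into a fringe map.

    sky_frames : list of 2D arrays (bias/flat corrected sky exposures)
    exptimes   : matching list of exposure times in seconds
    Returns a fringe map normalised to 1 second of exposure.
    """
    stack = np.array([f / t for f, t in zip(sky_frames, exptimes)])
    # Subtract each frame's sky level so only the fringe modulation remains.
    stack -= np.median(stack, axis=(1, 2), keepdims=True)
    return np.median(stack, axis=0)

def defringe(science, exptime, fringe_map):
    """Scale the fringe map by exposure time and subtract it from the image."""
    return science - exptime * fringe_map
```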
In general, the intensity of the fringe pattern becomes more prominent with increasing exposure time: for very short exposures the fringes are hardly visible, but for longer exposures they clearly stand out of the background. Fringe removal becomes critically important when dealing with the photometry of multiple or extended objects, or of faint sources, in order to provide properly uniform photometry across the image. As a first approximation, the fringe pattern can be scaled by the exposure time. \(IRAF\) tasks can be used for the fringe correction: the \(ccdproc\) task can be used, and the \(IRAF\) package also has the \(mkfringecor\) task, which is used to combine frames to construct a fringe pattern. Another approach in \(IRAF\) is first to scale the fringe map to globally minimize the difference between the map and the object frames, and then use the \(rmfringe\) and \(irmfringe\) tasks in the \(mscred\) package (Valdes, 1998).

Figure 13: A snapshot of the GUI utilized for the night observations using 4K\(\times\)4K CCD Imager.
In Figure 14 (left), we show a raw I-band image of a field. The fringes are clearly visible in the raw image, and we found that the level of fringing amounts to about 4-5% modulation. To generate the fringe pattern, we took blank-sky frames in the I-band. We used the \(IRAF\) tasks \(objmasks\), \(sflatcombine\), and \(rmfringe\) to generate the fringe pattern and apply the fringe correction to the raw image. After the fringe correction, the fringe pattern is completely removed (see Figure 14, right panel) and the image is flattened.
## 6 New Science Objectives
The 4K\(\times\)4K CCD Imager is one of the most suitable instruments for deep imaging of galactic and extra-galactic astronomical sources in a set of ten broad-band filters (Bessel \(UBVRI\) and SDSS \(ugriz\)). In addition, the longitudinal advantage of India and, particularly, of the 3.6m DOT site is crucial for observing transients down to deeper limits (Pandey \(et~{}al.\), 2018). This section highlights some of the recent initiatives suited to imaging with 4-m class optical telescopes and observed using the 4K\(\times\)4K CCD Imager.
### Extra-galactic Wolf-Rayet Stars
Stars with a zero-age main-sequence mass (M\({}_{ZAMS}\)) of \(\gtrsim\) 25 M\({}_{\odot}\) are expected to evolve into WR stars (Hamann \(et~{}al.\), 2006). The spectra of Wolf-Rayet (WR) stars exhibit features of heavy metals, as their high temperatures and strong stellar winds strip off the outer hydrogen envelopes. The outer-envelope stripping could occur through steady line-driven winds (Puls \(et~{}al.\), 2008), eruptive episodes of mass loss (Smith & Owocki, 2006) and/or mass transfer to a binary companion through Roche-lobe overflow (Yoon \(et~{}al.\), 2010). WR stars are thought to be the possible progenitors of most stripped-envelope supernovae (SESNe) and long gamma-ray bursts (GRBs), and hence are very important in understanding these explosions. To study WR stars and their environment, M101 (Messier 101 or NGC 5457, the Pinwheel Galaxy), situated in the Ursa Major constellation, is one of the ideal spiral galaxies because of its face-on orientation and distance of \(\approx\)6.4 Mpc (Shappee & Stanek, 2011; Pledger \(et~{}al.\), 2018), having an \(R\)-band apparent magnitude of \(\sim\)7.76 mag as observed multiple times under the SDSS program. M101 also hosted a type Ia SN, SN 2011fe, which possibly originated from the death of a luminous red giant star (Li \(et\)\(al.\), 2011). Due to the large angular size and proximity of M101, the stellar content and its evolution in the spiral arms can be studied in detail.

Figure 14: An I-band image taken with the Imager before (left) and after fringe correction (right).
We also attempted to detect the WR stars situated in the crowded spiral arms of M101 with the 4K\(\times\)4K CCD Imager. The \(R\)-band image of M101, with a field of view of \(\sim\)6.5\({}^{\prime}\times\)6.5\({}^{\prime}\) and an exposure time of 300 sec, observed on 2020-03-03, is shown in Figure 15. Out of the 15 WR stars in M101 detected by Pledger \(et\)\(al.\) (2018), we are able to locate 9 WR stars (IDs: 49, 56, 112, 114, 1012, 1016, 1024, 1030 and 2053), which are highlighted with circles. Four of the nine encircled WR stars are well resolved and detected above the 3-sigma limit (IDs: 49, 112, 114 and 1024; see Figure 15), and are hence suitable for photometry. Larger apertures are needed to estimate the total flux of the stars, but a larger aperture also admits more flux from the sky and from the extended galaxy, which can lead to larger uncertainties in the stellar fluxes. Therefore, we initially chose small apertures (1 \(\times\) FWHM) and applied an aperture correction later. We chose multiple isolated, bright field stars to estimate the aperture correction: the fluxes of the field stars were measured for 1 \(\times\) FWHM and 4 \(\times\) FWHM aperture radii, and the aperture correction factor was calculated from them (a sketch of this calculation is given below). As the WR stars mentioned above lie in crowded galaxy arms, PSF photometry was performed with a PSF radius of 1 \(\times\) FWHM, and the aperture correction term was then applied to estimate the total fluxes. These tasks were performed using the Python scripts hosted on RedPipe (Singh, 2021). The same standard stars used for the aperture correction were also used for the calibration, based on their \(R\)-band magnitudes quoted in USNO-B1. The final estimated magnitudes of the WR stars are tabulated in Table 1, shown in bold. The \(R\)-band magnitudes for these WR stars are reported for the first time and are not available in the literature; they are therefore compared with the \(F555W\)-band magnitudes estimated by Pledger \(et\)\(al.\) (2018). For the four WR stars discussed above, the colour terms between the \(F555W\) and \(R\)-band magnitudes are nearly the same, with a scatter of \(\sim\)0.2 mag, which may be attributed to possible temporal variability. The ID numbers of each WR star mentioned in this analysis are the same as those in Tables 3, 4, and 5 of Pledger \(et\)\(al.\) (2018), where more details about these stars are published. The present analysis demonstrates the capability of the 4K\(\times\)4K CCD Imager plus 3.6m DOT to resolve extra-galactic stars embedded in their parent galaxies.
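The aperture-correction step described above can be sketched as follows. This is an illustrative reconstruction rather than the RedPipe code itself; it assumes the photutils package, and the array and variable names (image data, star positions, FWHM) are invented for demonstration.

```
import numpy as np
from photutils.aperture import CircularAperture, aperture_photometry

def aperture_correction(data, xy_field_stars, fwhm):
    """Estimate the small-to-large aperture correction (in magnitudes)
    from isolated bright field stars, as described in the text."""
    small = CircularAperture(xy_field_stars, r=1.0 * fwhm)
    large = CircularAperture(xy_field_stars, r=4.0 * fwhm)
    f_small = np.array(aperture_photometry(data, small)["aperture_sum"])
    f_large = np.array(aperture_photometry(data, large)["aperture_sum"])
    # Correction to add to magnitudes measured in the 1xFWHM aperture
    # (negative, since the larger aperture collects more flux).
    return np.median(-2.5 * np.log10(f_large / f_small))

# Usage: m_total = m_small_aperture + aperture_correction(data, stars, fwhm)
```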
Figure 15: \(R\)-band frame of M101 observed on 2020-03-03 with a field-of-view of \(\sim\)6.5\({}^{\prime}\)\(\times\)6.5\({}^{\prime}\) and exposure time of 300 sec with the 4K\(\times\)4K CCD Imager. Out of 15 regions hosting WR stars (as reported in Pledger \(et\)\(al.\) (2018)), 9 regions were identified in our observations, demonstrating the capabilities of the 3.6m DOT for such studies.
### Abell Galaxy Cluster
Galaxy clusters are gravitationally bound structures containing hundreds of galaxies and X-ray-emitting hot intracluster plasma, but their mass budget is dominated by dark matter. One of the well-known survey catalogues, Abell, includes nearly 4000 galaxy clusters up to \(z\sim\)0.2 (Abell, 1958; Abell \(et\)\(al.\), 1989). The galaxy cluster with the Abell, Corwin and Olowin (ACO) number 1689 (RA: 13h:11m:29.5s, DEC: -01\({}^{\circ}\)20\({}^{\prime}\)17\({}^{\prime\prime}\); J2000), situated in the \(Virgo\) constellation, is interesting to investigate because of some of its peculiar characteristics. ACO 1689 is one of the most massive giant galaxy clusters observed so far (it contains around 0.16 million globular clusters, the largest population ever found in any galaxy cluster) (Tyson & Fischer, 1995), and it presents significant gravitational lensing (Limousin \(et\)\(al.\), 2007; Ghosh \(et\)\(al.\), 2022).
\begin{table}
\begin{tabular}{l c c c c} \hline ID & SDSS \(r\)-band & err & Bessel R-band & err \\ \hline
1 & 18.45 & 0.02 & 18.13 & 0.08 \\
2 & 18.30 & 0.01 & 17.90 & 0.06 \\
3 & 20.68 & 0.06 & 20.18 & 0.14 \\
4 & 20.87 & 0.06 & 20.21 & 0.15 \\ \hline \end{tabular}
\end{table}
Table 2: The R-band magnitudes of the four galaxies (IDs 1, 2, 3, and 4) are tabulated (in bold) and are also compared with the SDSS \(r\)-band magnitudes.
Figure 16: Stacked \(R\)-band frame (3\(\times\)200 sec = 600 sec) of the ACO 1689 observed using the 4K\(\times\)4K CCD Imager on 2017-04-28.
\begin{table}
\begin{tabular}{l c c c c} \hline ID & m\({}_{F555W}\) & err & R-band & err \\ \hline
49 & 24.33 & 0.02 & 21.79 & 0.16 \\
112 & 22.71 & 0.04 & 20.23 & 0.14 \\
114 & 20.60 & 0.04 & 18.25 & 0.11 \\
1024 & 23.81 & 0.03 & 21.50 & 0.15 \\ \hline \end{tabular}
\end{table}
Table 1: The \(R\)-band magnitudes of the four WR stars with IDs 49, 112, 114, and 1024 are shown in bold. The estimated magnitudes are also compared with the \(F555W\)-band (central wavelength = 5308.4 \(\AA\) and width = 1565.4 \(\AA\)) magnitudes reported by Pledger _et al._ (2018).
Lensing effects in galaxy clusters are most powerful in the central regions (\(<\) 0.3 Mpc), which have high concentrations of both baryonic and dark matter, often producing multiple images; such regions serve as natural laboratories to study possible relations between dark matter and baryons and the underlying physical processes. Additionally, strong gravitational lensing within galaxy clusters is a powerful probe of the high-redshift universe, as clusters can magnify faint distant background sources, making them observable with moderate to large size telescopes (Bartelmann, 2010). ACO 1689 is one of the best-studied clusters, with more than 100 multiple images revealed from 30 sources (Broadhurst _et al._, 2005).
In the current analysis, we present the \(R\)-band image of ACO 1689, observed on 2017-04-28 with the 4K\(\times\)4K CCD Imager at the 3.6m DOT as part of calibration test observations (see Figure 16). After pre-processing and alignment, three individual \(R\)-band images, each of 200 sec, were stacked to achieve a better signal-to-noise ratio. To highlight the capability of the 3.6m DOT + 4K\(\times\)4K CCD Imager in observing extended objects, we also performed photometry of four randomly selected member galaxies spanning a range of brightness (down to \(\sim\) 20.2 mag), marked with IDs 1, 2, 3 and 4 in Figure 16. The photometric analysis was accomplished using SEP (Source Extractor as a Python library), which is suitable for faint-galaxy photometry because of its ability to measure accurate PSFs and perform galaxy model fitting (Bertin & Arnouts, 1996; Barbary, 2016); a minimal example of its use is sketched below. The estimated \(R\)-band magnitudes for the four galaxies are tabulated in Table 2 and compared with the SDSS \(r\)-band magnitudes. The \(R-r\) colour terms for the four galaxies discussed here are consistent with the colour equation discussed by Jordi _et al._ (2006). Our observations based on deep imaging of ACO 1689 in the \(R\)-band demonstrate the capabilities of the 3.6m DOT for such studies. We have been able to detect many galaxies and lensed sources within the central region of the cluster down to a limiting magnitude of \(\sim\)21 mag in the \(R\)-band in an exposure time of 600s. The detailed analysis of these observations and the photometric procedures is ongoing and will be published elsewhere.
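A minimal sketch of SEP-based photometry along these lines is shown below; the detection threshold and aperture radius are illustrative values, not those used for the ACO 1689 analysis.

```
import numpy as np
import sep

def extract_and_measure(image):
    """Background-subtract an image, extract sources with SEP, and
    measure fluxes in circular apertures (illustrative parameters)."""
    data = image.astype(np.float64)      # SEP requires native-endian float data
    bkg = sep.Background(data)           # spatially varying background model
    data_sub = data - bkg.back()
    objects = sep.extract(data_sub, thresh=1.5, err=bkg.globalrms)
    flux, fluxerr, flag = sep.sum_circle(
        data_sub, objects["x"], objects["y"], r=5.0, err=bkg.globalrms
    )
    return objects, flux, fluxerr

# Instrumental magnitudes then follow as m = zp - 2.5*log10(flux).
```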
### Gamma-Ray Bursts and host galaxies
GRBs are among the most exotic and luminous (L\({}_{\gamma,\rm iso}\sim\) 10\({}^{48}\)\(-\) 10\({}^{54}\) erg/s) phenomena studied in modern astronomy (Piran, 2004). They have two distinct emission phases: the prompt emission (the initial burst phase, peaking in the sub-MeV energy range), followed by a long-lived multiwavelength afterglow phase (Kumar & Zhang, 2015). The longitudinal advantage of the 3.6m DOT and the deep imaging capabilities of the 4K\(\times\)4K CCD Imager together provide a unique opportunity to perform deep and extended follow-up observations of the optical counterparts and associated host galaxies of GRBs. Recently, the 3.6m DOT + 4K\(\times\)4K CCD Imager yielded several interesting results, such as the detection of orphan and dark afterglows (Gupta _et al._, 2021c), the detection of the most delayed optical flare (originating from a refreshed shock) observed from any GRB so far (Kumar _et al._, 2022c), and the detection of a long GRB (GRB 211211A) from a binary merger (Gupta _et al._, 2021b). In addition, we studied a sample of host galaxies of GRBs observed using the 3.6m DOT + 4K\(\times\)4K CCD Imager and compared the results with a larger sample of normal star-forming galaxies. We noted that physical properties such as the star-formation rate, mass, and specific star-formation rate of the GRB host galaxies are similar to those of normal star-forming galaxies at higher redshifts (Gupta _et al._, 2022a).
In this work, we report host galaxy observations of one of the most interesting and nearest (\(z\) = 0.162) short GRBs, GRB 160821B. GRB 160821B was triggered and localized by the _Swift_ mission (Siegel _et al._, 2016). Troja _et al._ (2019) and Lamb _et al._ (2019) analysed the multiwavelength afterglow data and revealed an optical-infrared kilonova emission, powered by heavy-element nucleosynthesis in a binary neutron star (BNS) merger. Later, Acciari _et al._ (2021) reported the detection of very-high-energy (VHE) emission from GRB 160821B at a significance of \(\sim\) 3 \(\sigma\) using the MAGIC telescope, making it the first short GRB with TeV emission. The detection of VHE emission from short bursts is useful to constrain their possible progenitors and to enhance our understanding of BNS mergers (Acciari _et al._, 2021).
We observed the field of GRB 160821B using the 3.6m DOT + 4K\(\times\)4K CCD Imager in multiple broadband filters (\(BRI\)) \(\sim\)0.685 years post burst. Figure 17 shows the finding chart (RGB image) of the field of GRB 160821B taken using the 3.6m DOT + 4K\(\times\)4K CCD Imager. We noted that the associated afterglow/kilonova emission at the position of GRB 160821B had significantly faded, consistent with the typical decay rate. However, we clearly detected the associated host galaxy of GRB 160821B in all the filters. We also constrained the \(B\), \(R\) and \(I\)-band magnitudes of the host galaxy of GRB 160821B utilising SExtractor, as discussed in Section 6.2. The estimated \(B\), \(R\) and \(I\)-band magnitudes of the GRB host (\(\sim\)19.5 \(\pm\) 0.10, 19.16 \(\pm\) 0.06 and 19.02 \(\pm\) 0.09 mag, respectively) are consistent with those published earlier (\(B\)\(\sim\)19.6, Troja \(et~{}al.\) 2019; \(R\)\(\sim\)19.2, Xu \(et~{}al.\) 2016).
Such deep photometric observations of the host galaxy are crucial to estimate the metallicity, mass, age, star-formation rate, and other physical characteristics of GRB environments, and hence of their progenitors. Recently, Fong \(et~{}al.\) (2022) and Nugent \(et~{}al.\) (2022) studied the host galaxy properties of a larger sample of short GRBs, including the host of GRB 160821B, and calculated the following stellar population parameters for GRB 160821B using spectral energy distribution modelling of the host: log(M/M\({}_{\odot}\)) = 9.24\({}^{+0.01}_{-0.01}\), log(Z/Z\({}_{\odot}\)) = 0.1\({}^{+0.05}_{-0.05}\), \(t_{gal}\) = 0.58\({}^{+0.02}_{-0.02}\) Gyr, \(A_{V}\) = 0.01\({}^{+0.01}_{-0.01}\) mag, and SFR = 0.24\({}^{+0.01}_{-0.01}\) M\({}_{\odot}\)/yr, where M/M\({}_{\odot}\), Z/Z\({}_{\odot}\), \(t_{gal}\), \(A_{V}\), and SFR are the stellar mass formed, stellar metallicity, age of the galaxy, rest-frame dust attenuation in the host, and star-formation rate, respectively. The calculated physical parameters for GRB 160821B are in agreement with those of typical short-GRB host galaxies. The afterglow position of GRB 160821B is located at a projected physical distance of \(\sim\) 16 kpc from the centre of the associated spiral host galaxy; such large offset values are observed only in the case of short GRBs (Troja \(et~{}al.\), 2019). In the near future, we plan to continue deep, long-term follow-up photometric observations of the optical afterglows and host galaxies of new transients visible in the DOT sky, including during the LIGO O4 run, utilizing the unique capabilities of the 3.6m DOT + 4K\(\times\)4K CCD Imager and other back-end instruments.
## 7 Summary
This article summarises details of various sub-components of the 4K\(\times\)4K CCD Imager not published so far; these details were not covered in the earlier papers related to the instrument (Pandey \(et~{}al.\), 2018; Kumar \(et~{}al.\), 2022b). Issues noticed in the images, such as partial vignetting towards the edges, were understood and resolved with the help of a modified light baffle now installed with the Imager, as discussed in detail in Section 2. Section 3 summarised the mechanical design and related analysis, such as the FEA of the filter housing of the Imager.
Figure 17: An \(RGB\) image prepared using the \(I\), \(R\), and \(B\) band frames of the GRB 160821B field observed on 2017-04-28 using the CCD imager. Each frame of \(BRI\) bands exhibited exposure times of 600 sec. The position of GRB 160821B is also marked with a circle. GRB 160821B was not detected up to 3\(\sigma\) upper limit of B \(\approx\)22.2 mag at the epoch of our observations. |
2304.14956 | PAO: A general particle swarm algorithm with exact dynamics and closed-form transition densities | A great deal of research has been conducted in the consideration of meta-heuristic optimisation methods that are able to find global optima in settings that gradient based optimisers have traditionally struggled. Of these, so-called particle swarm optimisation (PSO) approaches have proven to be highly effective in a number of application areas. Given the maturity of the PSO field, it is likely that novel variants of the PSO algorithm stand to offer only marginal gains in terms of performance -- there is, after all, no free lunch. Instead of only chasing performance on suites of benchmark optimisation functions, it is argued herein that research effort is better placed in the pursuit of algorithms that also have other useful properties. In this work, a highly-general, interpretable variant of the PSO algorithm -- particle attractor algorithm (PAO) -- is proposed. Furthermore, the algorithm is designed such that the transition densities (describing the motions of the particles from one generation to the next) can be computed exactly in closed form for each step. Access to closed-form transition densities has important ramifications for the closely-related field of Sequential Monte Carlo (SMC). In order to demonstrate that the useful properties do not come at the cost of performance, PAO is compared to several other state-of-the art heuristic optimisation algorithms in a benchmark comparison study. | Max D. Champneys, Timothy J. Rogers | 2023-04-28T16:19:27Z | http://arxiv.org/abs/2304.14956v1 |

# PAO: A general particle swarm algorithm with exact dynamics and closed-form transition densities
###### Abstract
A great deal of research has been conducted in the consideration of meta-heuristic optimisation methods that are able to find global optima in settings where gradient-based optimisers have traditionally struggled. Of these, so-called particle swarm optimisation (PSO) approaches have proven to be highly effective in a number of application areas. Given the maturity of the PSO field, it is likely that novel variants of the PSO algorithm stand to offer only marginal gains in terms of performance -- there is, after all, no free lunch. Instead of only chasing performance on suites of benchmark optimisation functions, it is argued herein that research effort is better placed in the pursuit of algorithms that also have other useful properties. In this work, a highly-general, interpretable variant of the PSO algorithm -- the particle attractor algorithm (PAO) -- is proposed. Furthermore, the algorithm is designed such that the transition densities (describing the motions of the particles from one generation to the next) can be computed exactly in closed form for each step. Access to closed-form transition densities has important ramifications for the closely-related field of Sequential Monte Carlo (SMC). In order to demonstrate that the useful properties do not come at the cost of performance, PAO is compared to several other state-of-the-art heuristic optimisation algorithms in a benchmark comparison study.
_Keywords_: Meta-heuristic optimisation; Particle swarm optimisation; Stochastic differential equations
## 1 Introduction
A great number of challenges in science, engineering and beyond can be cast as optimisation problems. Indeed, a great deal of research effort has been expended on the specification of methods that are able to provide optimal parameters for a given objective function. Of particular interest are approaches that can perform the optimisation without access to gradient information. So-called _heuristic methods_ have attracted a great deal of research interest over many decades. Countless methods have already been proposed, ranging from early genetic algorithms [1] to more recent evolutionarily inspired approaches; recent comprehensive reviews of the field can be found in [2, 3]. Despite strong empirical performance, the no-free-lunch theorem for optimisation [4] precludes the existence of a single 'best' approach: for any two optimisation algorithms it is always possible to find an optimisation task upon which one algorithm outperforms the other (and _vice versa_).
Although no algorithm can be proven to be better than any other, this does not mean that new approaches should not be sought out. Instead of placing the focus solely on benchmark performance, it is useful to consider other factors that make some heuristic methods more useful in practice. In this work, a novel methodology is proposed that incorporates _interpretable hyperparameters_, _exact dynamics_ and _closed-form transition densities_.
One issue (amongst many [5]) with several heuristic and meta-heuristic optimisation strategies is that the performance of the approach comes to depend strongly on the choice of several hyperparameters. In many optimisation settings there exists a trade-off between so-called exploration (sampling the search space for promising minima) and exploitation (greedily searching within a minimum for an optimum). Hyperparameter choice can play a key role in biasing the search towards one mode of optimisation or the other. A motivation of the current work is to make this trade-off as interpretable as possible.
A further motivation for the approach presented in this work is direct, closed-form access to the transition densities of the particles. The position of any particle in the next generation can be expressed as a Gaussian distribution that depends only on the previous value. This is desirable as it enables the optimisation process to be used as a proposal step within a sequential Monte Carlo (SMC) scheme [6], in order to provide robust uncertainty quantification in the presence of noisy objective functions.
### Background and related work
Optimisation algorithms inspired by nature and evolution have been a topic of considerable research since the early twentieth century. Although many algorithms have been proposed, there has been recent concern that many of these offer at best incremental novelty and at worst a deliberate deception, due to an inherent bias towards optimisation benchmark functions whose optima lie at the centre of the search space [5]. Although many new algorithms are published every year, most application papers rely on a core number of highly-cited methods and their variants, including ant colony optimisation [7], particle swarm optimisation (PSO) [8] and differential evolution (DE) [9].
Among the most intensively researched heuristics are those belonging to the particle swarm class. First proposed by Kennedy and Eberhart [8], the approach has been extensively studied and extended; applications of particle swarm methods can be found in fields as diverse as power systems [10] and image segmentation [11]. Since the original proposition, a great number of variants of the method have been proposed, including additive noise [12], quantum leaps [13], Gaussian attractors [14], ordinary differential equation attractors [15] and multiple constraints [16], as well as many others.
In this study, the PSO algorithm is treated as a stochastic differential equation (SDE). Casting the particle swarm as an SDE is not in itself a novel approach. In a series of papers Grassi et al. [17, 18] cast several variants of the particle swarm algorithm as SDEs in order to derive convergence results from mean-field limit theory. However, the derivations in [17, 18] result in nonlinear SDEs that cannot be solved exactly in order to recover closed-form transition densities.
### Contribution
Although particle swarms have been separately cast as SDEs [17, 18] and employed with additive stochastic terms [12], the authors are not presently aware of any methods that have employed both of these approaches with the specific aim of providing a particle swarm algorithm with a closed-form transition density.
In this paper, the authors propose a variant PSO approach: the particle attractor optimisation (PAO)2 algorithm. In the proposed approach, the motions of each element of each particle are given by the solution of a scalar stochastic differential equation with additive noise. Although the formulation bears a close resemblance to existing approaches, the linear stochastic formulation here comes with a number of distinct advantages over existing particle swarm methods:
Footnote 2: Pronounced ‘pau’.
* The dynamics of the particles are computed exactly and no difference-equation approximation is required.
* The method is highly flexible through the choice of the attractors.
* The hyperparameters of the approach are interpretable as the parameters of a dynamic system (damping ratio, natural frequency etc.).
* The forward transition densities of the particles are available in closed form.
## 2 Methodology
In this work, a generalised version of a particle swarm optimisation algorithm is proposed that enables the particle dynamics from one iteration to the next to be represented as a linear, time-invariant SDE with additive stochasticity. As in other PSO algorithms, the motions of the particles are driven by restoring forces that act in the direction of a number of attractors in the search space. The positions of the attractors are user-defined and updated each iteration. Choices for the attraction points might include:
* _Global best_: The best particle location found across all particles.
* _Local best_: The best location in the location history of a single particle.
* _Average local best_: The local best positions averaged across all particles3. Footnote 3: Care must be taken to avoid introducing centre-bias to the optimisation.
* _Average particle_: The average particle location.
* _Weighted average particle_: The average particle location weighted by the fitness scores.
* Attractors based on other meta-heuristic methods, e.g. 'rand/1/bin' from differential evolution [9].
* Stochastic attractors such as those in [14].
However, the formulation of the approach here does not depend on the choice of attractors. In order to ensure that the underlying SDE has an exact solution, stochasticity is introduced into the dynamics through forcing terms applied to each element of each particle, independently and additively. The noise process is modelled as a zero-mean Gaussian with variance given by,
\[\sigma^{2}=q_{0}\nu(\alpha) \tag{1}\]
where \(\nu\) is a user-defined function that depends only on the positions of the attractors, scaled by the hyperparameter \(q_{0}\). In this work, \(\nu(\alpha)\) is defined as the sum of the squared distances between the average and global best particle locations. The reasoning here is that the amount of noise should decrease as these attraction centres converge onto each other. The forces acting on each particle are depicted in Figure 1.
Mathematically, the motions of the particles can be described by a second-order differential equation. In order to simplify the forthcoming notation, let \(x=x_{ij}\) be the position of the \(j\)th element of the \(i\)th particle in the search space. Similarly, let \(\alpha_{r}=\alpha_{rij}\) be the position of the \(j\)th element of the \(i\)th particle for the \(r\)th attractor. Because the motions of the elements of the particles are all independent of one another, each can be represented by the second-order scalar differential equation,
\[m\ddot{x}+c\dot{x}+\sum_{r}k_{r}(x-\alpha_{r})=w(t) \tag{2}\]
where \(m\), \(c\) and \(k_{r}\) are hyperparameters relating to the inertia, viscous drag and relative weights of the attractors \(\alpha_{r}\), and each over-dot represents a time derivative. \(w(t)\) is a continuous-time white-noise process with zero mean and spectral density \(\sigma^{2}\).
The equation of motion above can be trivially recast into a first-order state-space form as,
\[\dot{\mathbf{x}}=\begin{bmatrix}\dot{x}^{\prime}\\ \dot{z}\end{bmatrix}=\begin{bmatrix}z\\ \frac{1}{m}\left(-cz-k^{\prime}x^{\prime}+w(t)\right)\end{bmatrix} \tag{3}\]
where the following coordinate transformations have been made,
\[k^{\prime}=\sum_{r}k_{r} \tag{4}\]
\[x^{\prime}=x-\frac{1}{k^{\prime}}\sum_{r}k_{r}\alpha_{r} \tag{5}\]
\[z=\dot{x}^{\prime} \tag{6}\]
To aid with the interpretability of the hyperparameters, it is convenient to express the viscous damping term in the above in terms of the damping ratio [19],
\[\zeta=\frac{c}{2\sqrt{k^{\prime}m}} \tag{7}\]
The above differential equation admits an Itô process representation of the form,
\[\mathrm{d}\mathbf{x}=F\mathbf{x}\ \mathrm{d}t+\mathbf{L}\ \mathrm{d}\beta \tag{8}\]
whereby
\[F=\begin{bmatrix}0&1\\ -\frac{k^{\prime}}{m}&-2\sqrt{\frac{k^{\prime}}{m}}\zeta\end{bmatrix} \tag{9}\]
\[\mathbf{L}=\begin{bmatrix}0\\ 1\end{bmatrix} \tag{10}\]
Figure 1: Visualisation of the motions of a single particle in the proposed algorithm. The particle moves as if attached to linear elastic springs at each attraction point. The movement of the particle is subject to inertial and viscosity terms that resist motion. Stochasticity is introduced as a Gaussian white-noise excitation term. The red arrow represents the restoring force from the equivalent single spring, which acts towards the overall attractor weighted by the stiffness terms.
and \(\beta\) is the formal Brownian motion process in one dimension with diffusion coefficient \(Q=\sigma^{2}\) (the spectral density of the driving white-noise process). If one assumes that \(m\), \(c\), \(k^{\prime}\), \(\alpha_{r}\) and \(q_{0}\) are all constant in the interval \([t_{0},t]\),4 then the solution to (8) is a standard result in the SDE literature [20]. In order to keep this paper self-contained, the salient results are reproduced here. The general solution to (8) is given by,
Footnote 4: This is an implicit assumption in the discretisation of the dynamics in practically every particle swarm optimisation method.
\[\mathbf{x}(t)=\exp\left(F\left(t-t_{0}\right)\right)\boldsymbol{x}\left(t_{0 }\right)+\int_{t_{0}}^{t}\exp(F(t-\tau))\boldsymbol{L}\mathrm{d}\beta(\tau) \tag{11}\]
where \(\exp(\cdot)\) represents the matrix exponential operation. Denoting now,
\[A=\exp\left(F\Delta t\right)=\exp\left(F\left(t-t_{0}\right)\right) \tag{12}\]
then the time evolution of each element of each particle is given by,
\[\boldsymbol{x}_{t}=A\boldsymbol{x}_{0}+\int_{0}^{\Delta t}\exp\left(F\left(\Delta t-\tau\right)\right)\boldsymbol{L}\,\mathrm{d}\beta(\tau) \tag{13}\]
where \(\boldsymbol{x}_{0}\) is a given (deterministic) initial condition. The mean and covariance of the above can now be computed,
\[\mathbb{E}[\boldsymbol{x}_{t}]=A\boldsymbol{x}_{0} \tag{14}\]
\[\mathbb{E}[\boldsymbol{x}_{t}\boldsymbol{x}_{t}^{\mathsf{T}}]=\int_{0}^{\Delta t}\exp\left(F\left(\Delta t-\tau\right)\right)\boldsymbol{L}Q\boldsymbol{L}^{\mathsf{T}}\exp\left(F\left(\Delta t-\tau\right)\right)^{\mathsf{T}}\mathrm{d}\tau \tag{15}\]
\[\mathbb{E}[\boldsymbol{x}_{t}\boldsymbol{x}_{t}^{\mathsf{T}}]=\Sigma=\boldsymbol{L}Q\boldsymbol{L}^{\mathsf{T}}-A\boldsymbol{L}Q\boldsymbol{L}^{\mathsf{T}}A^{\mathsf{T}} \tag{16}\]
This implies that the dynamics of each element of each particle can be updated according to,
\[\boldsymbol{x}_{t+1}=A\boldsymbol{x}_{t}+\boldsymbol{d},\quad\boldsymbol{d} \sim\mathcal{N}(\boldsymbol{0},\Sigma) \tag{17}\]
The transition probability from \(\boldsymbol{x}_{t}\) to \(\boldsymbol{x}_{t+1}\) can therefore be written in the form of a Gaussian distribution,
\[p(\boldsymbol{x}_{t+1}|\boldsymbol{x}_{t})=\mathcal{N}(A\boldsymbol{x}_{t},\Sigma) \tag{18}\]
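For use as an SMC proposal, the density (18) can be evaluated directly. The following is a minimal sketch, assuming \(A\) and \(\Sigma\) have already been computed for the current step (for example via the matrix fraction decomposition discussed below); the function name is our own.

```
import numpy as np
from scipy.stats import multivariate_normal

def transition_logpdf(x_next, x_curr, A, Sigma):
    """Log transition density log p(x_{t+1} | x_t) from equation (18).

    x_next, x_curr : state vectors [position, velocity] in the transformed
                     (attractor-centred) coordinates for one particle element.
    A, Sigma       : discrete-time transition matrix and noise covariance.
    """
    return multivariate_normal.logpdf(x_next, mean=A @ x_curr, cov=Sigma)
```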
A naive implementation of the proposed algorithm is sketched in Algorithm 1.
#### Notes on implementation
Since the positions of the global and local best particles change in each iteration (and therefore so does the value of \(Q\)), it is necessary to recompute \(\Sigma\) each time the swarm moves (note that \(A\) depends only on fixed hyperparameters). Unfortunately, the relation,
\[\Sigma=\boldsymbol{L}Q\boldsymbol{L}^{\mathsf{T}}-A\boldsymbol{L}Q\boldsymbol {L}^{\mathsf{T}}A^{\mathsf{T}} \tag{19}\]
is prone to numerical instability, as both terms in the difference are similar in magnitude. Fortunately, the method of _matrix fraction decomposition_ can be employed to jointly calculate \(A\) and \(\Sigma\) using only a single matrix exponential. The interested reader is directed to [20] for a full treatment, but the salient result is,
\[\exp(\Phi\Delta t)=\exp\left(\begin{bmatrix}F&\mathbf{LQ}\mathbf{L}^{\mathsf{T}}\\ \mathbf{0}&-F^{\mathsf{T}}\end{bmatrix}\Delta t\right)=\begin{bmatrix}A&\Sigma(A^{- 1})^{\mathsf{T}}\\ \mathbf{0}&(A^{-1})^{\mathsf{T}}\end{bmatrix} \tag{20}\]
Thus, \(\Sigma\) can be read off as the upper-right block of \(\exp(\Phi\Delta t)\) post-multiplied by the transpose of the upper-left block.
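A minimal numerical sketch of this matrix fraction decomposition is given below, using scipy's matrix exponential. The hyperparameter names follow the text (\(m\), \(\zeta\), \(k^{\prime}\), \(Q\), \(\Delta t\)); this is an illustration rather than the authors' implementation.

```
import numpy as np
from scipy.linalg import expm

def discretise(m, zeta, k_prime, Q, dt):
    """Jointly compute A and Sigma via matrix fraction decomposition (eq. (20))."""
    F = np.array([[0.0, 1.0],
                  [-k_prime / m, -2.0 * np.sqrt(k_prime / m) * zeta]])
    L = np.array([[0.0], [1.0]])
    LQL = L @ (Q * L.T)                      # L Q L^T (Q is scalar here)
    Phi = np.block([[F, LQL],
                    [np.zeros((2, 2)), -F.T]])
    M = expm(Phi * dt)
    A = M[:2, :2]                            # upper-left block
    Sigma = M[:2, 2:] @ A.T                  # upper-right block times A^T
    return A, Sigma
```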
Computing the matrix exponential every iteration still adds undesirable computational complexity to the method. In order to significantly alleviate this, one notices that the covariance structure is independent for every element of every particle and can therefore be extracted as a constant in the computation. If one pulls out Q as a factor from the above, the evolution of the dynamics can thus be written,
\[\mathbf{x}_{t+1}=A\mathbf{x}_{t}+\sqrt{q_{0}\nu(\alpha)}\,\mathbf{d},\quad\mathbf{d}\sim \mathcal{N}(\mathbf{0},\Sigma) \tag{21}\]
where \(A\) and \(\Sigma\) can be precomputed. Further numerical advantage can be achieved by pre-computing the Cholesky decomposition of \(\Sigma\),
\[\Sigma=HH^{\mathsf{T}} \tag{22}\]
so that the above can be written,
\[\mathbf{x}_{t+1}=A\mathbf{x}_{t}+\sqrt{q_{0}\nu(\alpha)}H\mathbf{d},\quad\mathbf{ d}\sim\mathcal{N}(0,I_{2}) \tag{23}\]
The overall computation (for every element of every particle) can be efficiently implemented on modern architectures using tensor operations; a vectorised sketch of the per-generation update is given below. The proposed algorithm is sketched in Algorithm 2. In the notation below, \(A\odot B\) refers to a broadcast matrix-vector multiplication between the last two indices of the tensor \(A\) and the last axis of the tensor \(B\), and \(A\otimes B\) represents a tensor outer product.
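The following is a minimal vectorised sketch of the update (23) for a whole swarm, assuming precomputed \(A\) and Cholesky factor \(H\) (e.g. from the snippet above). The swarm is stored as a tensor of shape (N, D, 2) holding position and velocity for every element of every particle; the array names and the particular choice of \(\nu(\alpha)\) marked in the comments are illustrative.

```
import numpy as np

def swarm_step(states, alphas, k_r, A, H, q0):
    """One PAO generation for all particles at once (a sketch of eq. (23)).

    states : (N, D, 2) array; last axis holds [position, velocity]
    alphas : (R, N, D) attractor locations for each of R attractors
    k_r    : (R,) attractor stiffnesses
    A, H   : (2, 2) precomputed transition matrix and Cholesky factor of Sigma
    """
    k_prime = k_r.sum()
    # Weighted attractor centre (1/k') * sum_r k_r * alpha_r for every element.
    centre = np.einsum("r,rnd->nd", k_r, alphas) / k_prime
    # nu(alpha): here, squared distance between the mean particle and the
    # global best, with alphas[-1] assumed to hold the global best
    # (an illustrative choice of noise scale).
    nu = np.sum((states[..., 0].mean(axis=0) - alphas[-1][0]) ** 2)
    x = states.copy()
    x[..., 0] -= centre                                   # transformed coordinates
    x = x @ A.T + np.sqrt(q0 * nu) * (np.random.randn(*x.shape) @ H.T)
    x[..., 0] += centre                                   # back to the original frame
    return x
```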
```
Require: Number of particles \(N\), hyperparameters \(\{m,\zeta,k_{r},q_{0},\Delta t\}\)
  Initialise a swarm of \(N\) particle vectors \(\mathbf{x}_{j}\in\mathbb{R}^{D}\)
  Initialise the velocities of the \(N\) particle vectors
  Compute the value of the objective function for each particle
  for each generation do
    for each particle do
      Compute the positions of the attraction centres \(\alpha_{r}\)
      Compute the position of the particle in the transformed coordinates as \(\mathbf{x}^{\prime}=\mathbf{x}-\frac{1}{k^{\prime}}\sum_{r}k_{r}\alpha_{r}\)
      Compute the mean and covariance of the transition density as above
      Sample a new particle from \(\mathcal{N}(A\mathbf{x}_{t},\Sigma)\)
      Recover the position of the new particle in the original coordinates as \(\mathbf{x}=\mathbf{x}^{\prime}+\frac{1}{k^{\prime}}\sum_{r}k_{r}\alpha_{r}\)
      Compute the value of the objective function for the new particle
    end for
  end for
```
**Algorithm 1** Particle attractor optimisation (PAO)
## 3 Benchmark comparison study
In order to demonstrate that the proposed approach is competitive with other approaches, a simple benchmark study is presented. Although it is not the primary objective of this algorithm to out-perform state-of-the-art heuristics, it is important to establish that the proposed method remains effective at the task of optimisation, and that the benefits of interpretable hyperparameters, exact dynamics and closed-form transition densities do not come at the cost of optimisation performance.
In order to assess the performance of PAO, a suite of nine standard optimisation benchmark problems is considered. The problems are selected to demonstrate the effectiveness of the method in a number of challenging scenarios, and are collected in Table 1 in approximate order of difficulty.
For each benchmark function, both two and eight dimensional problems are considered. The performance of the proposed algorithm (PAO) is compared to several very popular optimisation approaches that have proven effective in a wide range of fields:
* Particle swarm optimisation (PSO) [8].
* Quantum particle swarm optimisation (QPSO) [13].
* Differential Evolution (DE) [9].
* Self-adapting differential evolution (SADE) [21].
All of the above algorithms (including PAO) are implemented with default hyperparameters from the freelunch python package developed by the authors.5 To best highlight the performance of the PAO approach, the attractors used in this study are simply the local-best and global-best particle locations, as in the standard PSO algorithm. For each of the nine benchmark functions, each optimiser was randomly initialised 100 times and run for 100 generations with a population size of 100. In all cases this corresponds to 100 runs of each optimiser with a budget of 10\({}^{4}\) function evaluations per run.
\begin{table}
\begin{tabular}{l l l l} Problem & Domain & \(f(\mathbf{x})\) & \(\mathbf{x}^{*}\) \\ \hline
De Jong’s function & \(x_{i}\in[-5.12,5.12]\) & \(\sum_{i}^{n}x_{i}^{2}\) & \(x_{i}=0\) \\
Hyper-ellipsoid function & \(x_{i}\in[-5.12,5.12]\) & \(\sum_{i}^{n}ix_{i}^{2}\) & \(x_{i}=0\) \\
Rotated Hyper-ellipsoid & \(x_{i}\in[-65.54,65.54]\) & \(\sum_{i}^{n}\sum_{j}^{i}x_{j}^{2}\) & \(x_{i}=0\) \\
Power-sum function & \(x_{i}\in[-1,1]\) & \(\sum_{i}^{n}|x_{i}|^{i+1}\) & \(x_{i}=0\) \\
Rosenbrock function & \(x_{i}\in[-2.048,2.048]\) & \(\sum_{i}^{n}[100(x_{i+1}-x_{i}^{2})^{2}+(1-x_{i})^{2}]\) & \(x_{i}=1\) \\
Griewangk’s function & \(x_{i}\in[-600,600]\) & \(\frac{1}{4000}\sum_{i}^{n}x_{i}^{2}-\prod_{i}^{n}\cos(\frac{x_{i}}{\sqrt{i}})+1\) & \(x_{i}=0\) \\
Rastrigin’s function & \(x_{i}\in[-5.12,5.12]\) & \(10n+\sum_{i}^{n}[x_{i}^{2}-10\cos(2\pi x_{i})]\) & \(x_{i}=0\) \\
Ackley’s function & \(x_{i}\in[-32.77,32.77]\) & \(-20e^{-0.2\sqrt{\frac{1}{n}\sum_{i}^{n}x_{i}^{2}}}-e^{\frac{1}{n}\sum_{i}^{n}\cos(2\pi x_{i})}+20+e\) & \(x_{i}=0\) \\
Schwefel’s function & \(x_{i}\in[-500,500]\) & \(\sum_{i}^{n}[-x_{i}\sin(\sqrt{|x_{i}|})]\) & \(x_{i}\approx 420.97\) \\
\end{tabular}
\end{table}
Table 1: Benchmark optimisation functions considered in this study.
Figures 2 and 3 depict convergence histories averaged over the 100 runs for each algorithm on each optimisation problem, for the 2D and 8D suites respectively. To aid comparison, the convergence histories in the figures are shifted such that the value of the objective function at the true optimum lies at zero, i.e. \(f(\mathbf{x}^{*})=0\). For the convenience of the reader, all parameters relating to the benchmark study and PAO are collected in Table 2.
## 4 Discussion
As can be seen from the figures, the PAO algorithm has performed well on the benchmark optimisation problems. In the 2D set of problems, the performance of PAO on the continuous, unimodal problems (from De Jong to Power-sum inclusive) is excellent, surpassed only by the SADE method, which is more computationally demanding. On the nonconvex and multimodal problems, the performance of PAO is very much in line with the other approaches, finding exact optima (within numerical tolerance) on both the Rastrigin and Ackley problems.
In the 8D suite of problems, the performance of PAO is excellent on the continuous multimodal benchmarks, clearly outstripping the other methods. In the other problems, PAO performs similarly to the other methods and manages to avoid becoming trapped in the poor minima on the Rastrigin and Griewangk functions that are observed with the PSO and DE methods. On the challenging Schwefel and Ackley functions, almost all methods (PAO included) were unable to locate the global minima in the higher-dimensional problems with default hyperparameters.
The benchmark results are especially impressive given that the attractors in the PAO implementation were simply set to be the local and global best particles, as in the standard PSO algorithm. Indeed, PAO has outperformed the PSO approach in every problem. It might be expected that the selection of more specialised attractors may enable the practitioner to input domain knowledge into the optimisation procedure, resulting in greater performance. For example, one could imagine that local minima might be avoided by using a stochastic attractor (in which the position of the attractor is drawn from some distribution for each particle) when the problem is known to be highly multi-modal.
Although the benchmark performance is encouraging, the authors would make the argument that it is not the only feature that should be considered in the assessment of a novel heuristic. As well as optimisation power, PAO has been shown to have a number of desirable features including interpretable hyperparameters, exact dynamics and closed-form transition densities.
The hyperparameters of the PAO approach are familiar to engineers as the inertial, damping and stiffness terms of a linear dynamic system. This choice is deliberate on the part of the authors in order to permit an interpretable way to select hyperparameters. Large values of the inertial parameter slow convergence and encourage exploration, whereas large values of the stiffness parameters accelerate convergence and promote exploitation. Particularly interesting is the choice of the damping ratio parameter \(\zeta\): as is the case for linear dynamics, values \(\zeta<1\) give rise to underdamped dynamics and oscillation, while values \(\zeta>1\) lead to overdamped dynamics and slow convergence. The choice of the stochastic scaling parameter \(q\) can be related to the excitation level of the dynamics and controls the extent to which the PAO algorithm is dominated by random search.
An advantage of PAO compared to other PSO methods is that the motions of the particles can be expressed exactly in terms of a time-step \(\Delta t\). Unlike methods that rely on discretisation to move the particles in successive iterations, the
\begin{table}
\begin{tabular}{l l l} Parameter & Description & Value \\ \hline \(m\) & PAO inertia coefficient & 1.0 \\ \(\zeta\) & PAO damping ratio & 0.2 \\ \(k_{r}\) & PAO stiffness parameters & 1.0 \\ \(q_{0}\) & PAO stochastic parameter & 1.0 \\ \(\Delta t\) & PAO integration interval & 1.0 \\ \(N\) & Population size (all optimisers) & 100 \\ \(G\) & Iterations per run (all optimisers) & 100 \\ \end{tabular}
\end{table}
Table 2: Parameters relating to the benchmark comparison study.
Figure 2: Convergence histories of the best particle (averaged over 100 runs of each optimiser) for the Molka benchmark suite in 2D.
Figure 3: Convergence histories of the best particle (averaged over 100 runs of each optimiser) for the Molka benchmark suite in 8D.
exact nature of the dynamics means that no error is accrued even as the value of \(\Delta t\) becomes very large. This means that \(\Delta t\) could conceivably be altered during the run in either a scheduled or an adaptive fashion, similar to the energy function in simulated annealing [22]. A scheduling approach could be used to promote large movements during the initial stages of the optimisation procedure (thus quickly moving to promising regions of the search space) and then more fine-grained motions close to convergence. In highly multi-modal problems, the opposite approach might be taken in order to attempt to escape minima as the particles converge.
A major advantage of the proposed approach is that the transition densities (parametrised by the integration interval \(\Delta t\)) are available in closed form. One particular advantage is that at the end of the optimisation run, the final transition density (averaged over every particle, weighted by objective score) might be viewed as a kind of uncertainty on the position of the optima. This could provide the practitioner with insight in the case of noisy objective functions such as those that are typically encountered when performing parameter identification tasks from data.
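To make the closed-form transition concrete, the following is a minimal sketch of one exact PAO-style update. The particular SDE parametrisation (a damped linear oscillator with mass \(m\), damping ratio \(\zeta\), stiffness \(k\) and white-noise forcing of strength \(q\), pulled towards a fixed attractor) and all names are our illustrative assumptions; the point is that the Gaussian transition mean and covariance over any interval \(\Delta t\) follow from a single matrix exponential via Van Loan's method, so no discretisation error is accrued.

```python
import numpy as np
from scipy.linalg import expm

def exact_step(x, v, attractor, m=1.0, zeta=0.2, k=1.0, q=1.0, dt=1.0, rng=None):
    """One exact update of a damped, stochastically forced linear oscillator
    pulled towards `attractor`, applied independently per coordinate.
    The shifted state z = (x - attractor, v) follows dz = A z dt + L dW,
    so the transition density is Gaussian with mean F z and covariance Qdt."""
    if rng is None:
        rng = np.random.default_rng()
    c = 2.0 * zeta * np.sqrt(k * m)                    # damping coefficient
    A = np.array([[0.0, 1.0], [-k / m, -c / m]])
    Qc = np.array([[0.0, 0.0], [0.0, (q / m) ** 2]])   # continuous-time diffusion

    # Van Loan's trick: one matrix exponential yields both F = e^{A dt} and
    # the integrated process noise Qdt = int_0^dt e^{A s} Qc e^{A^T s} ds.
    Z = np.zeros((2, 2))
    E = expm(np.block([[-A, Qc], [Z, A.T]]) * dt)
    F = E[2:, 2:].T
    Qdt = F @ E[:2, 2:]
    Qdt = 0.5 * (Qdt + Qdt.T)                          # symmetrise numerically

    L = np.linalg.cholesky(Qdt + 1e-12 * np.eye(2))    # small jitter for safety
    z = np.stack([x - attractor, v], axis=-1)          # shift attractor to origin
    z = z @ F.T + rng.standard_normal(z.shape) @ L.T
    return z[..., 0] + attractor, z[..., 1]            # new positions, velocities
```

Because \(F\) and \(Q_{\Delta t}\) are exact for any \(\Delta t\), the same routine would support the scheduled or adaptive time-step strategies discussed above without accumulating integration error.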
A more formal treatment of uncertainty can also be envisaged. Access to the transition densities would allow the PAO method to be used as the proposal step within an SMC scheme. This would enable a Bayesian viewpoint to be taken and would give access to valid posterior distributions over the positions of optima. The application of PAO within an SMC framework has the potential to drastically improve the convergence of SMC algorithms in line with other heuristic proposal schemes such as the DE-based approach in [23].
In this paper, a novel variant of the particle swarm algorithm has been proposed with a number of distinct advantages, including interpretable hyperparameters, exact dynamics and closed-form access to the transition density. The performance of the proposed approach has been shown to be in line with other common choices for heuristic optimisation, indicating that the advantages of the proposed approach do not come at the cost of reduced performance.
## Author contributions
The authors contributed equally to the conception and development of the method. MDC collected the data and wrote the manuscript.
## Acknowledgements
The authors would like to thank the Isaac Newton Institute for Mathematical Sciences, Cambridge, for support and hospitality during the programme Data Driven Engineering (DDE) where work on this paper was undertaken. This work was supported by EPSRC grant no EP/W002140/1.
|
2310.16006 | Machine-learning the phase diagram of a strongly-interacting Fermi gas | We determine the phase diagram of strongly correlated fermions in the
crossover from Bose-Einstein condensates of molecules (BEC) to Cooper pairs of
fermions (BCS) utilizing an artificial neural network. By applying advanced
image recognition techniques to the momentum distribution of the fermions, a
quantity which has been widely considered as featureless for providing
information about the condensed state, we measure the critical temperature and
show that it exhibits a maximum on the bosonic side of the crossover.
Additionally, we back-analyze the trained neural network and demonstrate that
it interprets physically relevant quantities. | M. Link, K. Gao, A. Kell, M. Breyer, D. Eberz, B. Rauf, M. Köhl | 2023-10-24T17:00:05Z | http://arxiv.org/abs/2310.16006v1 | # Machine-learning the phase diagram of a strongly-interacting Fermi gas
###### Abstract
We determine the phase diagram of strongly correlated fermions in the crossover from Bose-Einstein condensates of molecules (BEC) to Cooper pairs of fermions (BCS) utilizing an artificial neural network. By applying advanced image recognition techniques to the momentum distribution of the fermions, a quantity which has been widely considered as featureless for providing information about the condensed state, we measure the critical temperature and show that it exhibits a maximum on the bosonic side of the crossover. Additionally, we back-analyze the trained neural network and demonstrate that it interprets physically relevant quantities.
When an ensemble of attractively interacting fermions is cooled to below a critical temperature \(T_{c}\) it transitions from a normal phase into a superfluid or superconducting phase. The precise value of the phase transition temperature is governed by the microscopic details of the system, such as the interaction strength and interparticle correlations, and can exhibit non-trivial dependencies. For example, in the crossover from BCS to BEC, it has been theoretically predicted that the critical temperature depends non-monotonically on the interaction parameter [1; 2; 3; 4; 5; 6; 7; 8], see Figure 1a. The non-monotonic behaviour is rooted in the fundamental change of the nature of pairing below the critical temperature. For Cooper pairing (BCS) one expects an exponential dependence of \(T_{c}\) on the interaction strength whereas dimer pairing (BEC) implies a nearly constant \(T_{c}\). The division between the two regimes is not at unitarity but is expected to be on the BEC side of the crossover [9; 10]. In this manuscript we study the critical temperature across the BCS/BEC crossover using an artificial neural network to analyze the momentum-distribution of ultracold atomic Fermi gases.
A precision determination of the critical temperature across a broad range of interaction strengths has so far been hindered by insufficient experimental detection capabilities. One main challenge is that, upon release from the trap in a conventional time-of-flight study, Cooper pairs break and are not amenable to direct detection. Nevertheless, they leave a weak imprint onto the momentum distribution of the fermions. In Figure 1b we compare the momentum distribution of a homogeneous Fermi gas at a temperature of \(T/T_{F}=0.15\) (i.e. near the critical temperature) with the momentum distributions of BCS ground state wave functions for different interaction parameters (a short numerical sketch of these distributions is given below). The pairing signature is by far not as pronounced as the celebrated bimodal momentum distribution of a Bose-Einstein condensate, and therefore the detection of the condensate fraction is much more difficult. Additionally, finite temperature, interactions and the inhomogeneity of the harmonically trapped sample further obscure the pairing signature [11]. In order to detect the minuscule modifications of the momentum distribution in the time-of-flight images, we have developed and applied a neural network for advanced image recognition. We favour neural network processing over standard data fitting since the neural network is unbiased compared to applying a predetermined fitting function and therefore might detect physical signatures beyond a model-based analysis. Recently, applications of these sophisticated techniques have entered the field of quantum physics for the identification of phases of quantum matter [12; 13; 14; 15; 16; 17; 18]. However, even when successfully trained, artificial neural networks have acted as
Figure 1: **A** Sketch of the phase diagram across the BCS/BEC crossover including the critical temperature for condensation (solid line). BCS ground state is dominated by long-range Cooper pairs whereas the BEC exhibits dimer pairing of the fermions. **B** Momentum distribution of an ideal Fermi gas at a temperature of \(T/T_{F}=0.15\) (dashed line) in comparison with the momentum distribution of the BCS ground state for different interaction strengths: \(1/(k_{F}a)=-1\) (blue), \(1/(k_{F}a)=0\) (yellow), and \(1/(k_{F}a)=1\) (red). **C** Principle of the data analysis using a neural network. Condensate fractions are determined from time-of-flight images after a rapid-ramp. This information is used to label time-of-flight data for equal parameters but without rapid ramp. A neural network is trained on the labeled data to predict the condensate fraction.
"black boxes" hiding their decision criteria. Specifically, whether or not the network actually identifies physically relevant criteria for computing its output has remained obscure. Generally, the interpretation of neural networks and their causality is rather challenging and currently a major topic in computer science [19]. In this work, we demonstrate that the back-analysis of neural networks provides further details of the physics, which are not accessible by conventional means.
Experimentally, we prepare a quantum gas of \(\sim 3\cdot 10^{5}\) atoms per spin state in the two lowest hyperfine states \(|1\rangle\) and \(|2\rangle\) of \({}^{6}\)Li in an optical dipole trap, similar to our previous work [20]. We adjust the interaction strength of the sample by Feshbach resonance and the temperature by changing the trap [21]. The interaction and temperature are tuned independently of each other and the thermalized cloud is detected by absorption imaging after ballistic expansion, see Appendix.
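Returning to the comparison in Figure 1b, the two kinds of momentum distribution can be sketched in a few lines. Treating the gap \(\Delta\) and chemical potential \(\mu\) as given inputs (in units of \(E_{F}\)) is our simplification: their values across the crossover follow from the gap and number equations, which we do not reproduce.

```python
import numpy as np

k = np.linspace(0.0, 2.0, 400)                  # momentum in units of k_F

def n_ideal(k, T=0.15, mu=1.0):
    """Ideal Fermi gas occupation at T/T_F = 0.15; mu ~ E_F is approximate."""
    return 1.0 / (np.exp((k**2 - mu) / T) + 1.0)

def n_bcs(k, Delta, mu):
    """BCS ground-state occupation n_k = v_k^2 = (1 - xi_k/E_k)/2."""
    xi = k**2 - mu                               # free dispersion minus mu
    return 0.5 * (1.0 - xi / np.sqrt(xi**2 + Delta**2))
```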
The neural network employed for image analysis comprises three convolutional and pooling layers and three fully-connected layers, and is trained through stochastic gradient descent with the Adam optimizer [22], see Appendix. In order to train and validate our neural network, we employ a supervised learning method [23]. To this end, we measure two different density distributions after time-of-flight, see Figure 1c: (1) The density distribution \(n_{A}\) of the atoms directly released from the optical dipole trap. During the expansion, the Cooper pairs are broken and \(n_{A}\) is related to the momentum distribution of the fermions convolved with interaction effects during the expansion. (2) The density distribution \(n_{RR}\) after applying the rapid ramp technique [24; 25; 26; 27], which measures the momentum distribution of the molecules that have been created from the Cooper pairs. Even though this technique is expected to preserve the physics in many cases, there are open questions, both quantitative and of principle, about the adiabaticity of the ramp and how this might affect weak signatures such as small condensate fractions near the critical temperature. During the training process, we label input pictures of \(n_{A}\) with condensate fractions obtained from bimodal fits to \(n_{RR}\) at the same experimental parameters. We exclude data with temperatures near the critical temperature from learning. Moreover, in order to prevent the network from learning unwanted correlations between directly accessible parameters (such as atom number and condensate fraction), we use training data from different interaction values throughout the crossover at \(1/(k_{F}a)=\{1.6,1.0,0.5,0.0,-0.5,-0.6\}\), for a total of 7895 labeled examples. Here, \(k_{F}\) denotes the Fermi wave vector calculated from the atom number and the trap parameters, and \(a\) the s-wave scattering length. We extract the critical temperature from the neural network predictions for the direct-release time-of-flight images across the whole range of the BCS/BEC crossover by taking a piecewise linear fit of the condensate fraction.
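As an illustration, a minimal PyTorch sketch of such a regression network is given below. The channel counts, kernel sizes, the 64×64 input size and the learning rate are our assumptions; only the overall layout (three convolution+pooling stages, three fully-connected layers) and the use of the Adam optimizer are taken from the description above.

```python
import torch
import torch.nn as nn

class CondensateNet(nn.Module):
    """Regress the condensate fraction from a single-channel image."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 8 * 8, 128), nn.ReLU(),
            nn.Linear(128, 32), nn.ReLU(),
            nn.Linear(32, 1),                   # predicted condensate fraction
        )

    def forward(self, x):                       # x: (batch, 1, 64, 64)
        return self.head(self.features(x)).squeeze(-1)

model = CondensateNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
# Training step on (images n_A, labels from bimodal fits to n_RR):
#   loss = loss_fn(model(images), labels)
#   opt.zero_grad(); loss.backward(); opt.step()
```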
Qualitatively, the behaviour of the critical temperature of the superfluid transition across the BCS/BEC crossover can be understood by starting from the extreme regimes: in the weakly-attractive BCS limit, the critical temperature scales \(k_{B}T_{c}\sim E_{F}\exp[-\pi/(2k_{F}|a|)]\)[1; 28]. Here, \(E_{F}\) denotes the Fermi energy. In the opposite regime, far on the BEC side, we encounter a weakly-repulsively interacting gas of bosons. The bosons have twice the mass of the fermions \(M_{B}=2m\) and half the density \(n_{B}=n/2\). The critical temperature of the ideal Bose gas is simply given by \(k_{B}T_{c}^{0}\sim\frac{\hbar^{2}n_{B}^{2/3}}{M_{B}}\). Unlike in the BCS regime, the critical temperature of the Bose gas has a very weak dependence on the interaction strength between the bosons \(T_{c}(a_{B})=T_{c}^{0}\left[1+cn_{B}^{1/3}a_{B}+...\right]\), where \(a_{B}=0.6\,a\)[29] denotes the s-wave scattering length between two bosons, and \(c\) is a positive constant [30; 31]. From this simple argument, we expect an increase of the critical temperature when approaching the crossover from the BEC side and hence a maximum critical temperature somewhere in the crossover regime.
From the previous consideration it is obvious that a careful determination of both density and temperature is very important. In the trapped gas of our experiment, the two quantities are inversely related to each other and, furthermore, also interparticle interactions change the density.
The calibration of density and temperature proceeds in the following way: We take _in-situ_ absorption images of the trapped gas along two orthogonal spatial directions (in order to account for asymmetries of the trapped cloud) for different interaction strengths and temperatures. On these data, we perform an inverse Abel transform to reconstruct the density distribution inside the trap. This serves two purposes: on the one hand, we obtain the center density, which we use for the normalisation of the data; on the other hand, the density distribution \(n_{\sigma}(r)\) feeds into the temperature calibration in the next step. Then, we use the data from the unitary Fermi gas [\(1/(k_{F}a)=0\)] and its critical temperature of \(T_{c}=0.167\,T_{F}\), well known both theoretically [2; 8; 36; 37] and experimentally [32; 33; 34; 35], to precisely reconstruct our trapping potential. To this end, the inverse equation of state of the unitary Fermi gas [35] is applied to the intra-trap density distribution reconstructed from _in-situ_ high-intensity absorption images of the cloud at \(T=T_{c}\). In the final step, we use the obtained knowledge of the trap potential and the measured _in-situ_ density profiles \(n_{\sigma}(r)\) to determine the temperature by fitting a virial expansion of the equation of state to the outermost regions of the trapped cloud, where the gas is not condensed.
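For illustration, a crude numerical inverse Abel transform could look as follows. The grid-based quadrature and the simple treatment of the integrable singularity at \(y=r\) (starting one bin in) are our own shortcuts, not the procedure used for the published analysis.

```python
import numpy as np

def inverse_abel(y, F):
    """Recover n(r) from a line-of-sight-integrated profile F(y) via
    n(r) = -(1/pi) * int_r^inf F'(y) / sqrt(y^2 - r^2) dy.
    y: increasing 1D grid of impact parameters; F: column density on y."""
    dFdy = np.gradient(F, y)
    n = np.zeros_like(F)
    for i, r in enumerate(y[:-1]):
        yy, gg = y[i + 1:], dFdy[i + 1:]     # skip the singular point y = r
        n[i] = -np.trapz(gg / np.sqrt(yy**2 - r**2), yy) / np.pi
    return n
```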
In Figure 2, we show the results of the critical temperature for a homogeneous gas in comparison with theoretical predictions as a function of the interaction parameter \(1/(k_{F}a)\). Since the condensation will initiate at regions of highest density, i.e., at the center of the trap |
2303.01442 | On the fundamental groups of solenoid complements in $\mathbb{S}^3$ | We show that fundamental groups of the complements of knotted solenoids in
$\mathbb{S}^3$ are solely determined by a canonical sequence of knot groups.
Moreover, they are determined by the embedding up to mirror reflection. | Xueming Hui | 2023-03-02T17:59:08Z | http://arxiv.org/abs/2303.01442v1 | # On the fundamental groups of solenoid complements in \(\mathbb{S}^{3}\)
###### Abstract
We show that the fundamental groups of the complements of knotted solenoids in \(\mathbb{S}^{3}\) are solely determined by a canonical sequence of knot groups. Moreover, they are determined by the embedding up to mirror reflection.
**Keywords.** Knots; Solenoids; 3-manifold theory; JSJ-decompositions; fundamental groups; knot subgroups of knot groups.
## 1 Introduction
A knot is by definition an isotopy class of embeddings of \(\mathbb{S}^{1}\) into \(\mathbb{S}^{3}\). The main purpose of knot theory, in the beginning, was to tell knots apart. For example, the (right-handed) trefoil knot is different from the unknot, as shown below. There are many invariants for knots: for example, coloring invariants, racks and quandles, fundamental groups of knot complements, Alexander polynomials, Jones polynomials, HOMFLY polynomials, etc.
In this paper we consider special sequences of knots; more specifically, we consider knot sequences \(\{K_{n}\}_{n=0}^{\infty}\) such that \(K_{n+1}\) is obtained from \(K_{n}\) by a satellite construction. We will restrict to the case of closed braid patterns. In this case, the sequence gives an embedding of a topological object called a solenoid. A _Solenoid_ is a topological space that is the inverse limit of an inverse system of topological groups and continuous homomorphisms
\[(S_{i},f_{i}),\quad f_{i}:S_{i+1}\mapsto S_{i},\quad i\geq 0,\]
where each \(S_{i}\) is a circle and \(f_{i}\) wraps \(S_{i+1}\)\ \(n_{i}\) times around the circle \(S_{i}\), \(n_{i}>1\). In other words, if we regard the \(S_{i}\)'s as the unit circle in the complex plane, then \(f_{i}(z)=z^{n_{i}}\). The solenoid was first introduced by L. Vietoris [9] in the case \(n_{i}=2\), and by D. van Dantzig [8] for \(n_{i}=n\) fixed. The general case where \(n_{i}\) is non-constant was studied by R.H. Bing, who gave a complete classification of the solenoids in [1]. See also M.C. McCord [7]. A 2-solenoid embedded in \(\mathbb{S}^{3}\) is shown below.
Figure 1: The Unknot and the (right-handed) trefoil knot.
Figure 2: A Soleinoid embedded in \(\mathbb{S}^{3}\)(picture is from [3]).
For an embedding of a solenoid \(\Sigma\) in \(\mathbb{S}^{3}\), we can study the complement of the solenoid, that is \(\mathbb{S}^{3}-\Sigma\). As in knot theory, we would like to use the fundamental group of the complement to study different embeddings of solenoids. Previously, B. Jiang, S. Wang, H. Zheng and Q. Zhou [2] studied the embeddings of solenoids in \(\mathbb{S}^{3}\). Our paper concerns the fundamental group, which was considered by G. Conner, M. Meilstrup and Dusan D. Repovs [3]. We extend their results and answer some conjectures.
For simplicity, an isotopy class of tame solenoid embeddings is called a **soleknot**. We will discuss the precise meaning of _tame_ later in next section.
There is a canonical sequence of knot groups and homomorphisms for a given soleknot \(\Sigma\),
\[K_{0}\stackrel{{\varphi_{0}}}{{\longmapsto}}K_{1}\stackrel{{ \varphi_{1}}}{{\longmapsto}}K_{2}\stackrel{{\varphi_{2}}}{{ \longmapsto}}\ldots\stackrel{{\varphi_{n-1}}}{{\longmapsto}}K_{n }\stackrel{{\varphi_{n}}}{{\longmapsto}}\ldots\]
where all the \(\varphi_{n}\)'s are naturally induced by inclusion of knot complements. We call this sequence the **filtration** of the soleknot \(\Sigma\). We have the following which is Theorem 4.8 in section 4.
**Theorem 1.1**.: _Let \(\Sigma\) and \(\Sigma^{\prime}\) be two knotted soleknots, \((K_{n},\varphi_{n})\) and \((K^{\prime}_{n},\psi_{n})\) be the filtrations of \(\pi_{1}(\mathbb{S}^{3}-\Sigma)\) and \(\pi_{1}(\mathbb{S}^{3}-\Sigma^{\prime})\) respectively. Then \(\pi_{1}(\mathbb{S}^{3}-\Sigma)\simeq\pi_{1}(\mathbb{S}^{3}-\Sigma^{\prime})\) if and only if \(K_{n}\simeq K^{\prime}_{n}\) for each \(n\geq 0\)._
We will define the precise meaning of knotted soleknots, filtrations, etc later. An important fact needed for the above theorem is the following(Theorem 3.5 in section 3)
**Theorem 1.2**.: _A satellite knot with a closed braid pattern of winding number greater than \(1\) is prime._
For prime knot, one has the following theorem of C. Gordon and J. Luecke,
**Theorem 1.3**.: _[_6_]_ _If two prime knots have isomorphic groups then they are equivalent up to mirror image._
Together with some results on the knot subgroups of a knot group by F. Gonzalez-Acuna and W. Whitten[5], we are able to prove Theorem 1.1.
C. Gordon and J. Luecke's theorem[6] shows that up to mirror image, fundamental groups distinguish the prime knots. A natural question is: do the fundamental groups of the complements of soleknots tell them apart? The following theorem(Theorem 4.9 in section 4) answers this question in the positive. This proves conjecture 4.4 in [3].
**Theorem 1.4**.: _Let \(\Sigma\) and \(\Sigma^{\prime}\) be two knotted soleknots in \(\mathbb{S}^{3}\), \(\pi_{1}(\mathbb{S}^{3}-\Sigma)\simeq\pi_{1}(\mathbb{S}^{3}-\Sigma^{\prime})\) if and only if they are equivalent or their mirror images are equivalent._
Preliminaries
In this section, we introduce the basics of 3-manifolds and definitions on solenoid embeddings that will be used.
Let \(S\) be a connected compact surface properly embedded in a compact oriented 3-manifold \(M\). A **compressing disk**\(D\) is a disk embedded in \(M\) such that \(D\cap S=\partial D\) and the intersection is transverse. If the curve \(\partial D\) does not bound a disk inside of \(S\), then \(D\) is called a **nontrivial** compressing disk. If \(S\) has a nontrivial compressing disk, then we call \(S\) a **compressible surface** in \(M\). If \(S\) is neither the 2-sphere nor a compressible surface, then we call the surface **incompressible**. Assume \(\chi(S)\leq 0\), we say that \(S\) is **essential** if it is incompressible, \(\partial\)-incompressible, and not \(\partial\)-parallel.
Now we are ready to define the central concept used in this paper called satellite knots. A knot \(K\subset\mathbb{S}^{3}\) is a **satellite** if its complement contains an essential torus. An equivalent and more intuitive definition is the following,
Let \(K_{2}\) be a nontrivial oriented knot in \(\mathbb{S}^{3}\) and \(V\) a closed regular neighborhood of \(K_{2}\). Let \(\tilde{V}\) be an oriented unknotted closed solid torus in \(\mathbb{S}^{3}\) and \(K_{1}\) an oriented knot in the interior of \(\tilde{V}\). A meridional disk of \(\tilde{V}\) will meet \(K_{1}\) in a finite subset. The least number of times a meridional disk of \(\tilde{V}\) must meet \(K_{1}\) is called the **wrapping number** of the pattern. Suppose that the wrapping number of the pattern is greater than zero and let \(h:(\tilde{V},K_{1})\mapsto(V,K)\) be an oriented homeomorphism of pairs. The image of \(K_{1}\) under \(h\), denoted by \(K\), is a knot in \(V\subset\mathbb{S}^{3}\) called a **satellite knot**. The knot \(K_{2}\) is called a **companion knot** of \(K\) and the torus \(\partial V\) is called a **companion torus**. The pair \((K_{1},\tilde{V})\) is called a **pattern** of \(K\).
When the pattern of a satellite knot is a torus knot, we call the satellite knot a **cable knot**. A **torus knot** is a knot that lies on the surface of an unknotted torus in \(\mathbb{S}^{3}\). It's clear that the satellite knot construction is highly non-unique, since there are infinitely many ways to identify the boundary torus of the pattern with the companion torus. We will restrict to the case of untwisted satellite knots, that is, those that send the standard longitude of the boundary torus of the pattern to the standard longitude of the companion torus. The standard longitudes are determined by the embeddings in \(\mathbb{S}^{3}\).
The JSJ-decomposition is used in the proof of the main theorem.
**Theorem 2.1** (JSJ-decomposition).: _Irreducible orientable and boundary irreducible 3-manifolds have a unique (up to isotopy) minimal collection of disjointly embedded incompressible tori such that each component of the 3-manifold obtained by cutting along the tori is either (homotopically) atoroidal or Seifert-fibered._
A 3-manifold \(M\) is **irreducible** if every sphere \(S\) contained in the interior of \(M\) bounds a ball.
All the remaining definitions in this section are from [2]. Some of them are stated differently in [2], but one can show that they are equivalent. These definitions lead to an important concept called the maximal defining sequence of a soleknot.
**Definition 2.2**.: _Let \(N\) be a solid torus and \(\beta\) be a nontrivial closed braid embedded in \(N\). A closed regular neighborhood of \(\beta\) in \(N\) is called a **thick braid** in \(N\)._
This is essentially a braid version of satellite knot construction. Next we give an equivalent definition of solenoid. This definition is more constructive compare to the one we give earlier in the paper. In particular, this definition also defines an embedding of solenoid in either the solid torus or \(\mathbb{S}^{3}\).
**Definition 2.3**.: _Let \(\{N_{n}\}_{n=0}^{\infty}\) be a nested sequence of solid torus such that \(N_{n}\) is embedded in \(N_{n-1}\) as a thick braid for every \(n\geq 1\). If the diameter of the meridian disk of \(N_{n}\) tends to zero uniformly as \(n\) goes to infinity then we call \(\Sigma=\bigcap_{n=0}^{\infty}N_{n}\) a **solenoid**._
_The embedding \(\Sigma\subset N_{0}\) is called a **standard embedding** of \(\Sigma\) in \(N_{0}\)._
We will call \(\{N_{n}\}_{n=1}^{\infty}\) a **defining sequence** of the standard embedding \(\Sigma\subset\mathbb{S}^{3}\). Just like there are tame knots and wild knots. There are tame embeddings and wild embeddings for solenoids as well.
**Definition 2.4**.: _Let \(\Sigma\) be a solenoid embedded in a solid torus \(N_{0}\). The embedding \(\Sigma\subset N_{0}\) is called a **tame** embedding if there is a homeomorphism \(f:(N_{0},\Sigma)\mapsto(N_{0},\Sigma^{\prime})\) for some standard embedding \(\Sigma^{\prime}\subset N_{0}\)._
Call \(\{f^{-1}(N_{n}^{\prime})\}_{n=1}^{\infty}\) a **defining sequence** of the embedding \(\Sigma\subset N_{0}\), where \(\{N_{n}^{\prime}\}_{n=1}^{\infty}\) is a defining sequence of \(\Sigma^{\prime}\).
**Definition 2.5**.: _An embedding \(\Sigma\subset\mathbb{S}^{3}\) of a solenoid is called **tame** if it can be factored as \(\Sigma\subset N_{0}\subset\mathbb{S}^{3}\) in which \(\Sigma\subset N_{0}\) is tame._
The sequence \(\{N_{n}\}_{n=0}^{\infty}\) is called a **defining sequence** of the embedding \(\Sigma\subset\mathbb{S}^{3}\).
**Definition 2.6**.: _A tame embedding of a solenoid \(\Sigma\subset\mathbb{S}^{3}\) with defining sequence \(\{N_{n}\}_{n=0}^{\infty}\) is called **knotted** if some defining solid torus \(N_{n}\subset\mathbb{S}^{3}\) is knotted; otherwise we call the embedding **unknotted**._
For two tame solenoid embeddings in \(\mathbb{S}^{3}\), we can talk about when they are equivalent.
**Definition 2.7**.: _Call two tame solenoids \(\Sigma,\Sigma^{\prime}\subset\mathbb{S}^{3}\)**equivalent** if there is an orientation preserving homeomorphism \(f:\mathbb{S}^{3}\mapsto\mathbb{S}^{3}\) such that \(f(\Sigma)=\Sigma^{\prime}\)._
_An equivalence class of tame solenoid embeddings is called a **soleknot**._
When there is no ambiguity, the image of a tame solenoid embedding will also be called a soleknot.
**Definition 2.8**.: _We say two defining sequences \(\{N_{n}\}_{n=0}^{\infty}\) and \(\{N_{n}^{\prime}\}_{n=0}^{\infty}\) of tame solenoid embeddings in \(\mathbb{S}^{3}\) are **strongly equivalent** if there is an orientation preserving homeomorphism \(f_{0}:(\mathbb{S}^{3},N_{0})\mapsto(\mathbb{S}^{3},N_{0}^{\prime})\) and orientation preserving homeomorphisms \(f_{n}:(N_{n-1},N_{n})\mapsto(N_{n-1}^{\prime},N_{n}^{\prime})\) with \(f_{n}|\partial N_{n}=f_{n-1}|\partial N_{n-1}\) for \(n\geq 1\)._
The following is shown in [2]. It says that \(\Sigma\) and \(\Sigma^{\prime}\) having strongly equivalent defining sequences implies that \(\Sigma\) is equivalent to \(\Sigma^{\prime}\).
**Proposition 2.9**.: _[_2_]_ _Up to strong equivalence, each knotted tame solenoid \(\Sigma\subset\mathbb{S}^{3}\) has a unique maximal defining sequence \(\{N_{n}\},n\geq 0\) such that \(N_{0}\) is knotted and any other defining sequence \(\{N_{n}^{\prime}\},n\geq 0\) with \(N_{0}^{\prime}\) knotted is a subsequence of \(\{N_{n}\},n\geq 0\)._
By a **maximal** defining sequence, we mean \(N_{n}\setminus N_{n+1}\) contains no essential torus for each \(n\geq 0\).
## 3 A satellite knot with braid pattern is prime
In this section, we prove a fact that will be used later, namely that a satellite knot with a closed braid pattern is prime.
Consider a braid \(\beta\) with \(n\) strands. Let \(\hat{\beta}\) be its closure in a solid torus \(W\). Let \(D\) be a meridian disc of \(W\); then \(W-\hat{\beta}\) is the mapping torus \(M_{f}\) of \(D-\cup_{i=1}^{n}\{p_{i}\}\). For every \(i\), \(p_{i}\) is a point in the interior of \(D\), and \(f\) is the mapping class of \(D-\cup_{i=1}^{n}\{p_{i}\}\) determined by \(\beta\). Let \(U_{\hat{\beta}}\) be a tubular neighborhood of \(\hat{\beta}\) in \(W\); then \(U_{\hat{\beta}}\) intersects \(D\) at \(n\) open balls \(B_{1},B_{2},\ldots,B_{n}\). Let \(D_{n}:=D-\cup_{i=1}^{n}B_{i}\). Choose a base point for \(D_{n}\) as in Figure 3.
\(\beta\) acts on \(\pi_{1}(D_{n})\) as an automorphism. See Figure 3. It's easy to see that \(\pi_{1}(D_{n})\) is isomorphic to the free group of rank \(n\) and \(\pi_{1}(W-U_{\hat{\beta}})\simeq\langle\ x_{1},x_{2},\ldots,x_{n},t\ |\ t^{-1}x_{i}t=\beta(x_{i})\ \rangle\) which is the HNN-extension of \(\pi_{1}(D_{n})\) by \(\beta\). In this case, it is actually a semi-direct product.
**Lemma 3.1**.: _Let \(\beta\) be a braid with \(n\) strands such \(\hat{\beta}\) is a knot. Let \(\pi_{1}(W-U_{\hat{\beta}})=\langle\ x_{1},x_{2},\ldots,x_{n},t\ |\ t^{-1}x_{i}t=\beta(x_{i})\ \rangle\) be a presentation of \(\pi_{1}(W-U_{\hat{\beta}})\) as above._
_The centralizer of \(x_{1}\) in \(\pi_{1}(W-U_{\hat{\beta}})\) is \(\{(t^{n}w)^{k}x_{1}^{l}\ |\ k,l\in\mathbb{Z}\}\) where \(w\) is the unique element(which doesn't end with a power of \(x_{1}\)) such that \(\beta^{n}(x_{1})=wx_{1}w^{-1}\)._
Proof.: Let \(y\in\pi_{1}(W-U_{\hat{\beta}})\) be any element that commutes with \(x_{1}\).
\[x_{1}y=yx_{1}\]
\[y^{-1}x_{1}y=x_{1}\]
As an element of \(\pi_{1}(W-U_{\hat{\beta}})\), \(y\) can be uniquely written as \(y=t^{m}z\) for some integer \(m\) and \(z\in\pi_{1}(D_{n})\). Hence,
\[t^{-m}x_{1}t^{m} =zx_{1}z^{-1}\] \[\beta^{m}(x_{1}) =zx_{1}z^{-1}\]
Braid automorphisms map each \(x_{i}\) to a conjugate of some \(x_{j}\). This induces a natural permutation \(\pi\) of \(\{1,2,\ldots,n\}\). The number of cycles in the cycle decomposition of \(\pi\) is the number of components of the closure of the braid. The length of each cycle is the smallest iteration it takes to map \(x_{i}\) to a conjugate of itself. Because the closure \(\hat{\beta}\) of the braid is a knot, it has only one component. So the permutation induced by the braid is an \(n\)-cycle and \(m=kn\) for some integer \(k\). If \(k=1\), then \(\beta^{n}(x_{1})=wx_{1}w^{-1}\) for some \(w\) in the free group generated by \(x_{1},x_{2},\ldots,x_{n}\); \(w\) is unique up to multiplication by \(x_{1}^{l}\) on the right. We claim that for each integer \(k\), \(\beta^{kn}(x_{1})=t^{-kn}(t^{n}w)^{k}x_{1}(t^{n}w)^{-k}t^{kn}\); this implies the only elements that commute with \(x_{1}\) are \((t^{n}w)^{k}x_{1}^{l}\), \(k,l\in\mathbb{Z}\). For \(k=0\), this is obviously true. For \(k>0\), notice that
\[t^{-kn}(t^{n}w)^{k}x_{1}(t^{n}w)^{-k}t^{kn}=\beta^{(k-1)n}(w)\ldots\beta^{n}(w )wx_{1}w^{-1}\beta^{n}(w^{-1})\ldots\beta^{(k-1)n}(w^{-1})\]
For \(k=1\), \(\beta^{n}(x_{1})=wx_{1}w^{-1}\) by the definition of \(w\). We assume the claim is true for \(k=i\). Then
\[\beta^{(i+1)n}(x_{1}) =\beta^{n}(\beta^{in}(x_{1}))\] \[=\beta^{n}(\beta^{(i-1)n}(w)\ldots\beta^{n}(w)wx_{1}w^{-1}\beta^{n }(w^{-1})\ldots\beta^{(i-1)n}(w^{-1}))\] \[=\beta^{in}(w)\ldots\beta^{2n}(w)\beta^{n}(w)\beta^{n}(x_{1}) \beta^{n}(w^{-1})\beta^{2n}(w^{-1})\ldots\beta^{in}(w^{-1})\] \[=\beta^{in}(w)\ldots\beta^{2n}(w)\beta^{n}(w)wx_{1}w^{-1}\beta^{2 n}(w^{-1})\ldots\beta^{in}(w^{-1})\]
Replace \(k\) by \(-k\) gives
\[t^{kn}(t^{n}w)^{-k}x_{1}(t^{n}w)^{k}t^{-kn}=\beta^{-kn}(w^{-1})\ldots\beta^{-2 n}(w^{-1})\beta^{-n}(w^{-1})x_{1}\beta^{-n}(w)\beta^{-2n}(w)\ldots\beta^{-kn}(w)\]
For \(k>0\),
\[\beta^{kn}(x_{1})=\beta^{(k-1)n}(w)\ldots\beta^{2n}(w)\beta^{n}(w)wx_{1}w^{-1}\beta^{n}(w^{-1})\beta^{2n}(w^{-1})\ldots\beta^{(k-1)n}(w^{-1})\]
Apply \(\beta^{-kn}\) on both sides of the equation,
\[x_{1}=\beta^{-n}(w)\ldots\beta^{-(k-1)n}(w)\beta^{-kn}(w)\beta^{-kn}(x_{1})\beta^{-kn}(w^{-1})\beta^{-(k-1)n}(w^{-1})\ldots\beta^{-n}(w^{-1})\]
Rearrange the equation,
\[\beta^{-kn}(x_{1})=\beta^{-kn}(w^{-1})\ldots\beta^{-2n}(w^{-1})\beta^{-n}(w^{ -1})x_{1}\beta^{-n}(w)\beta^{-2n}(w)\ldots\beta^{-kn}(w)\]
This proves the claim in every case and therefore finishes the proof of the lemma.
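The permutation bookkeeping in the proof above is easy to verify by computer. The following sketch is our own (braid words encoded as lists of signed integers, \(\pm i\) for \(\sigma_{i}^{\pm 1}\)): it computes the permutation a braid induces on its strands and the number of components of the closure, which equals the number of cycles.

```python
def braid_permutation(word, n):
    """Permutation induced on the n strands; the sign of a generator does
    not matter here, since sigma_i and its inverse swap the same strands."""
    perm = list(range(n))
    for g in word:
        i = abs(g) - 1
        perm[i], perm[i + 1] = perm[i + 1], perm[i]
    return perm

def closure_components(word, n):
    """Number of components of the closed braid = cycles of the permutation."""
    perm, seen, comps = braid_permutation(word, n), set(), 0
    for s in range(n):
        if s not in seen:
            comps += 1
            while s not in seen:
                seen.add(s)
                s = perm[s]
    return comps

# The trefoil is the closure of sigma_1^3 on two strands: one component.
assert closure_components([1, 1, 1], 2) == 1
```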
**Proposition 3.2**.: _There is a unique maximal subgroup of \(\pi_{1}(W-U_{\hat{\beta}})\) isomorphic to \(\mathbb{Z}\times\mathbb{Z}\) that contains \(x_{1}\). Furthermore, this subgroup is \(i_{*}(\pi_{1}(T))\) for a torus \(T\) in \(W-U_{\hat{\beta}}\) that contains the base point and \(i_{*}(\pi_{1}(T))\) contains \(x_{1}\). Here \(i_{*}\) is the homomorphism induced by inclusion._
Proof.: The uniqueness of such a subgroup is proved in Lemma 3.1. All the elements of this subgroup are \((t^{n}w)^{k}x_{1}^{l}\), \(k,l\in\mathbb{Z}\). This is the subgroup generated by \(x_{1}\) and \(t^{n}w\). The homomorphism onto \(\mathbb{Z}\times\mathbb{Z}\) which maps \(x_{1}\) to \((0,1)\) and \(t^{n}w\) to \((1,0)\) is an isomorphism. Clearly, \((t^{n}w)^{k}x_{1}^{l}\) is mapped to \((k,l)\).
Let \(T\) be a torus in \(W-U_{\hat{\beta}}\) that contains the base point and such that \(i_{*}(\pi_{1}(T))\) contains \(x_{1}\). Clearly such a surface exists: the torus containing the base point and parallel to \(\partial U_{\hat{\beta}}\) is one such surface.
**Lemma 3.3**.: _Let \(M\) be a 3-dimensional submanifold of \(\mathbb{S}^{3}\) with two incompressible boundary component \(T_{1}\) and \(T_{2}\) that are both torus. If \(i_{*}(\pi_{1}(T_{1}))\) is conjugate to \(i_{*}(\pi_{1}(T_{2}))\) in \(\pi_{1}(M)\), then \(T_{1}\) is parallel to \(T_{2}\)._
Proof.: Such a submanifold \(M\) is either the exterior of a 2-component link \(L\) in \(\mathbb{S}^{3}\), or it is a submanifold of a solid torus embedded in \(\mathbb{S}^{3}\). The two cases are not exclusive. If \(M\) is a submanifold of a solid torus embedded in \(\mathbb{S}^{3}\), then since we only care about \(M\), not how it is embedded in \(\mathbb{S}^{3}\), we can construct a new embedding of \(M\) such that it can be viewed as a link complement \(\mathbb{S}^{3}-L\) where one component of \(L\) is the unknot in \(\mathbb{S}^{3}\).
Denote the components of \(L\) by \(K_{1}\) and \(K_{2}\). Then \(T_{1}\) and \(T_{2}\) are the boundary tori corresponding to \(K_{1}\) and \(K_{2}\) respectively. The meridian \(m_{1}\) of \(T_{1}\) can be deformed into \(T_{2}\) since \(i_{*}(\pi_{1}(T_{1}))\) is conjugate to \(i_{*}(\pi_{1}(T_{2}))\). Furthermore, \(m_{1}\) can be deformed into a simple closed curve in \(T_{2}\); this curve is nontrivial in \(i_{*}(\pi_{1}(T_{1}))\), so it is also nontrivial in \(\pi_{1}(M)\) and \(i_{*}(\pi_{1}(T_{2}))\). Therefore it can be identified with the curve \(\gamma=(p,q)\) where \(p\) and \(q\) are coprime. The Dehn filling of \(m_{1}\) makes \(\gamma\) trivial in the exterior of \(K_{2}\). This is impossible if \(K_{2}\) is knotted, since the boundary is \(\pi_{1}\)-injective in this case. So \(K_{2}\) has to be the unknot. In particular, \(m_{1}\) is homotopic to the longitude of \(K_{2}\). Applying the same argument to \(m_{2}\), we find that \(K_{1}\) is also the unknot. Moreover, \(K_{1}\) intersects a disc bounded by \(K_{2}\) at just one point. This implies that \(L\) has to be the Hopf link. Hence, \(M\) is homeomorphic to \(T\times[0,1]\) and \(T_{1}\) is parallel to \(T_{2}\).
**Theorem 3.4**.: _There is no essential torus in \(W-U_{\hat{\beta}}\) such that the meridian of \(U_{\hat{\beta}}\) is homotopic to a curve on it. In particular, there is no swallow-follow torus in \(W-U_{\hat{\beta}}\)._
Proof.: If there is one such essential torus, then there are two different \(\mathbb{Z}\times\mathbb{Z}\) subgroups of \(\pi_{1}(W-U_{\hat{\beta}})\) containing \(x_{1}\), since the boundary torus \(\partial U_{\hat{\beta}}\) is another one. This contradicts Proposition 3.2 and Lemma 3.3. The second part is true because for any swallow-follow torus \(S\) in \(W-U_{\hat{\beta}}\), the meridian of \(U_{\hat{\beta}}\) is homotopic to the meridian of \(S\).
**Theorem 3.5**.: _A satellite knot with a closed braid pattern of winding number greater than \(1\) is prime._
Proof.: Let \(K\) be a satellite knot with companion solid torus \(V\) and pattern \(P\subset W\). Since the pattern \(P\) is a closed braid of winding number greater than \(1\), \(K\) is a proper satellite. We argue by contradiction: assume there is a factorising sphere \(S\) which decomposes \(K\) as a product. We assume that \(K\), \(S\) and \(\partial V\) are in general position.
The argument in the proof of Theorem 4.4.1 in Cromwell [4] can be adapted without any changes. Therefore, \(S\) must lie inside \(V\) bounding a \(3\)-ball \(B\subset V\), and its preimage \(h^{-1}(S)\) in \(W\) decomposes the pattern \(P\) as a product of two nontrivial factors.
Assign the pattern \(P\) an orientation; then \(P\) intersects \(h^{-1}(S)\) in two points, one where \(P\) enters \(B\) and one where it leaves. Let \(U\) be an open tubular neighborhood of \(P\); then \(T:=\partial(B-U)\) is a swallow-follow torus for \(P\). Since \(P\) is a closed braid in \(W\), this contradicts Theorem 3.4. Hence \(h(P\cap B)\) is an unknotted tangle, and so \(K\) must be prime.
The last step may seem intuitively obvious, but the author has not found a satisfying geometric proof. We have another proof using the JSJ-decomposition, but we think the algebraic approach is clearer and more elegant in this case. Theorem 3.5 is probably known but, as far as the author can tell, it has not been written down anywhere in the literature.
## 4 The fundamental groups of soleknots complement in \(\mathbb{S}^{3}\)
It's easy to see that all the higher homotopy groups of the soleknot complements in \(\mathbb{S}^{3}\) are trivial. This is the following.
**Theorem 4.1**.: _Let \(\Sigma\subset\mathbb{S}^{3}\) be a soleknot. Then \(\mathbb{S}^{3}-\Sigma\) is an Eilenberg-MacLane space \(K(G,1)\) where \(G=\pi_{1}(\mathbb{S}^{3}-\Sigma)\)._
Proof.: Let \(n\geq 2\) and let \(f\) be any continuous map from \(\mathbb{S}^{n}\) to \(\mathbb{S}^{3}-\Sigma\). By continuity of \(f\) and compactness of \(\mathbb{S}^{n}\), \(f(\mathbb{S}^{n})\) is contained in a compact subset of \(\mathbb{S}^{3}-\Sigma\). Therefore
there exists a torus \(T\) (one of the defining tori of \(\Sigma\)) in \(\mathbb{S}^{3}-\Sigma\) such that \(f(\mathbb{S}^{n})\) is contained in the compact component of \(\mathbb{S}^{3}-\Sigma-T\). This component is a knot complement. It is a classical theorem of knot theory that knot complements are aspherical. Hence, the restriction of \(f\) to this component is homotopic to a constant map, and therefore \(f\) is homotopic to a constant map in \(\mathbb{S}^{3}-\Sigma\). This shows that \(\pi_{n}(\mathbb{S}^{3}-\Sigma)=0\) for every \(n\geq 2\), so \(\mathbb{S}^{3}-\Sigma\) is an Eilenberg-MacLane space \(K(G,1)\), where \(G=\pi_{1}(\mathbb{S}^{3}-\Sigma)\).
**Corollary 4.2**.: _Every isomorphism between soleknot groups is realized by a homotopy equivalence unique up to homotopy._
**Corollary 4.3**.: _Every automorphism of a soleknot group is induced by a homotopy equivalence unique up to homotopy. Every self homotopy equivalence induces an automorphism up to conjugacy._
Before proving our next theorem, we first introduce some notation and terminology. First, recall that Proposition 2.9 gives a unique maximal defining sequence for a soleknot. For a knotted soleknot \(\Sigma\), this maximal defining sequence gives a canonical sequence of knot exteriors \(M_{n}\), knot groups \(K_{n}\), and homomorphisms,
\[K_{0}\stackrel{{\varphi_{0}}}{{\longmapsto}}K_{1}\stackrel{{ \varphi_{1}}}{{\longmapsto}}K_{2}\stackrel{{\varphi_{2}}}{{ \longmapsto}}\ldots\stackrel{{\varphi_{n-1}}}{{\longmapsto}}K_{n }\stackrel{{\varphi_{n}}}{{\longmapsto}}\ldots\]
where all the \(\varphi_{n}\)'s are naturally induced by inclusions of knot complements. By Lemma 2.1 in [3], for any solid tori \(T_{1}\) and \(T_{2}\) in \(\mathbb{S}^{3}\) with \(T_{1}\subset int(T_{2})\), if \(J\) is the core curve of \(T_{1}\) and \(K\) the meridian curve of \(T_{2}\), then \(lk(J,K)\neq 0\) implies that the homomorphism \(\pi_{1}(\mathbb{S}^{3}-T_{1})\mapsto\pi_{1}(\mathbb{S}^{3}-T_{2})\) induced by inclusion is injective. Clearly, any thick braid of winding number greater than \(1\) satisfies this condition; therefore the \(\varphi_{n}\)'s are all injective. The base point for all the fundamental groups will be a chosen point in \(\mathbb{S}^{3}-N_{0}\). The direct limit of this sequence is the fundamental group of \(\mathbb{S}^{3}-\Sigma\). Call this sequence the **filtration** of \(\pi_{1}(\mathbb{S}^{3}-\Sigma)\). We will also denote the filtration by \((K_{n},\varphi_{n})\).
**Theorem 4.4**.: _Let \(\Sigma\) and \(\Sigma^{\prime}\) be two knotted soleknots in \(\mathbb{S}^{3}\), \(K_{n}\simeq K_{n}^{\prime}\) for each \(n\geq 0\) if and only if they are equivalent or their mirror images are equivalent._
Proof.: We adopt the notations from previous discussions. The 'if' part is a direct consequence of Proposition 2.9.
We next show the 'only if' part: for any \(n>0\), \(K_{n}\) and \(K_{n}^{\prime}\) are knot groups of some prime knots by Theorem 3.5. If \(K_{n}\simeq K_{n}^{\prime}\), by the classical theorem of C. Gordon and J. Luecke [6], \(M_{n}\) is homeomorphic to \(M_{n}^{\prime}\). Moreover, up to mirror
reflection, \(M_{n}\) and \(M_{n}^{\prime}\) are knot complements of the same knot in \(\mathbb{S}^{3}\). So up to mirror reflection, we can identify \(M_{n}\) and \(M_{n}^{\prime}\). The uniqueness of JSJ-decomposition of \(M_{n}\) and \(M_{n}^{\prime}\) implies that the embeddings of \(M_{n-1}\) in \(M_{n}\) and \(M_{n-1}^{\prime}\) in \(M_{n}^{\prime}\) are isotopic (we identified \(M_{n}\) and \(M_{n}^{\prime}\)). This implies that \(\Sigma\) and \(\Sigma^{\prime}\) have strongly equivalent maximal defining sequences. Therefore \(\Sigma\) and \(\Sigma^{\prime}\) are equivalent up to mirror reflection.
The proof of Theorem 4.8 relies on some tools from [5], where F. Gonzalez-Acuna and W. Whitten studied knot subgroups of a knot group. By a knot group we mean a group that is isomorphic to the fundamental group of the complement of a knot in \(\mathbb{S}^{3}\). Not all theorems and definitions from [5] will be reproduced here; we will only quickly go through the ones used in the proof of Theorem 4.8 and explain the ideas as much as we can. The reader is encouraged to consult [5] for technical details. We will not try to explain everything in the original statements of the theorems and definitions.
Let \(M\) be a knot exterior, denote the union of the JSJ-pieces of \(M\) that meet the boundaries of \(M\) by \(\gamma M\). A subgroup \(H\) of \(\pi_{1}(M)\) is **loose** if, for some component \(C\) of \(M-\gamma M\), there is a conjugate of \(H\) contained in \(i_{*}(\pi_{1}(C))\), where \(i:C\mapsto M\) is inclusion. Otherwise, \(H\) is **tight**.
Let \(G\) be a knot group, and let \(H<G\). Then \(H\) is a **companion** of \(G\), if there is a knot complement \(E\) containing an essential torus \(T\) and if there is an isomorphism \(\phi:G\mapsto\pi_{1}(E)\) sending \(H\) onto \(i_{*}(\pi_{1}(E_{1}))\) where \(E_{1}\) is the component of \(cl(E-T)\) that is a knot complement and \(i:E_{1}\mapsto E\) is inclusion.
**Remark 4.5**.: _[_5_]_ If \(E\) is the complement of a prime knot \(K\), then the complements \(E_{1},\ldots,E_{r}\) of the companions of \(K\)(in the sense of satellite knots) are naturally embedded in \(E\), and \(\pi_{1}(E_{1}),\ldots,\pi_{1}(E_{r})\) are up to conjugacy all the loose companions of \(\pi_{1}(E)\).
**Theorem 4.6**.: _[_5_]_ _Any noncyclic knot-subgroup of a knot group \(G\) is a tight subgroup of \(G\) or of a loose companion of \(G\)._
Theorem 4.6 reduces the problem of finding the knot subgroups of a knot group to that of finding the tight subgroups of the knot group and its loose companions. We have a good understanding of loose companions especially in the case of prime knots by remark 4.5. The following theorem classifies all the tight subgroups of a prime satellite knot.
**Theorem 4.7**.: _[_5_]_ _Let \(G\) be the group of a prime satellite knot \(K\), and let \(G_{1}\) be the group of a nontrivial knot \(K_{1}\)(If \(K_{1}\) is a cable knot, assume that \(G_{1}\not\cong G\) ). Then \(G_{1}\) properly embeds in \(G\) as a tight subgroup if and only if there are integers \(s,t,p,d,\epsilon\), and \(\delta\) such that_
1. \(K\) _is the_ \((s,t)-\)_cable of the_ \((p,q)-\)_torus knot;_
2. \(pq-\frac{s}{t}=-\frac{\epsilon}{d}+\frac{\delta zw}{dt}\)_,_ \(d>1\)_,_ \(|\epsilon|=1\)_, and_ \(|\delta|\leq 1\)_, where_ \(z=(t,d)>1\) _and_ \(w=(s,dpq+\epsilon)\)_; and_
3. _if_ \(|\delta|=1\)_, then_ \(K_{1}\) _is the_ \((sw^{-1},\epsilon\delta z)-\)_cable of the_ \((p,q)-\)_torus knot, and if_ \(\delta=0\)_, then_ \(K_{1}\) _is a composite knot every prime factor of which is a_ \((p,q)-\)_torus knot or its mirror image._
The technical details in this theorem is not important, the key point here is that \(G_{1}\) properly embeds in \(G\) as a tight subgroup if and only if \(K\) is a cable knot of a torus knot. Theorem 3.5 proves that this is all we need when the pattern is a closed braid which is how we construct solenoids.
**Theorem 4.8**.: _Let \(\Sigma\) and \(\Sigma^{\prime}\) be two knotted soleknots, \((K_{n},\varphi_{n})\) and \((K^{\prime}_{n},\psi_{n})\) be the filtrations of \(\pi_{1}(\mathbb{S}^{3}-\Sigma)\) and \(\pi_{1}(\mathbb{S}^{3}-\Sigma^{\prime})\) respectively. Then \(\pi_{1}(\mathbb{S}^{3}-\Sigma)\simeq\pi_{1}(\mathbb{S}^{3}-\Sigma^{\prime})\) if and only if \(K_{n}\simeq K^{\prime}_{n}\) for each \(n\geq 0\)._
Proof.: We have proved in Theorem 4.4 that \(K_{n}\simeq K^{\prime}_{n}\) for every \(n\) implies that the soleknots \(\Sigma\) and \(\Sigma^{\prime}\) are equivalent. Therefore the soleknot complements \(\mathbb{S}^{3}-\Sigma\) and \(\mathbb{S}^{3}-\Sigma^{\prime}\) are homeomorphic. Hence, \(\pi_{1}(\mathbb{S}^{3}-\Sigma)\simeq\pi_{1}(\mathbb{S}^{3}-\Sigma^{\prime})\). This proves one direction.
Now let \(F:\pi_{1}(\mathbb{S}^{3}-\Sigma)\mapsto\pi_{1}(\mathbb{S}^{3}-\Sigma^{\prime})\) be an isomorphism. Since all the \(\psi_{n}\)'s are injective, the canonical map \(K^{\prime}_{n}\mapsto\pi_{1}(\mathbb{S}^{3}-\Sigma^{\prime})\) is injective and the \(K^{\prime}_{n}\) can be regarded as subgroups of \(\pi_{1}(\mathbb{S}^{3}-\Sigma^{\prime})\). By the classical theorem of C. Gordon and J. Luecke, up to mirror image, fundamental groups distinguish the prime knots. So the groups \(\{K_{n}\}_{n=1}^{\infty}\) are pairwise non-isomorphic, since by Theorem 3.5 the corresponding knots are all prime. For each \(i\geq 1\), \(K_{i}\) is finitely generated, so \(F(K_{i})\) is a finitely generated subgroup of \(\pi_{1}(\mathbb{S}^{3}-\Sigma^{\prime})\). So there exists a smallest natural number \(n_{i}\) such that \(F(K_{i})<g^{-1}K^{\prime}_{n_{i}}g\) for some \(g\in\pi_{1}(\mathbb{S}^{3}-\Sigma^{\prime})\). By the results from [5], we know that any knot group can have only finitely many knot subgroups up to isomorphism. So we can choose an \(i>0\) such that \(n_{i}\) is greater than \(1\).
By Theorem 4.6, \(F(K_{i})\) is a tight subgroup of \(g^{-1}K^{\prime}_{n_{i}}g\) or of a loose companion of \(g^{-1}K^{\prime}_{n_{i}}g\). Every loose companion of \(g^{-1}K^{\prime}_{n_{i}}g\) will give rise to a knot complement of a companion knot of the knot \(L\) corresponding to \(M^{\prime}_{n_{i}}\) (this means \(M^{\prime}_{n_{i}}\) is the knot complement of \(L\)). Its boundary must be isotopic to one of the tori in the JSJ-decomposition of \(M^{\prime}_{n_{i}}\). The JSJ-decomposition of \(M^{\prime}_{n_{i}}\) is a subset of \(\{M^{\prime}_{k}\}_{k=0}^{n_{i}-1}\) and the JSJ-decomposition of \(M^{\prime}_{0}\). Therefore \(F(K_{i})\) is not a tight subgroup of a loose companion of \(g^{-1}K^{\prime}_{n_{i}}g\), since \(F(K_{i})\) is not contained in any conjugate of \(g^{-1}K^{\prime}_{n_{i}-1}g\) by the choice of \(n_{i}\). So \(F(K_{i})\) is a tight subgroup of \(g^{-1}K^{\prime}_{n_{i}}g\).
By uniqueness of JSJ-decomposition, \(M^{\prime}_{n_{i}}\) is the knot complement of a cable knot of a torus knot if and only if \(n_{i}=1\). So it is not the complement of the cable of a torus knot since \(n_{i}>1\). By Theorem 4.7, the group of a nontrivial knot properly embeds in the group of a prime satellite knot \(K\) as tight subgroup if and only if \(K\) is a cable of the some torus knot. This implies that \(F(K_{i})\) does not properly embed in \(g^{-1}K^{\prime}_{n_{i}}g\), therefore \(F(K_{i})=g^{-1}K^{\prime}_{n_{i}}g\). \(F(K_{i})=g^{-1}K^{\prime}_{n_{i}}g\) implies that \(K_{i}\) and \(K^{\prime}_{n_{i}}\) correspond to the same prime knot up to mirror image. So \(M_{i}\) is homeomorphic to \(M^{\prime}_{n_{i}}\). By the uniqueness of JSJ-decomposition of \(M_{i}\) and \(M^{\prime}_{n_{i}}\), \(i=n_{i}\) and \(K_{n}\simeq K^{\prime}_{n}\) for every \(n\leq i\). The same argument works for all \(n\geq i\), hence \(K_{n}\simeq K^{\prime}_{n}\) for all \(n\geq 0\).
Combining Theorem 4.4 and Theorem 4.8, we have the following.
**Theorem 4.9**.: _Let \(\Sigma\) and \(\Sigma^{\prime}\) be two knotted soleknots in \(\mathbb{S}^{3}\), \(\pi_{1}(\mathbb{S}^{3}-\Sigma)\simeq\pi_{1}(\mathbb{S}^{3}-\Sigma^{\prime})\) if and only if there is a homeomorphism \(f\) of \(S^{3}\) such that \(f(\Sigma)=\Sigma^{\prime}\); in other words, they are equivalent or they are mirror images of each other._
Theorem 4.9 improves Theorem 5.4 in [3], where it is shown, using hyperbolic structures, that there exist uncountably many inequivalent knotted soleknot complements. Notice that by [3] and [2], the fundamental group of the complement of an unknotted soleknot only determines the solenoid as a topological space, not the tame embedding (or soleknot, as we call it here). In fact, there are uncountably many different unknotted soleknots whose complements have isomorphic fundamental groups.
## Acknowledgments
The author would like to thank Gregory Conner for introducing this problem to him and conversations on the topic, Mark Hughes and Jessica Purcell for interests and comments on the paper.
|
2303.16658 | SPARC HSBs, and LSBs, the surface density of dark matter haloes, and
MOND | In this paper, we use SPARC's HSBs, and LSBs galaxies to verify two issues.
The first one is related to one claim of \citep{Donato} D09, namely: is the DM
surface density (DMsd) a constant universal quantity, equal to $\log{(\rm
\Sigma/M_\odot pc^{-2})}=2.15 \pm 0.2$, or does it depend on the baryon surface
density of the system? The second one, is based on a MOND prediction that for
HSBs the DMsd is constant, and equal to $\log{(\rm \Sigma/M_\odot
pc^{-2})}=2.14$, while for LSBs the surface density is not constant and takes
values that are smaller than for HSBs and D09 prediction \citep{Milgrom2009}.
We find that HSBs shows a constant DMsd vs magnitude as in D09, and a constant
DMsd vs $\Sigma_{\rm eff}$ as in MOND prediction, for HSBs with $\Sigma_{\rm
eff}>200 L_\odot/pc^2$, and $\Sigma_{\rm eff}>300 L_\odot/pc^2$. However, the
value of the DMsd is larger, $\Sigma \simeq 2.61$ (in the case of the
DMsd-magnitude with $\Sigma_{\rm eff}>300 L_\odot/pc^2$), and $\Sigma \simeq
2.54$ (in the case of the surface DMsd-surface brightness with $\Sigma_{\rm
eff}>200 L_\odot/pc^2$). This value slightly depends on the threshold to
determine whether a galaxy is HSB. In the case of LSBs, for $\Sigma_{\rm
eff}<100 L_\odot/pc^2$, and $\Sigma_{\rm eff}<25 L_\odot/pc^2$, the surface
density vs magnitude, for lower magnitudes, is approximately equal to that
predicted by D09, but several galaxies, for magnitude $M>-17$, have smaller
values than those predicted by D09. The DMsd vs $\Sigma_{\rm eff}$ shows a
similar behavior in qualitative, but not quantitative, agreement with MOND
predictions. In summary, in the case of HSBs both D09 and MOND are in
qualitative, but not quantitative, agreement with the data. In the case of LSBs
D09 is mainly in disagreement with the data, and MOND only in qualitative
agreement with them. | Antonino Del Popolo | 2023-03-29T13:06:08Z | http://arxiv.org/abs/2303.16658v1 | # SPARC HSBs, and LSBs, the surface density of dark matter haloes, and MOND.
###### Abstract
In this paper, we use SPARC's HSB and LSB galaxies to verify two issues. The first one is related to one claim of [1] (D09), namely: is the DM surface density (DMsd) a constant universal quantity, equal to \(\log\left(\Sigma/{\rm M}_{\odot}{\rm pc}^{-2}\right)=2.15\pm 0.2\), or does it depend on the baryon surface density of the system? The second one is based on a MOND prediction that for HSBs the DMsd is constant and equal to \(\log\left(\Sigma/{\rm M}_{\odot}{\rm pc}^{-2}\right)=2.14\), while for LSBs the surface density is not constant and takes values that are smaller than for HSBs and than the D09 prediction [2]. We find that HSBs show a constant DMsd vs magnitude as in D09, and a constant DMsd vs \(\Sigma_{\rm eff}\) as in the MOND prediction, for HSBs with \(\Sigma_{\rm eff}>200L_{\odot}/pc^{2}\) and \(\Sigma_{\rm eff}>300L_{\odot}/pc^{2}\). However, the value of the DMsd is larger: \(\Sigma\simeq 2.61\) (in the case of the DMsd-magnitude relation with \(\Sigma_{\rm eff}>300L_{\odot}/pc^{2}\)), and \(\Sigma\simeq 2.54\) (in the case of the DMsd-surface brightness relation with \(\Sigma_{\rm eff}>200L_{\odot}/pc^{2}\)). This value slightly depends on the threshold used to determine whether a galaxy is an HSB. In the case of LSBs, for \(\Sigma_{\rm eff}<100L_{\odot}/pc^{2}\) and \(\Sigma_{\rm eff}<25L_{\odot}/pc^{2}\), the surface density vs magnitude, for lower magnitudes, is approximately equal to that predicted by D09, but several galaxies, for magnitude \(M>-17\), have smaller values than those predicted by D09. The DMsd vs \(\Sigma_{\rm eff}\) shows a similar behavior, in qualitative, but not quantitative, agreement with MOND predictions. In summary, in the case of HSBs both D09 and MOND are in qualitative, but not quantitative, agreement with the data. In the case of LSBs, D09 is mainly in disagreement with the data, and MOND only in qualitative
agreement with them.
keywords: Galaxies; Alternative theory of gravity; galaxies surface density
## 1 Introduction
In spite of the fact that the \(\Lambda\)CDM model is very good at describing the observations on intermediate scales [3; 4; 5; 6; 7; 8; 9], and on cosmological scales [6; 7; 8], it shows several drawbacks. To start with, we recall the cosmological constant problem [10; 11], and the cosmic coincidence problem [12]. Apart from these problems, several others are present. On large scales there is the unsolved tension in the current value of the Hubble parameter, \(H_{0}\), which takes different values when measured with different methods, for example using the CMB, or supernovae and stars in the relatively recent universe [13]. Moreover, there is another tension between the Planck 2015 data and the \(\sigma_{8}\) growth rate of perturbations [14], and with the CFHTLenS weak lensing data [15]. The large-angle fluctuations in the CMB present a power hemispherical asymmetry [16; 17; 18; 19; 20; 21], a quadrupole-octupole alignment [22; 23; 24; 25; 26], and a cold spot [27; 28; 29].
Other problems are present on smaller scales (\(\simeq 1-10\) kpc). Those often recalled are the "Cusp/Core" problem, namely the discrepancy between the cuspy profiles obtained in dark matter (DM) only N-body simulations ([30; 31; 32]) and the observations of dwarf spirals, dwarf spheroidals (dSphs), and Low Surface Brightness (LSBs) galaxies showing cored profiles ([33; 34; 35; 36; 37; 38; 39; 40; 41]). The "missing satellite problem" consists of the discrepancy between the number of satellites or subhaloes predicted by DM only N-body simulations ([42; 43]) and the number of subhaloes really observed. Moreover, the subhaloes obtained in simulations are too dense with respect to those observed around the Milky Way [44; 45]. The latter is dubbed the "too-big-to-fail" problem. Finally, we cite the issue of the location on planes of the satellite galaxies of the Milky Way and M31 [46], which is difficult to explain. This is dubbed the "satellite planes problem", but recently [47] showed a solution. The quoted problems have been attacked from different fronts. Some authors proposed to modify
the power spectrum (e.g. [48]), others proposed to modify either the particles constituting DM ([49; 50; 51; 52]), or the theory of gravity ([53; 54; 55; 56; 57]). Apart from these drastic solutions, other, astrophysical, solutions have been proposed. They are based on the role of baryons in "heating" DM. A well-known mechanism is that related to supernovae feedback ([58; 59; 60; 61; 62]), and another one is related to the transfer of energy and angular momentum from baryons to DM through dynamical friction ([63; 64; 65; 38; 66; 40; 67]).
In this context, scaling relations are very helpful to understand complex phenomena. [68], by fitting the rotation curves of 55 galaxies with a pseudo-isothermal profile, obtained some relations among the DM halo parameters.
They introduced the quantity \(\Sigma=\mu_{0D}=\rho_{0}r_{0}\)1, which behaves as a DMsd. According to their studies, for late-type galaxies, \(\Sigma=\mu_{0D}\simeq 100M_{\odot}/pc^{2}\) independently of galaxy luminosity. This result has been studied and verified by several authors. [1] (D09), in agreement with [68], found again a quasi-universality of \(\Sigma\), by analysing a set of 1000 galaxies (spirals, dwarfs, ellipticals, etc.). Promptly, [2] showed that in MOND \(\Sigma\), in the Newtonian regime, has a behavior similar to that described by D09, but noticed that in the case of galaxies having a low surface density \(\Sigma\) has a smaller value. Recently, [69] extended the work of [2] to spiral galaxies, showing that for high baryon surface density \(\Sigma\) is constant, in agreement with [2] and D09, while for low baryon surface density \(\Sigma\) decreases.
Footnote 1: \(r_{0}\) is the core radius of the pseudo-isothermal profile, and \(\rho_{0}\) its central density
From the literature it is known that the Burkert profile gives good fits to LSB and dwarf rotation curves ([70; 71; 38]). In the case of giant galaxies and ellipticals, the previous conclusion is not valid ([72; 73; 74]).
Several authors obtained results in disagreement with those of D09.
By means of a much larger sample than that of D09, [75] showed that the dark matter column density systematically increases with the mass of the halo. [76] showed that the column density and the Newtonian acceleration correlate with different quantities, in agreement with [75], and in disagreement with the D09 results.
In early-type galaxies, [77] did not find the existence of a universal DMsd.
[78], in agreement with [77], [76], and [75], found signs of a correlation of DMsd with \(M_{200}\).
[79] found a correlation between the Newtonian acceleration and the virial mass \(M_{\rm vir}\).
Several correlations between the surface density and a series of quantities are shown by [67], similar to what was found by [80]. In the last paper the DMsd was obtained by fitting the rotation curves of the galaxies with a Burkert profile, and using the Markov Chain Monte Carlo (MCMC) method to infer the values of the parameters.
In this paper, using SPARC's HSBs and LSBs, we want to study the behavior of the DMsd, to see whether it is constant, as claimed by D09, or depends on the baryon surface density, as predicted by MOND. To this aim, we study the DMsd vs the magnitude and the baryon surface density in HSBs and LSBs. HSBs show a constant DMsd, while for LSBs the behavior depends on the baryon surface density.
The paper is organized as follows. In Sect. 2, we introduce the SPARC sample and the analysis that was performed in a previous paper ([80]) to obtain the DMsd. In Sect. 3, we present the results, and Sect. 4 is devoted to the discussion.
## 2 SPARC data set, and data analysis: a summary
In this section, we give a summary of the SPARC data used, and of the analysis of the data performed by [80]. The SPARC (\(Spitzer\) Photometry and Accurate Rotation Curves) dataset [81] consists of 175 late-type galaxies with high-quality rotation curves obtained from \(\rm HI/H\alpha\) studies, and with surface photometry at 3.6 \(\rm\mu m\), giving the mass-to-light ratio \(\Upsilon_{*}\) conversion factor. The gas mass is provided by 21 cm observations. Almost all SPARC galaxies have a disc structure, with some also having bulges. The baryon component is constituted by the disc, bulge, and gas components. The morphologies present in SPARC go from S0 to Irr. [80] obtained the DM surface density by fitting the rotation curves with a Burkert profile
\[\rho(r)=\frac{\rho_{0}r_{0}^{3}}{(r+r_{0})(r^{2}+r_{0}^{2})}, \tag{1}\]
where \(r_{0}\), and \(\rho_{0}\) represent the scale radius and the central density of the halo, respectively.
The total rotational velocity is given by
\[V_{\rm tot}^{2}=V_{\rm DM}^{2}+V_{\rm bar}^{2}=V_{\rm DM}^{2}+ \Upsilon_{\rm d}V_{\rm disc}^{2}+\Upsilon_{\rm b}V_{\rm bulge}^{2}+V_{\rm gas}^ {2}, \tag{2}\]
where \(V_{\rm disc},~{}V_{\rm bulge},~{}V_{\rm gas}\) are the velocities of the baryonic component, and \(V_{\rm DM}\) is that of the dark matter component, given by
\[\frac{V_{\rm DM}^{2}}{V_{200}^{2}}=\frac{C_{200}}{x}\frac{\ln{(1+x )}+\frac{1}{2}\ln{(1+x^{2})}-\arctan{x}}{\ln{(1+C_{200})}+\frac{1}{2}\ln{(1+C_ {200}^{2})}-\arctan{C_{200}}}, \tag{3}\]
and the concentration \(C_{200}\) and the rotation velocity \(V_{200}\) at the virial radius \(r_{200}\) are given by
\[C_{200}=r_{200}/r_{0},~{}~{}V_{200}=10C_{200}r_{0}H_{0}, \tag{4}\]
where \(H_{0}\) is the Hubble constant (chosen to be \(73~{}{\rm Km\,s^{-1}\,Mpc^{-1}}\)). The mass-to-light ratios for the disc and bulge components are given by \(\Upsilon_{\rm d}\) and \(\Upsilon_{\rm b}\), respectively. An important point to recall is that the galaxy distance and the disc inclination affect the stellar components and the total observed rotational velocities, respectively. Then, if the galaxy distance is changed from its value \(D\) to \(D^{\prime}=D\delta_{\rm D}\)2, the radius is also affected and becomes \(R^{\prime}=R\delta_{\rm D}\), as is the baryonic component velocity, which becomes \(V_{\rm k}^{\prime}=V_{\rm k}\sqrt{\delta_{\rm D}}\)3.
Footnote 2: \(\delta_{\rm D}\) is a dimensionless distance factor
Footnote 3: -k’ denotes disc, bulge, or gas
A change in disk inclination from \(i\) to \(i^{\prime}=i\delta_{\rm i}\)4, produces a change in the observed rotation curves and their uncertainties, as
Footnote 4: \(\delta_{\rm i}\) is a dimensionless inclination factor
\[V_{\rm obs}^{\prime}=V_{\rm obs}\frac{\sin(i)}{\sin{(i^{\prime} )}},~{}~{}~{}\delta V_{\rm obs}^{\prime}=\delta V_{\rm obs}\frac{\sin(i)}{ \sin{(i^{\prime})}}. \tag{5}\]
The observed rotation curve is fitted with a theoretical curve depending on several parameters: \(V_{200}\), \(C_{200}\), \(\Upsilon_{\rm d}\), \(\Upsilon_{\rm b}\), \(\delta_{\rm D}\) and \(\delta_{\rm i}\). The calculation is performed through a Bayesian analysis. The posterior probability of the parameter space is given by
\[P(V_{200},C_{200},\Upsilon_{\rm d},\Upsilon_{\rm b},\delta_{\rm D },\delta_{\rm i}|{\rm SPARC})={\cal L}(V_{200},C_{200},\] \[\Upsilon_{\rm d},\Upsilon_{\rm b},\delta_{\rm D},\delta_{\rm i}|{ \rm SPARC})P(V_{200},C_{200},\Upsilon_{\rm d},\Upsilon_{\rm b},\delta_{\rm D },\delta_{\rm i}), \tag{6}\]
where the likelihood is obtained through \(\mathcal{L}\sim e^{-\chi^{2}/2}\), and where \(\chi^{2}\) is given by
\[\chi^{2}=\sum_{k=1}^{N}\left(\frac{V_{\rm tot}(R_{\rm k}^{\prime};V_{200},C_{200},\Upsilon_{\rm d},\Upsilon_{\rm b},\delta_{\rm D})-V_{\rm obs,k}^{\prime}}{ \delta V_{\rm obs,k}^{{}^{\prime}}}\right)^{2}, \tag{7}\]
In the previous relation, \(N\) is the number of data points for each galaxy, and \(V_{\rm obs,k}^{\prime}\) and \(\delta V_{\rm obs,k}^{\prime}\) are the observed rotation curve and its uncertainty at the radius \(R_{\rm k}\). The total rotation velocity \(V_{\rm tot}\) at the radius \(R_{k}^{\prime}\) depends on the galactic parameters \(\{\Upsilon_{\rm d},\Upsilon_{\rm b},\delta_{\rm D}\}\), and on the halo parameters \(\{V_{200},C_{200}\}\). Since the prior probabilities of the parameters are uncorrelated, the joint prior is given by
\[P(V_{200},C_{200},\Upsilon_{\rm d},\Upsilon_{\rm b},\delta_{\rm D},\delta_{ \rm i})=P(V_{200})P(C_{200})P(\Upsilon_{\rm d})P(\Upsilon_{\rm b})P(\delta_{ \rm D})P(\delta_{\rm i}). \tag{8}\]
The priors on the galactic parameters are set as in [82]: on \(\delta_{\rm D}\) and \(\delta_{\rm i}\), Gaussian priors around 1 are imposed, with standard deviations given by the observational relative errors. On \(\Upsilon_{*}\), a log-normal prior is used around the fiducial values \(\Upsilon_{\rm d}=0.5\) and \(\Upsilon_{\rm b}=0.7\), with a standard deviation of 0.1 dex. In the case of the halo parameters, a flat prior is used, with \(10<V_{200}<500~{}{\rm km\,s^{-1}}\) and \(1<C_{200}<100\). Maximizing the posterior probability, one gets the best-fitting values. The values of \(\rho_{0}r_{0}\), obtained with the previous method by [80], are in the second column of Table 1, while the values of the luminosity, and the effective surface brightness (taken from SPARC's webpage [http://astroweb.cwru.edu/SPARC/SPARC_Lelli2016c.mrt](http://astroweb.cwru.edu/SPARC/SPARC_Lelli2016c.mrt)) can be found in columns 3 and 4 of Table 1, respectively.
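For concreteness, the following Python sketch assembles the rotation-curve model of Eqs. (2)-(4), the rescalings of Eq. (5), and the log-posterior of Eqs. (6)-(8). It is a minimal illustration written for this summary: the function names, the unit conventions, and the simplified rescaling of the baryonic amplitudes are our assumptions, not a reproduction of the actual pipeline of [80], and the result is meant to be passed to any MCMC sampler.

```python
import numpy as np

H0 = 73.0  # Hubble constant in km/s/Mpc, as adopted in the text

def g(t):
    """Shape function entering the Burkert velocity of Eq. (3)."""
    return np.log(1.0 + t) + 0.5 * np.log(1.0 + t**2) - np.arctan(t)

def v_dm(R, V200, C200):
    """Dark-matter rotation velocity, Eq. (3); R in kpc, V200 in km/s."""
    r200 = V200 / (10.0 * H0) * 1.0e3      # kpc, since V200 = 10 H0 r200
    x = C200 * R / r200                    # x = R / r0, with r0 = r200 / C200
    return V200 * np.sqrt((C200 / x) * g(x) / g(C200))

def v_tot(R, V200, C200, Yd, Yb, Vdisc, Vbulge, Vgas):
    """Total rotation velocity, Eq. (2)."""
    return np.sqrt(v_dm(R, V200, C200)**2
                   + Yd * Vdisc**2 + Yb * Vbulge**2 + Vgas**2)

def log_posterior(theta, R, Vobs, dVobs, inc, Vdisc, Vbulge, Vgas,
                  dD_err, di_err):
    """Log of Eq. (6): the priors described in the text plus -chi^2/2, Eq. (7)."""
    V200, C200, Yd, Yb, dD, di = theta
    if not (10 < V200 < 500 and 1 < C200 < 100 and Yd > 0 and Yb > 0):
        return -np.inf                                 # flat halo priors
    lp  = -0.5 * ((dD - 1.0) / dD_err)**2              # Gaussian on delta_D
    lp += -0.5 * ((di - 1.0) / di_err)**2              # Gaussian on delta_i
    lp += -0.5 * (np.log10(Yd / 0.5) / 0.1)**2         # 0.1 dex around 0.5
    lp += -0.5 * (np.log10(Yb / 0.7) / 0.1)**2         # 0.1 dex around 0.7
    # Distance rescaling of radii and baryonic amplitudes, and Eq. (5)
    # inclination rescaling of the observed curve and its uncertainty
    s = np.sin(np.radians(inc)) / np.sin(np.radians(inc * di))
    Vmod = v_tot(R * dD, V200, C200, Yd, Yb,
                 Vdisc * np.sqrt(dD), Vbulge * np.sqrt(dD),
                 Vgas * np.sqrt(dD))
    chi2 = np.sum(((Vmod - Vobs * s) / (dVobs * s))**2)
    return lp - 0.5 * chi2
```

Maximizing (or sampling) `log_posterior` over \((V_{200},C_{200},\Upsilon_{\rm d},\Upsilon_{\rm b},\delta_{\rm D},\delta_{\rm i})\) then yields the best-fitting halo parameters, from which \(\rho_{0}r_{0}\) follows.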
As already reported, by means of the Bayesian method, [80] fitted the 175 galaxy rotation curves of the SPARC sample. The fit to three representative galaxies is shown in Fig. 1 of [80]. Burkert's profile gives a good fit to the galaxies studied. All the galaxies studied, except five, have a reduced \(\chi^{2}<10\), and the best-fitting values and the reduced \(\chi^{2}\) for the SPARC sample are listed in Table 1 of [80].
## 3 Results
As discussed in the abstract, in this paper we have two goals. The first one is related to one claim of D09, namely that the DMsd is a constant universal quantity, equal to \(\log{(\Sigma/{\rm M}_{\odot}{\rm pc}^{-2})}=2.15\pm 0.2\). The second one is a MOND prediction, namely that
for HSBs the DMsd is constant, and equal to \(\log\left(\Sigma/{\rm M_{\odot}pc^{-2}}\right)=2.14\), while for LSBs the DMsd is not constant and has smaller values than for HSBs. At first order the DMsd behaves as \(\Sigma\simeq\sqrt{\Sigma_{b}\Sigma_{M}}\). In order to answer these two questions, we used the SPARC (\(Spitzer\) Photometry and Accurate Rotation Curves) sample [81], constituted by 175 late-type galaxies with high-quality rotation curves. We divided the sample into HSBs and LSBs. In order to distinguish between these two classes of objects, we recall that the division between LSBs and HSBs is at 23 \({\rm mag/arcsec^{2}}\) in the B band. Recalling that in SPARC we have photometry at 3.6 \(\mu{\rm m}\), an estimate of the threshold between LSBs and HSBs can be reasonably put at the effective surface brightness \(\Sigma_{\rm eff}=200L_{\odot}/pc^{2}\) (or 100 \(M_{\odot}/pc^{2}\) for \(M/L=0.5\))5.
Figure 1: Upper panel: the DMsd in terms of the magnitude, \(M\), for HSBs. The data points with error-bars represent the HSBs with surface brightness larger than 300 \({\rm L_{\odot}/pc^{2}}\). The dashed line is the D09 value, and the solid line the fit to the data. Lower panel: same as the upper panel but for HSBs with surface brightness larger than 200 \({\rm L_{\odot}/pc^{2}}\).
Footnote 5: Notice that the surface brightness distribution in SPARC is continuous even if the definition has a certain degree of arbitrariness
In the paper, we consider several thresholds. For the HSBs, we use values of the effective surface brightness \(\Sigma_{\rm eff}>200L_{\odot}/pc^{2}\), and \(>300L_{\odot}/pc^{2}\); for the LSBs, values of \(\Sigma_{\rm eff}<100L_{\odot}/pc^{2}\), and \(\Sigma_{\rm eff}<25L_{\odot}/pc^{2}\). For a given value of \(M/L\), the quoted thresholds correspond to thresholds in mass surface density \({\rm M_{\odot}}/{\rm pc^{2}}\). The results are plotted in Figs. 1-4.
In Fig. 1, we plot the DMsd in terms of the magnitude, \(M\). The DMsd of the HSBs is represented by the data points with error-bars. The data points, surface densities of the galaxies, and errors are obtained from [80] (Table A.1). In the upper panel we used the galaxies (HSBs) with surface brightness \(>300\rm{L_{\odot}/pc^{2}}\). The lower panel represents the same quantities as the upper panel, but in this case for HSBs with surface brightness larger than 200 \(\rm{L_{\odot}/pc^{2}}\). In both panels, the dashed line is the D09 value, and the solid line the fit to the data. Both plots give the same information: the DMsd is constant, independent of the magnitude. This is qualitatively in agreement with D09, but the value of the constant DMsd is not \(\rm{log}\,(\Sigma/M_{\odot}pc^{-2})=2.15\pm 0.2\), but larger (\(\rm{log}\,(\Sigma/M_{\odot}pc^{-2})=2.61\), for the upper panel). Another difference between the D09 result and ours is that D09 claim that their result is valid for all kinds of galaxies: LSBs and HSBs. In reality, their conclusion is based on the analysis of dwarf spheroidal satellites of the Milky Way. They include only one well-studied LSB: NGC 3741. In other terms, their claim that their result can be applied to all galaxies is not correct. The D09 result and ours are in qualitative agreement because they mainly use HSBs, as we do in this first part of the analysis. Thus HSBs are characterized by a constant DMsd.
In Fig. 2, we plot the surface density in terms of the effective surface brightness, \(\Sigma_{\rm eff}\). As in Fig. 1, the data points with error-bars represent the HSBs with surface brightness larger than 300 \({\rm L_{\odot}/pc^{2}}\) (upper panel), and 200 \({\rm L_{\odot}/pc^{2}}\) (lower panel). The dashed line is the D09 value, and the solid line the fit to the data. We also plotted, as a shaded region, the \(1\sigma\) confidence level region obtained using MOND (see Appendix). The plot gives two pieces of information: a) the DMsd is constant and does not depend on \(\Sigma_{\rm eff}\); b) MOND predicts a constant value of the DMsd, equal to \(\log{(\Sigma/{\rm M_{\odot}pc^{-2}})}=2.14\), almost identical to that of D09, qualitatively in agreement with our result as far as the constancy is concerned, but in disagreement with the value that we obtain, \(\log{(\Sigma/{\rm M_{\odot}pc^{-2}})}=2.54\) (upper panel).
Figure 2: Upper panel: the DMsd in terms of the effective surface brightness, \(\Sigma_{\rm eff}\), for HSBs. The data points with error-bars represent the HSBs with surface brightness larger than 300 \({\rm L_{\odot}/pc^{2}}\). The dashed line is the D09 value, the shaded region the \(1\sigma\) confidence level region obtained using MOND, and the solid line the fit to the data. Lower panel: same as the upper panel but for HSBs with surface brightness larger than 200 \({\rm L_{\odot}/pc^{2}}\).
In Fig. 3, the upper panel represents the DMsd in terms of the magnitude, \(M\). The data points with error-bars represent the LSBs with surface brightness smaller than 100 \({\rm L_{\odot}/pc^{2}}\). The dashed line is the D09 value, and the solid line the fit to the data. The plot shows that the DMsd decreases with increasing magnitude (smaller luminosity). However, in the range of magnitudes \(\simeq-20.5\) to \(\simeq-17.5\), the surface density, within the errors, is in agreement with a flat surface density close to that of D09. For smaller magnitudes, several LSBs have a smaller value than that predicted by D09. In other terms, for \(\Sigma_{\rm eff}<100{\rm L_{\odot}/pc^{2}}\), for decreasing values of the magnitude the LSBs tend to have smaller values of the surface density with respect to D09. The bottom panel of Fig. 3 represents the DMsd in terms of \(\Sigma_{\rm eff}\). The behavior for decreasing values of \(\Sigma_{\rm eff}\) is similar to that of the upper panel. Within the error limits, and in the range \(1.4\preceq\log{\Sigma_{\rm eff}}\preceq 2\), the DMsd is in agreement with the D09 result, while below this range it has smaller values than D09. The bottom panel also shows a shaded region, which is the \(1\sigma\) confidence level region obtained using MOND. The plot shows that in the range \(1.6\preceq\log{\Sigma_{\rm eff}}\preceq 2\), MOND is in agreement with the D09 prediction, while for smaller values of \(\Sigma_{\rm eff}\) it shows a decline towards smaller values of the surface density, in agreement with the data.
Figure 3: Upper panel: DMsd in terms of the magnitude, \(M\), for LSBs. The data points with error-bars represent the LSBs with surface brightness smaller than 100 \({\rm L_{\odot}}/{\rm pc^{2}}\). The dashed line is the D09 value, and the solid line the fit to the data. Bottom panel: the DMsd in terms of the effective surface brightness, \(\Sigma_{\rm eff}\), for LSBs with surface brightness smaller than 100 \({\rm L_{\odot}}/{\rm pc^{2}}\). The dashed line is the D09 value, and the solid line the fit to the data. The shaded region is the \(1\sigma\) confidence level region obtained using MOND.
The upper panel of Fig. 4 represents the surface density in terms of the magnitude, \(M\). The data points with error-bars represent the LSBs with surface brightness smaller than 25 \({\rm L_{\odot}/pc^{2}}\). The trend of the DMsd with the magnitude is similar to that shown in Fig. 3, even if the values of the DMsd are smaller than that of D09 already from the most negative magnitudes. The bottom panel of Fig. 4 represents the DMsd in terms of \(\Sigma_{\rm eff}\). As before, the dots with error-bars represent the LSB galaxies, the shaded region is the \(1\sigma\) confidence level region obtained using MOND, the solid line the fit to the LSBs with surface brightness smaller than 25 \({\rm L_{\odot}/pc^{2}}\), and the dashed line the D09 prediction. Within the error limits, and in the range \(1.35\preceq\log\Sigma_{\rm eff}\preceq 1.5\), the DMsd is in agreement with the D09 result, while below this range it has smaller values than D09. The behavior for decreasing values of \(\Sigma_{\rm eff}\) is similar to that of the upper panel. In this case MOND predicts a surface density smaller than that of D09 in the entire range of the data. As in Fig. 3, it shows a decline towards smaller values of the DMsd, in agreement with the data.
Figure 4: Same as Fig. 3 but for surface brightness smaller than 25 \(\rm{L_{\odot}/pc^{2}}\).
Summarizing the results reported so far: concerning HSBs, the DMsd is constant in terms of both the magnitude (in qualitative agreement with D09) and \(\Sigma_{\rm eff}\) (in qualitative agreement with MOND), but larger than the predictions of D09 and MOND. In the case of the LSBs, there is a decrease of the DMsd with decreasing luminosity (in disagreement with D09) and with decreasing \(\Sigma_{\rm eff}\) (in agreement with MOND), and it is usually smaller than the D09 prediction. MOND is in qualitative agreement with the data, predicting a decrease of the DMsd with \(\Sigma_{\rm eff}\).
## 4 Discussion
By means of the HSBs and LSBs in SPARC, we studied the DMsd of these classes of galaxies. Our interest was to understand whether the DM surface density is a constant universal quantity equal to \(\log{(\Sigma/{\rm M_{\odot}pc^{-2}})}=2.15\pm 0.2\), as claimed by D09, or whether it assumes different values for different classes of objects, depending on their baryon surface density. At the same time, we studied the MOND prediction ([2; 69]), namely a constant DMsd in HSBs close to that predicted by D09, and a smaller, decreasing DMsd with decreasing surface brightness (baryon surface density) in the case of LSBs. In order to study these two issues, we grouped the SPARC sample into HSBs and LSBs, and we also considered, for these two groups of galaxies, several thresholds in \(\Sigma_{\rm eff}\). In the case of HSBs, and for galaxies characterized by \(\Sigma_{\rm eff}>200L_{\odot}/pc^{2}\), and \(\Sigma_{\rm eff}>300L_{\odot}/pc^{2}\), we found that the DMsd vs magnitude is constant, as in D09,
and similarly, we found a constant DMsd vs \(\Sigma_{\rm eff}\), as in the MOND prediction. In spite of the fact that the result of our study, in the case of HSBs, is in qualitative agreement with D09 and MOND, the value of the DMsd is larger than the one predicted by D09 and MOND. In the case of LSBs, we also used two thresholds: \(\Sigma_{\rm eff}<100L_{\odot}/pc^{2}\), and \(\Sigma_{\rm eff}<25L_{\odot}/pc^{2}\). For both thresholds, the DMsd is decreasing with increasing magnitude (decreasing luminosity) over the whole magnitude range. In the case of \(\Sigma_{\rm eff}<100L_{\odot}/pc^{2}\), the DMsd, within the errors, and in the range of magnitudes \(\simeq-20.5\) to \(\simeq-17.5\), is in agreement with a flat surface density close to that of D09. Going to less negative magnitudes, several LSBs have a smaller value than that predicted by D09. In the case of LSBs with surface brightness smaller than 25 \({\rm L_{\odot}/pc^{2}}\), the trend of the surface density with magnitude is similar to that of the previous case (\(\Sigma_{\rm eff}<100L_{\odot}/pc^{2}\)), even if the values of the DMsd are smaller than that of D09 already from the most negative magnitudes. Concerning the trend of the DM surface density in terms of \(\Sigma_{\rm eff}\) for the two thresholds (\(\Sigma_{\rm eff}<100L_{\odot}/pc^{2}\), and \(\Sigma_{\rm eff}<25L_{\odot}/pc^{2}\)), it is decreasing with decreasing \(\Sigma_{\rm eff}\). In the case \(\Sigma_{\rm eff}<100L_{\odot}/pc^{2}\), the DMsd is, within the error limits, and in the range \(1.4\preceq\log\Sigma_{\rm eff}\preceq 2\), in agreement with the D09 result, while below this range it has smaller values than D09. In the case \(\Sigma_{\rm eff}<25L_{\odot}/pc^{2}\), the DMsd is, within the error limits, and in the range \(1.35\preceq\log\Sigma_{\rm eff}\preceq 1.5\), in agreement with the D09 result, while below this range it has smaller values than D09. Concerning MOND's predictions, in the case \(\Sigma_{\rm eff}<100L_{\odot}/pc^{2}\), and for \(1.6\preceq\log\Sigma_{\rm eff}\preceq 2\), MOND is in agreement with the D09 prediction, while for smaller values of \(\Sigma_{\rm eff}\) it shows a decline towards smaller values of the DMsd, in agreement with the data. In the case \(\Sigma_{\rm eff}<25L_{\odot}/pc^{2}\), for all values of \(\Sigma_{\rm eff}\), the DMsd is smaller than the D09 prediction, and shows a decline towards smaller values of the DMsd, in agreement with the data.
In summary, SPARC's HSBs tell us that both D09 and MOND correctly predict a constant DMsd, even if the value predicted is smaller than that obtained from the data. SPARC's LSBs show a decrease of the DMsd with decreasing luminosity and surface brightness (or baryon surface density), in agreement with MOND but not with D09.
**Appendix: DMsd in MOND**
In this section, we summarize the calculation developed in [69] to obtain the DMsd in MOND. Using the MOND formulation by [83], the potential \(\phi\) is given by a generalization of the Poisson equation
\[\nabla[\mu(|\nabla\phi|/a_{0})\nabla\phi]=4\pi G\rho \tag{9}\]
where the baryon density is indicated by \(\rho\), \(\mu(x)\) is the interpolating function, and \(a_{0}\) is the MOND constant. The difference between the MOND acceleration field \(\nabla\phi\) and the Newtonian one, interpreted from a Newtonian point of view, is explained by the presence of dark matter or, as indicated by Milgrom, by "phantom matter" having a density [84]
\[\rho_{\rm p}=\frac{1}{4\pi G}\Delta\phi-\rho \tag{10}\]
Using Eq. (9), we can write
\[\rho_{P}({\bf r})=-\frac{1}{4\pi Ga_{0}}\frac{\mu^{\prime}}{\mu}\nabla|\nabla \phi|\nabla\phi+\rho({\bf r})(1/\mu-1). \tag{11}\]
The previous equation can be written as
\[\rho_{P}=\rho_{P_{1}}+\rho_{P_{2}}=\frac{-a_{0}}{4\pi G}{\bf e}\cdot\nabla \mathcal{V}(|\nabla\phi|/a_{0})+(1/\mu-1)\rho. \tag{12}\]
After defining \(\mathcal{V}(x)=\int L(x)dx\), \(x={\rm g}/a_{0}\), \(L=\frac{\mu^{\prime}}{\mu}x\), and defining a vector \({\bf e}\) in the direction of \(\nabla\phi\), we also have
\[\rho_{P}=\rho_{P_{1}}+\rho_{P_{2}}=\frac{-a_{0}}{4\pi G}{\bf e}\cdot\nabla \mathcal{V}(|\nabla\phi|/a_{0})+(1/\mu-1)\rho, \tag{13}\]
with
\[\rho_{P_{1}}=\frac{-a_{0}}{4\pi G}{\bf e}\nabla\mathcal{V}(|\nabla\phi|/a_{0}), \tag{14}\]
\[\rho_{P_{2}}=(1/\mu-1)\rho. \tag{15}\]
The integral of Equation (13) is calculated considering two cases: \(x\geq 1\) (acceleration above the MOND universal constant), and \(x\leq 1\). Let us consider the second term (Equation 15), \(\rho_{P_{2}}=(1/\mu-1)\rho\). We write \(a_{N}=\frac{g_{N}}{a_{0}}=({\rm g}/a_{0})\mu({\rm g}/a_{0})\), where \(g_{N}={\rm g}\mu({\rm g}/{\rm a}_{0})\) is the Newtonian acceleration.
In the case \(n=1\), where \(\mu=\frac{x}{(1+x^{n})^{1/n}}=\frac{\mbox{g}/a_{0}}{1+\mbox{g}/a_{0}}\), multiplying numerator and denominator by \(\mu\) we have
\[\mu=\frac{\mu\mbox{g}/a_{0}}{\mu\mbox{g}/a_{0}+\mu}=\frac{a_{N}}{a_{N}+\mu}, \tag{16}\]
solving with respect to \(\mu\) we have
\[\mu=-\frac{1}{2}\,a_{N}+\frac{1}{2}\,\sqrt{a_{N}^{2}+4a_{N}}. \tag{17}\]
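The step from Eq. (16) to Eq. (17) is just the positive root of the quadratic \(\mu^{2}+a_{N}\mu-a_{N}=0\); a quick symbolic check (our own, using sympy) is:

```python
import sympy as sp

mu, aN = sp.symbols('mu a_N', positive=True)

# Eq. (16): mu = a_N / (a_N + mu)  <=>  mu**2 + a_N*mu - a_N = 0
print(sp.solve(sp.Eq(mu, aN / (aN + mu)), mu))
# -> [-a_N/2 + sqrt(a_N**2 + 4*a_N)/2], i.e. Eq. (17)
```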
For a double exponential disk, with cylindrical radius \(R\) and altitude \(z\), we have
\[a_{N}\simeq\frac{GM}{a_{0}R^{2}}=\frac{G2\pi\rho_{0}h_{z}{h_{r}}^{2}}{a_{0}R^{2 }}\simeq\frac{\Sigma_{b}}{\Sigma_{M}}\frac{{h_{r}}^{2}}{R^{2}} \tag{18}\]
where \(\Sigma_{b}\), the baryonic surface density, is approximated by the integral of a constant volume density (\(\Sigma_{b}\simeq\int\rho_{0}dz=\rho_{0}h_{z}\)).
The MOND interpolating function becomes
\[\mu=-\frac{1}{2}\,\frac{\Sigma_{b}\,h_{r}^{2}}{\Sigma_{M}\,R^{2}}+\frac{1}{2}\,\sqrt{\frac{\Sigma_{b}^{2}\,h_{r}^{4}}{\Sigma_{M}^{2}\,R^{4}}+4\,\frac{\Sigma_{b}\,h_{r}^{2}}{\Sigma_{M}\,R^{2}}}. \tag{19}\]
Then
\[\frac{1-\mu}{\mu}\int_{-\infty}^{+\infty}\rho\,dz=2(1-\mu)\,x\,\Sigma_{M}\frac{R^{2}}{h_{r}^{2}}\,{\rm e}^{-R/h_{r}}. \tag{20}\]
The integral of the first (Eq. 13) term is given by
\[\int_{-\infty}^{+\infty}\rho_{P_{1}}\,dz=2\int_{0}^{\infty}\rho_{P_{1}}\,dz=\Sigma_{M}\left[{\cal V}(\infty)-{\cal V}(0)\right]=\Sigma_{M}\int_{0}^{\infty}L(x)\,dx=\lambda\,\Sigma_{M}, \tag{21}\]
where \(\Sigma_{M}=\frac{a_{0}}{2\pi G}\).
Integrating the Burkert profile, D09 obtained a surface density given by
\[\Sigma_{0}=2\int_{0}^{\infty}\rho\,dr=2\int_{0}^{\infty}\frac{\rho_{0}r_{0}^{3}}{(r+r_{0})(r^{2}+r_{0}^{2})}\,dr=\frac{\pi}{2}\rho_{0}r_{0}=\frac{\pi}{2}\Sigma_{c}. \tag{22}\]
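Eq. (22) can be verified symbolically; the snippet below is our own consistency check of the integral, not part of D09's derivation.

```python
import sympy as sp

r, r0, rho0 = sp.symbols('r r_0 rho_0', positive=True)
burkert = rho0 * r0**3 / ((r + r0) * (r**2 + r0**2))   # Eq. (1)

sigma0 = 2 * sp.integrate(burkert, (r, 0, sp.oo))
print(sp.simplify(sigma0))   # -> pi*rho_0*r_0/2, i.e. (pi/2) Sigma_c
```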
If we call \(\Sigma_{c}^{*}\) the MOND analog of \(\Sigma_{c}\) we have
\[\Sigma_{c}^{*}=\frac{2\lambda}{\pi}\Sigma_{M}, \tag{23}\]
with \(\Sigma_{M}=138\,\frac{a_{0}}{1.2\times 10^{-8}\,{\rm cm\,s^{-2}}}\,M_{\odot}/pc^{2}\).
The full integral can be written as
\[F=F_{1}+F_{2}=\frac{2}{\pi}\Sigma_{M}\int_{0}^{x}L(x)dx+\frac{2}{\pi}2(1-\mu)x \Sigma_{M}\frac{R^{2}}{h_{r}^{2}}\exp^{-R/h_{r}}. \tag{24}\]
As shown in [69], in the case \(x\leq 1\) we have
\[F=\frac{2}{\pi}\Sigma_{M}\arctan{\left(\sqrt{\frac{\Sigma_{b}}{\Sigma_{M}}}\frac{h_{r}}{R}\right)}+\frac{4}{\pi}\,\frac{R\,\Sigma_{M}}{h_{r}}\left(1+\frac{1}{2}\,\frac{\Sigma_{b}\,h_{r}^{2}}{\Sigma_{M}\,R^{2}}-\frac{1}{2}\,\sqrt{\frac{\Sigma_{b}^{2}\,h_{r}^{4}}{\Sigma_{M}^{2}\,R^{4}}+4\,\frac{\Sigma_{b}\,h_{r}^{2}}{\Sigma_{M}\,R^{2}}}\right)\sqrt{\frac{\Sigma_{b}}{\Sigma_{M}}}\,{\rm e}^{-R/h_{r}}. \tag{25}\]
At small \(x\), the first term on the right-hand side tends to zero, and \(F\) is dominated by the second term. At first order, the trend of \(F\) is \(\sqrt{\Sigma_{b}\Sigma_{M}}\).
In the case \(x\geq 1\), we have
\[F=\frac{2}{\pi}\Sigma_{M}\arctan{\left(\frac{\Sigma_{b}}{\Sigma_{M}}\frac{h_{r}^{2}}{R^{2}}\right)}+\frac{4}{\pi}\left(1+\frac{1}{2}\,\frac{\Sigma_{b}\,h_{r}^{2}}{\Sigma_{M}\,R^{2}}-\frac{1}{2}\,\sqrt{\frac{\Sigma_{b}^{2}\,h_{r}^{4}}{\Sigma_{M}^{2}\,R^{4}}+4\,\frac{\Sigma_{b}\,h_{r}^{2}}{\Sigma_{M}\,R^{2}}}\right)\Sigma_{M}\,{\rm e}^{-R/h_{r}}\,\frac{R^{2}}{h_{r}^{2}}. \tag{26}\]
For large \(x\), the second term tends to 0, the arctan tends to \(\pi/2\), and the first term is \(\simeq\Sigma_{M}\).
As follows from the previous results and the plots in [69], there is a double trend of the surface density. For \(R=h_{r}\), and at small \(\Sigma_{b}/\Sigma_{M}\), the surface density increases as \(\sqrt{\frac{\Sigma_{b}}{\Sigma_{M}}}\,\Sigma_{M}\). For larger \(\Sigma_{b}/\Sigma_{M}\), the behavior of the surface density flattens until, at large \(\Sigma_{b}/\Sigma_{M}\), it tends to \(\Sigma_{M}\).
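This double trend can be checked directly from Eq. (25); the sketch below is our own illustration (variable names and the sampled range are assumptions), and it only evaluates the weak-field branch of Eq. (25), where \(F/\Sigma_{M}\) tracks \(\sqrt{\Sigma_{b}/\Sigma_{M}}\) for \(R=h_{r}\).

```python
import numpy as np

def F_over_SigmaM(s, u=1.0):
    """Eq. (25) divided by Sigma_M; s = Sigma_b/Sigma_M, u = R/h_r."""
    aN = s / u**2                         # Newtonian acceleration, Eq. (18)
    one_minus_mu = 1.0 + 0.5 * aN - 0.5 * np.sqrt(aN**2 + 4.0 * aN)
    term1 = (2.0 / np.pi) * np.arctan(np.sqrt(s) / u)
    term2 = (4.0 / np.pi) * u * one_minus_mu * np.sqrt(s) * np.exp(-u)
    return term1 + term2

for s in np.logspace(-3, 0, 4):
    print(f"Sigma_b/Sigma_M = {s:9.3e}   F/Sigma_M = {F_over_SigmaM(s):.4f}"
          f"   sqrt(Sigma_b/Sigma_M) = {np.sqrt(s):.4f}")
```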
**Acknowledgements**
The author thanks Valerio Pirronello for a careful reading of the manuscript. Financial contribution from the Programma ricerca di Ateneo UNICT 2020-22 linea 2 is graciously acknowledged. All authors read and approved the final manuscript. |
2303.01324 | A Step Closer Towards 5G mmWave-based Multipath Positioning in Dense
Urban Environments | 5G mmWave technology can turn multipath into a friend, as multipath
components become highly resolvable in the time and angle domains. Multipath
signals have not only been used in the literature to position the user
equipment (UE) but also to create a map of the surrounding environment. Yet,
many multipath-based methods in the literature share a common assumption, which
entails that multipath signals are caused by single-bounce reflections only,
which is not usually the case. There are very few methods in the literature
that accurately filters out higher-order reflections, which renders the
exploitation of multipath signals challenging. This paper proposes an ensemble
learning-based model for classifying signal paths based on their order of
reflection using 5G channel parameters. The model is trained on a large dataset
of 3.6 million observations obtained from a quasi-real ray-tracing based 5G
simulator that utilizes 3D maps of real-world downtown environments. The
trained model had a testing accuracy of 99.5%. A single-bounce reflection-based
positioning method was used to validate the positioning error. The trained
model enabled the positioning solution to maintain sub-30cm level accuracy 97%
of the time. | Qamar Bader, Sharief Saleh, Mohamed Elhabiby, Aboelmagd Noureldin | 2023-03-02T14:58:26Z | http://arxiv.org/abs/2303.01324v1 | # A Step Closer Towards 5G mmWave-based Multipath Positioning in Dense Urban Environments
###### Abstract
5G mmWave technology can turn multipath into a friend, as multipath components become highly resolvable in the time and angle domains. Multipath signals have not only been used in the literature to position the user equipment (UE), but also to create a map of the surrounding environment. Yet, many multipath-based methods in the literature share a common assumption, which entails that multipath signals are caused by single-bounce reflections only, which is not usually the case. There are very few methods in the literature that accurately filter out higher-order reflections, which renders the exploitation of multipath signals challenging. This paper proposes an ensemble learning-based model for classifying signal paths based on their order of reflection using 5G channel parameters. The model is trained on a large dataset of \(3.6\) million observations obtained from a quasi-real ray-tracing based 5G simulator that utilizes 3D maps of real-world downtown environments. The trained model had a testing accuracy of \(99.5\%\). A single-bounce reflection-based positioning method was used to validate the positioning error. The trained model enabled the positioning solution to maintain sub-\(30cm\) level accuracy \(97\%\) of the time.
## 1 Introduction
Accurate positioning and mapping are crucial for the safe and efficient operation of autonomous vehicles, particularly in urban environments. The ability to accurately locate the vehicle within the environment and to understand the layout of the surrounding area is essential for the vehicle to make informed decisions and navigate safely. Urban environments present a number of challenges for autonomous vehicles, such as complex road networks, dynamic traffic conditions, and a high density of obstacles and pedestrians. Without accurate positioning and mapping, autonomous vehicles may struggle to safely navigate these environments and may even pose a risk to other road users (Reid et al., 2019).
## 1 Problem Statement
GPS positioning solutions are widely used in autonomous vehicles, but their accuracy and reliability can be affected by the urban environment. In urban areas, high-rise buildings, bridges, and other structures can block or reflect satellite signals, resulting in multipath and shadowing effects. These effects can degrade the positioning accuracy or even cause a total signal blockage. Additionally, the crowded radio frequency environment can cause interference with GPS signals (Jing et al., 2022). Inertial Navigation Systems (INS) and perception systems, while useful for autonomous vehicles, have certain limitations when it comes to positioning. INS relies on sensors such as accelerometers and gyroscopes to compute the vehicle's position, velocity, and orientation. However, the accuracy of INS degrades over time due to the growing IMU biases and the accumulation of errors caused by dead-reckoning. This makes INS less reliable in the long term, and it needs to be periodically corrected via a reliable external source (Noureldin et al., 2012). Perception systems, such as cameras and LiDARs, can provide a wealth of information about the vehicle's environment, but they are limited in their ability to provide precise position information. They can be used for localization and mapping, but the accuracy of these systems can be affected by factors such as scene illumination, weather conditions, and the presence of obstacles or occlusions (Benedek et al., 2021).
## 2 Motivation
5G mmWave technology is considered a promising alternative as a positioning technology. One of the main features of 5G NR lies in its large bandwidth, which can reach \(400\) MHz, enabling accurate time-based measurements such as time of arrival (ToA), time difference of arrival (TDoA), and round-trip time (RTT). Additionally, it enables multipath resolvability in the time domain. 5G also features MIMO capabilities which allow for more accurate estimates of angle-based measurements such as angle of arrival (AoA), and angle of departure (AoD). Massive MIMO capabilities also allow for the ability to resolve multipath signals in the angular domain. Last but not least, the deployment of 5G gNBs is anticipated to occur every \(200m\) to \(500m\), suggesting an enhanced line-of-sight (LoS) connectivity with the UE. 5G NR has also demonstrated its potential as a mapping technology through its ability to accurately distinguish multipath signals. These signals provide valuable information about the surrounding environment, thereby enabling the creation of environment maps. Multipath signals are useful not only for environment mapping but also for bridging 5G outages that are expected in highly dynamic environments, as well as for providing redundant information to improve the precision of the UE position estimate. However, most of the methods in the literature assume a single-bounce reflection (SBR), rendering multipath signals difficult to use. For instance, the works in Wen and Wymeersch (2021); Wei et al. (2011); Li and Wang (2021); Kakkavas et al. (2020, 2021); Kulmer et al. (2017) assume SBRs for UE positioning. Additionally, the work in Kakkavas et al. (2019) utilizes SBR in resolving the clock offset between a UE and a BS, enabling precise single-base-station positioning. Therefore, all of the aforementioned methods would directly benefit from methodologies that accurately resolve SBRs from other multipath signals.
## 3 Contributions
The ultimate objective of this research is to attain high-precision positioning in dense urban environments where GNSS is denied and vision is degraded. The positioning solution should be robust, continuous, and accurate at the decimeter level. To fulfill this, an order of reflection identification (OoRI) technique is needed to fully exploit multipath signals in multipath-rich environments (i.e. urban canyons). Our contributions can be summarised in the following aspects:
* Proposal of an ensemble-learning-based order of reflection identifier.
* Analysis of positioning errors using a single-bounce reflection multipath positioning scheme proposed by Miao et al. (2007).
* Validation of the proposed approach through a novel quasi-real 5G simulator.
## 4 Significance
Our proposed method for identifying the order of reflection for multipath signals has the potential to revolutionize the field of positioning and mapping. The ability to accurately identify the order of reflection can enable realizing many positioning and mapping methods in the literature. Furthermore, the proposed method can also be used to improve the accuracy of mapping systems that rely on wireless signals, by providing redundant information about the location and movement of objects.
## 5 Related Work
There are limited works in the literature that address identifying the order of reflection of multipath signals. The authors in (Seow and Tan, 2008) propose a two-step proximity detection scheme to detect and discard multiple-bound scattering paths in order to identify the order of reflection. First, they find the centroid of the line of possible mobile device location (LPMD) using a normalized path weighting factor. Second, they calculate the normalized Euclidean distance between the midpoint of each path and the estimated centroid, with paths whose distance exceeds a pre-determined threshold being considered multiple-bound
scattering paths. The proposed approach is computationally intensive, as it necessitates the calculation of the LPMD for all paths. This may pose challenges for the real-time implementation of the method. Additionally, the weighting strategy employed in the approach, which is based on the assumption that multiple-bound scattering paths have a larger ToA compared to one-bound scattering paths, may not hold true in all scenarios. In (Lin et al., 2018), the authors propose a method based on the observation that diffuse scattering and multiple-reflection channels have significantly lower RSS than LoS and single-bounce specular reflection paths. The strongest components are chosen from the received paths using a predetermined threshold in order to filter out these interfering paths. However, this approach unnecessarily discards useful information. Specifically, it fails to take into account that single-bounce diffracted signals can also be useful for positioning and mapping applications (Miao et al., 2007). Research in the field has demonstrated the utility of both types of signal interactions for these purposes; thus, relying solely on single-bounce specular reflections may not provide a comprehensive representation of the environment.
## 6 Paper Organization
The remainder of the paper is organized as follows: Section II establishes the system model and the foundations of 5G positioning. Section III presents the proposed order of reflection identification scheme. Section IV provides details on the experimental setup and presents the results along with discussions. Finally, Section V concludes the paper.
## 7 System Model
### 5G System Model
To enable more accurate positioning measurements, 5G NR features dedicated reference signals (Mogyorosi et al., 2022). These signals are known as the UpLink Sounding Reference Signal (UL-SRS) and the DownLink Positioning Reference Signal (DL-PRS). Through the correlation of the received reference signal and the original reference signal configuration, such signals assist in the estimation of ToA, TDoA, and RTT. Typically, UL-AoA is estimated using UL-SRS in conjunction with multiple signal classification (MUSIC) or estimation of signal parameters via rotational invariance techniques (ESPRIT) (Ruan et al., 2022). On the other hand, DL-AoD estimation often occurs during the beamforming (BF) training sequence. This paper uses DL-PRS signals to compute AoD and RTT since the estimated position is computed at the UE side rather than at the network. The position of the UE in relation to the location of the base station (gNB) in a 3D Cartesian coordinate system is mathematically represented by the following equations:
\[\mathbf{p}^{\prime}=\mathbf{p}_{b}^{\prime}+d^{\prime}\begin{pmatrix}\sin\alpha\cos \phi\\ \cos\alpha\cos\phi\\ \sin(\phi)\end{pmatrix} \tag{1}\]
Where \(\mathbf{p}^{\prime}=[x,y,z]^{T}\) and \(\mathbf{p}_{b}^{\prime}=[x_{b},y_{b},z_{b}]^{T}\) represent the 3D Cartesian coordinates of the UE and the BS, respectively; \(d^{\prime}\), \(\alpha\), and \(\phi\) represent the spherical coordinates of the UE, with \(d^{\prime}\) being the 3D distance between the UE and the BS. The azimuth angle between the UE and the BS is denoted by \(\alpha\), while the elevation angle is denoted by \(\phi\). If the vehicle's height is known, the 2D position of the vehicle can be calculated as seen in (2).
\[\begin{split}\mathbf{p}&=\mathbf{p}_{b}+d\begin{pmatrix}\sin \alpha\\ \cos\alpha\end{pmatrix}\\ d&=\sqrt{r^{2}-\Delta z^{2}}\end{split} \tag{2}\]
Where \(\mathbf{p}=[x,y]^{T}\) denotes the 2D position of the UE; \(d\) is the 2D range between the UE and the gNB, and \(\Delta z\) is the height difference between the UE and the gNB.
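For illustration, Eq. (2) translates into a few lines of code; the sketch below is our own (variable names are assumptions), with the 3D range coming, e.g., from an RTT measurement and \(\alpha\) from the DL-AoD estimate.

```python
import numpy as np

def ue_position_2d(p_b, r, alpha_deg, dz):
    """2D UE position from the gNB position, the 3D range r, the azimuth
    AoD alpha, and the known UE-gNB height difference, following Eq. (2)."""
    d = np.sqrt(r**2 - dz**2)                 # project the 3D range to the plane
    a = np.radians(alpha_deg)
    return np.asarray(p_b) + d * np.array([np.sin(a), np.cos(a)])

# Example: gNB at the origin, 100 m range, 30 deg azimuth, 8 m height offset
print(ue_position_2d([0.0, 0.0], 100.0, 30.0, 8.0))   # ~[49.84, 86.32]
```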
### Multipath Positioning
A novel algorithm for multipath positioning is proposed in (Miao et al., 2007). The algorithm estimates the possible region of the UE position by using the AoD, denoted \(\alpha\), the AoA, denoted \(\beta\), and the distance \(d\) of the strongest propagation path, as shown in Fig. 1.
The coordinates of the scatterer, \(\mathbf{p_{s}}=[x_{s},y_{s}]^{T}\), and the UE, \(\mathbf{p}\), are given by the following equations:
\[\mathbf{p_{s}}=\mathbf{p_{b}}+r\begin{pmatrix}\sin\beta\\ \cos\beta\end{pmatrix},\qquad r\in(0,d) \tag{3}\]
\[\mathbf{p}=\mathbf{p_{s}}-(d-r)\left(\begin{matrix}\sin\alpha\\ \cos\alpha\end{matrix}\right),\qquad r\in(0,d) \tag{4}\]
Where \(r\) is the distance between the BS and the scatterer, and \(d\) is the distance of the strongest path. The possible position of the UE can be described by the straight-line equation:
\[y=k(\alpha,\beta)x+b(\alpha,\beta,d), \tag{5}\]
where,
\[k(\alpha,\beta)=\frac{\cos\alpha+\cos\beta}{\sin\alpha+\sin\beta}, \tag{6}\]
and,
\[b(\alpha,\beta,d)=-k(\alpha,\beta)(x_{b}-d\sin\alpha)+y_{b}-d\cos\alpha. \tag{7}\]
This implies that if there is knowledge about two propagation paths from the UE, the position of the UE can be estimated as the intersection of two lines.
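A minimal implementation of this intersection, following Eqs. (5)-(7), could look as follows; this is our own sketch (names and the example numbers are assumptions), and the intersection degenerates when the two loci are parallel, i.e. when the slopes coincide.

```python
import numpy as np

def path_line(p_b, alpha, beta, d):
    """Slope and intercept of the UE locus of one path, Eqs. (6)-(7).
    alpha: AoD, beta: AoA (radians); d: path length; p_b: gNB position."""
    xb, yb = p_b
    k = (np.cos(alpha) + np.cos(beta)) / (np.sin(alpha) + np.sin(beta))
    b = -k * (xb - d * np.sin(alpha)) + yb - d * np.cos(alpha)
    return k, b

def ue_from_two_paths(p_b, paths):
    """Intersect the loci of two single-bounce paths [(alpha, beta, d), ...]."""
    (k1, b1), (k2, b2) = (path_line(p_b, *p) for p in paths)
    x = (b2 - b1) / (k1 - k2)          # fails if the two lines are parallel
    return np.array([x, k1 * x + b1])

# Example with two hypothetical single-bounce paths from one gNB at the origin
paths = [(np.radians(40.0), np.radians(10.0), 120.0),
         (np.radians(-25.0), np.radians(-60.0), 150.0)]
print(ue_from_two_paths((0.0, 0.0), paths))
```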
## III Order of reflection identification
Identifying the order of reflection of multipath signals is not a trivial problem. This is due to the fact that such signals undergo various interactions with the environment, such as reflection, diffraction, and refraction. A single time-based, angle-based, or power-based observation might correspond to various combinations of these interactions. To address such a complex classification task, a simple learning-based approach is not efficient; therefore, ensemble algorithms are investigated. Ensemble learning is a machine learning technique that combines several base models in order to produce a more accurate, robust, and reliable model (Zhou, 2012). It is used to improve the performance of a model, to assign a confidence score to the decision made by the model, and for data fusion. The proposed model is summarised in Fig.2. Bagging, which stands for bootstrap aggregating, is the ensemble method used in this work. It is a technique used to reduce the variance of a base model by averaging its predictions across multiple samples of data. Bagging works by randomly selecting subsets of data with replacement and training individual models on each subset. Once the models are trained, a voting procedure takes place to consolidate their predictions. In this work, the ensemble model was constructed by aggregating the predictions of fourteen decision tree models (Myles et al., 2004). The dataset under examination consists of around \(3.6M\) observations, with features including ToA, AoA, AoD, and RSS. The ground truth labels are provided by a quasi-real 5G simulator, as described in detail in section IV.1. The labels comprise seven classes representing the reflection order (RO) of each multipath signal, with class 0 denoting an LoS signal and class 6 denoting a six-bounce NLoS signal. The data was shuffled before being split into training, validation, and testing sets, with proportions of \(60\%\), \(20\%\), and \(20\%\), respectively. This allows for an accurate evaluation of models developed using the dataset, as the models will be tested on a diverse set of unseen data, ensuring generalizability. The large size of the dataset also allows for a high level of statistical power, which is important for uncovering meaningful patterns and relationships within the data. For validation, 5-fold cross-validation is utilized to detect overfitting and evaluate the generalization performance of the model (Refaeilzadeh et al., 2009).
Figure 1: System model of a single-bounce reflection scenario.
In order to verify the premise of the paper, the single-bounce-based multipath positioning solution proposed by Miao et al. (2007) is implemented using data from the proposed method. The general framework is summarized in Fig.3. For every time instant \(t\) there exist many signal reflections. The 5G measurements are fed into the OoRI method, which filters out reflections with ROs greater than one (i.e. multiple-bounce reflections). Once two reflections satisfy the foregoing condition, the corresponding 5G measurements are fed into the multipath positioning method for UE position computation. If no reflection satisfies the first condition, the positioning algorithm uses the strongest two paths, which may result in errors in the final position solution.
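A minimal scikit-learn sketch of this bagging pipeline is shown below. The random arrays merely stand in for the 3.6M-row simulator dataset, and any hyper-parameter other than the number of trees, the bootstrap resampling, and the split proportions is an assumption, since the text does not report them.

```python
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import train_test_split, cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 4))        # stand-in rows of [ToA, AoA, AoD, RSS]
y = rng.integers(0, 7, size=10_000)     # stand-in reflection-order labels

# Shuffled 60/20/20 train/validation/test split, as in the text
X_tr, X_rest, y_tr, y_rest = train_test_split(X, y, test_size=0.4, shuffle=True)
X_val, X_te, y_val, y_te = train_test_split(X_rest, y_rest, test_size=0.5)

# Bagging of 14 trees: BaggingClassifier's default base learner is a
# decision tree, each trained on a bootstrap resample of the training set
model = BaggingClassifier(n_estimators=14, bootstrap=True).fit(X_tr, y_tr)

print("5-fold CV accuracy:", cross_val_score(model, X_val, y_val, cv=5).mean())
print("test accuracy:", model.score(X_te, y_te))
```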
## IV Experimental Road Tests and Results
### Experimental Road Tests
Siradel, a 5G simulation environment that mimics 5G measurements based on ray-tracing capabilities, was used to test the proposed method. The simulation environment is made up of 3D maps that resemble real-world settings and were obtained from LiDAR scans of downtown Toronto. Fig.4a depicts a Google Earth snapshot of a segment from downtown Toronto. A snapshot of the corresponding area in the Siradel environment is depicted in Fig.4b. This suggests that acquired multipath signals are likely to approximate realistic signal propagation, as utilizing a map of the real downtown environment provides accurate information on the physical properties of the environment. For instance, building heights, street widths, and material compositions, which affect the propagation of 5G signals.
A reference trajectory of the UE and the corresponding positions of the connected gNBs were imported into Siradel to acquire the 5G measurements. A NovAtel tactical-grade IMU/GNSS positioning solution was placed on a vehicle to collect the trajectory; the vehicle was driven over a distance of \(9km\) inside Toronto's downtown region for approximately \(1\) hour and \(13\) minutes, as seen in Fig.5. Additionally, gNBs were placed along the trajectory, \(250m\) apart, and at a \(4m\) lateral distance from the UE's trajectory of motion. Lastly, the carrier frequency was set at \(28GHz\) with a \(400MHz\) bandwidth to obtain mmWave transmissions. The BS was equipped with an \(8x1\) ULA while the UE had access to an omnidirectional antenna.
outages. On the other hand, the percentage of false negatives represents the percentage of higher-order reflections being wrongly classified as SBRs, which will cause positioning errors. Overall, the trained model was able to achieve a validation accuracy of \(99.5\%\) and a testing accuracy of \(99.6\%\), resulting in a highly accurate model. Additionally, the close proximity of the two accuracy scores suggests that the model is not overfitting to the training data.
The positioning performance of the LoS and NLoS 5G measurements is evaluated by plotting the cumulative distribution function (CDF) of the positioning error, as seen in Fig.7. The positioning error statistics for both 5G-based hybrid positioning using Eq. (2) and SBR-based positioning, as described in section II.2, are summarized in Table 1. The presented results show the variation between the closest gNB, denoted gNB1, which has a high probability of LoS connectivity, and the second closest gNB, denoted gNB2, which has a lower chance of LoS connectivity. Overall, it is clear that SBR-based positioning has almost identical error statistics to LoS-based positioning while employing gNB1 measurements, which can maintain sub-\(30cm\) accuracy for \(97\%\) of the time. However, it is evident that the maximum error of positioning based on SBR is significant. This is because SBRs are not always available, even in dense urban areas, forcing the method to rely on higher-order reflections, resulting in incorrect UE position computation. When analyzing the positioning error of the second closest gNB, SBR-based positioning exhibits superior results, as it can sustain sub-\(30cm\) error for \(87\%\) of the time compared to merely \(74\%\) for the positioning scheme utilizing LoS signals. This is because LoS signals become less likely as we move away from the base station, yet multipath reflections remain available. Fig. 8 shows a closer look at the high-error end of the CDF. It can be seen that while SBR positioning outperforms LoS positioning at low error ranges, its errors during outages are much higher compared to LoS-based positioning. This proves that relying solely on higher-order reflections for positioning could cause serious positioning errors. Hence, the fusion with LoS measurements is crucial.
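For reference, the CDF curves and the sub-threshold percentages reported in Table 1 follow from the per-epoch 2D errors as in the generic sketch below (our own code, not the authors').

```python
import numpy as np

def error_cdf(p_est, p_true):
    """Empirical CDF of the 2D positioning error (N x 2 arrays)."""
    err = np.linalg.norm(np.asarray(p_est) - np.asarray(p_true), axis=1)
    x = np.sort(err)
    return x, np.arange(1, x.size + 1) / x.size

def frac_below(p_est, p_true, threshold):
    """Fraction of epochs with error below `threshold` (0.3 for sub-30cm)."""
    err = np.linalg.norm(np.asarray(p_est) - np.asarray(p_true), axis=1)
    return float(np.mean(err < threshold))
```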
A close-up sample of the positioning solutions of the LoS and SBR-based positioning schemes, for both gNB1 and gNB2, is shown in Figs.9 and 10, respectively. In the event of 5G outages, multipath signals can bridge the gap while maintaining high positioning accuracy. This demonstrates the better availability of SBR signals over LoS in dense urban areas.
In rare cases, the number of SBRs was not sufficient to enable multipath positioning, causing erroneous positioning solutions, as seen in Fig. 11. In the meantime, LoS communication with the UE exists, which demonstrates the significance of fusing LoS and SBRs for more accurate, highly precise position estimates. SBR-based positioning is obviously intended to supplement LoS-based positioning rather than replace it. They are, however, investigated separately to demonstrate the potential of using multipath signals for positioning.
Figure 3: Block diagram of the proposed positioning solution.
## V Conclusion
In conclusion, we presented a novel method for accurately classifying multipath signals based on their order of reflection (i.e. LoS, single-bounce, double-bounce, etc.) using 5G channel parameters such as ToA, AoA, AoD, and RSS. Our proposed model was trained on a dataset of \(3.6\) million observations obtained from a quasi-real 5G simulator using an ensemble learning technique with bagging of \(14\) decision tree models. The model demonstrated its reliability as a RO classifier by achieving a validation accuracy of \(99.5\%\) and a testing accuracy of \(99.6\%\). A multipath positioning technique that employs channel parameters of two SBRs to determine the position of the UE was also used to demonstrate the effectiveness of the proposed classifier in a positioning setting. While using the proposed OoRI method, the positioning error was maintained below \(30\) cm for \(98.8\%\) of the time, which was an improvement over LoS positioning. To further demonstrate the trained model's capability, we investigated its positioning error while using measurements from the second closest gNB, which has a lower chance of LoS connectivity. The SBR-based positioning was observed to maintain sub-30 cm accuracy for \(87\%\) of the time, which significantly outperformed LoS positioning. The findings of this research have the potential to vastly improve existing positioning and mapping methods that rely solely on SBRs. Overall, this study lays a solid foundation for future research in multipath signal classification and positioning.
\begin{table}
\begin{tabular}{c c c c c} \hline \hline Error & \multicolumn{2}{c}{gNB1} & \multicolumn{2}{c}{gNB2} \\ Type & LoS & SBRs & LoS & SBRs \\ \hline RMS & \(6.3m\) & \(51m\) & \(45m\) & \(47m\) \\ Max & \(107m\) & \(4734m\) & \(735m\) & \(3124m\) \\ sub \(2m\) & \(97\%\) & \(99\%\) & \(82\%\) & \(87\%\) \\ sub \(1m\) & \(97\%\) & \(99\%\) & \(81\%\) & \(87\%\) \\ sub \(30cm\) & \(96.9\%\) & \(98.8\%\) & \(74\%\) & \(87\%\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: 2D positioning error statistics for 5G hybrid positioning vs. SBR-based positioning.
Figure 4: A snapshot of a segment from downtown Toronto, ON, on (a) Google Earth, vs. (b) Siradel simulation environment
Figure 5: Downtown Toronto Trajectory (Red), and 5G gNBs (Yellow circles).
Figure 6: Validation confusion matrix of the trained model.
Figure 7: CDF of the 2D positioning errors of LoS vs. SBR-based positioning.
Figure 8: Zoomed CDF of the 2D positioning errors of LoS vs. SBR-based positioning.
Figure 9: Close-up scenario that showcases the capability of multipath positioning accuracy during a 5G outage while the UE is connected to the closest BS (gNB1).
Figure 10: Close-up scenario that showcases the capability of multipath positioning accuracy during a 5G outage while the UE is connected to the second closest BS (gNB2).
Figure 11: A close-up scenario in which LoS measurements are available but sufficient SBRs are not.
## Acknowledgements
This research is supported by grants from the Natural Sciences and Engineering Research Council of Canada (NSERC) under grant numbers: RGPIN-2020-03900 and ALLRP-560898-20.
|
2310.09640 | Clockwork Neutrinogenesis: Baryogenesis from theory space | We propose a minimal clockwork model to illustrate the possibility of
baryogenesis via leptogenesis in a theory space setting. The standard lepton
sector is augmented with three copies of a clockwork lattice made of SM neutral
fermions. The two boundaries of these one-dimensional lattices are endowed with
couplings to the SM leptons and three dark sector fermions, respectively. Small
neutrino masses and a resonance enhanced Dirac leptogenesis are naturally
obtained for anarchic textures of the Yukawa matrices, with $\sim O(1)$
elements, provided the heavy clockwork fermions have masses $\gtrsim O(10 \,
\mbox{TeV})$. | Suvam Maharana, Tripurari Srivastava | 2023-10-14T18:44:05Z | http://arxiv.org/abs/2310.09640v1 | # Clockwork Neutrinogenesis: Baryogenesis from theory space
###### Abstract
We propose a minimal clockwork model to illustrate the possibility of baryogenesis via leptogenesis in a theory space setting. The standard lepton sector is augmented with three copies of a clockwork lattice made of SM neutral fermions. The two boundaries of these one-dimensional lattices are endowed with couplings to the SM leptons and three dark sector fermions, respectively. Small neutrino masses and a resonance enhanced Dirac leptogenesis are naturally obtained for anarchic textures of the Yukawa matrices, with \(\sim\mathcal{O}(1)\) elements, provided the heavy clockwork fermions have masses \(\gtrsim\mathcal{O}(10\,\mathrm{TeV})\).
## I Introduction
The origins of the apparent lightness of the neutrinos, as well as of the observed baryon asymmetry of the Universe (BAU), remain elusive. This has elicited a myriad of investigations over the years towards combined explanations of neutrino masses and the BAU, most popularly through the mechanism of baryogenesis via leptogenesis [1; 2; 3; 4]. While it is technically natural to have small neutrino masses within the Standard Model (SM) Lagrangian (augmented by \(\nu_{R}\)'s), it requires assuming couplings with the Higgs field orders of magnitude smaller than those encountered for the charged fermions. Furthermore, the resulting CP violation in the neutrino sector would be insufficient for successful leptogenesis. Several models have been proposed to address neutrino masses and leptogenesis in unison (neutrinogenesis), the most popular ones being those employing the seesaw mechanism or radiative mass generation (see e.g. [5; 6; 7] for review). An interesting alternative can also be found in braneworld scenarios where hierarchical masses and lepton asymmetry are generated through a geometry-induced exponential warping -- e.g., having neutrinos in the bulk of a 5D Randall-Sundrum geometry [8; 9].
In this work, we veer off the usual route and investigate neutrino masses and leptogenesis on a unified footing from a theory space perspective. Such an analysis, to the best of our knowledge, has not been carried out before. To this end, the _clockwork_ (CW) paradigm [10; 11; 12] provides an attractive framework to naturally generate hierarchical couplings in a theory with \(\mathcal{O}(1)\) parameters through localisation effects in the theory space. Although neutrino models based on the clockwork mechanism have been studied before [13; 14; 15], they were primarily concerned with the mass hierarchies and mixings and affairs related to baryogenesis were left unaddressed. These models typically invoke a chain of Weyl fermions, all neutral under the SM gauge group, which define a lattice in the theory space. Each clockwork fermion mixes with its nearest neighbour(s) with a weight \(q\). The SM neutrinos then couple to one of the right-handed lattice fermions, or, in the language of theory spaces, are localised at one of the _sites_ (minimally via the SM Higgs). The mechanism then dictates that the effective Yukawa couplings are suppressed by a factor \(q^{-N}\), where \(N\) denotes the number of sites. The mass spectrum of such models would consist of a band of closely separated heavy neutral fermions in addition to the light neutrinos.
We demonstrate the possibility of baryogenesis in a theory space setting with a minimal scenario based on the CW mechanism to generate small neutrino masses. This is achieved with a completely anarchic Higgs Yukawa matrix in the neutrino sector comprising nearly \(\mathcal{O}(1)\) entries. Prompted by the absence of any evidence suggesting a Majorana nature of the neutrinos, we adopt the mechanism of Dirac leptogenesis [16; 17; 18] for our study. This requires the introduction of three fermionic clockwork chains, one for each flavor of the SM neutrinos, and three flavors of a new light neutral fermion to ensure an observable CP violation. To avoid unnecessary inter-flavor hierarchies in the heavy CW spectrum, a simple \(\mathbb{Z}_{3}\) exchange symmetry is imposed among the three flavors, which is broken solely by the Yukawa couplings of the clockwork sector to the light fermions (SM leptons and the extra fermions). The required CP asymmetry for leptogenesis is then obtained by virtue of a resonance enhancement effected by the resulting pseudo-degenerate masses of the heavy neutral fermions. Remarkably, the required mass-splitting for the resonance condition is quite naturally obtained from the explicit breaking of the flavor symmetry via radiative corrections to the fermion mass terms. However, this is true only when the out-of-equilibrium dynamics of leptogenesis concludes before electroweak symmetry breaking (EWSB) occurs. With this constraint we find that the heavy neutrinos must have masses \(\gtrsim\mathcal{O}(10\,\mathrm{TeV})\).
## II Model
### Clockwork with fermions
We define the CW theory space with \(N\) copies of left and \(N+1\) copies of right-handed Weyl fermions -- all singlets under the SM gauge group -- and the Lagrangian,
\[\mathcal{L}_{CW}=\mathcal{L}_{kin}-m\sum_{j=0}^{N-1}\bar{\psi}_{Lj}\left(\psi_{R_ {j}}-q\psi_{R_{j+1}}\right)+\text{h.c.}\,. \tag{1}\]
Here, \(m\) is a mass parameter and \(q\) is a dimensionless parameter which enables the CW mechanism for values \(q>1\)[19]. The combination \(mq\), therefore, approximates the characteristic scale of the physical spectrum for large \(q\). The near-neighbor mixing terms break the full chiral symmetry \(U(N)_{L}\times U(N+1)_{R}\) of the kinetic terms into a residual factor \(U(1)_{R}\), meaning that the physical spectrum contains one massless right-handed state. The massive eigenvalues of the spectrum, on the other hand, are given by,
\[m_{n}=m\sqrt{1+q^{2}-2q\cos\frac{n\pi}{N+1}}\sim mq\quad(n>0). \tag{2}\]
With the transformation between the unphysical basis \((\psi)\) and the mass basis \((\Psi)\) defined as \(\Psi_{n}=\sum_{j}a_{nj}\psi_{j}\), the rotation matrices (for large \(N\)) are given by,
\[\begin{split} a_{0j}^{R}&\sim q^{-j},\quad a_{nj}^ {L}\sim\sqrt{\frac{2}{N}}\sin\frac{(j+1)n\pi}{N+1},\\ a_{nj}^{R}&\sim\sqrt{\frac{2m^{2}}{Nm_{n}^{2}}} \left(q\sin\frac{jn\pi}{N+1}-\sin\frac{(j+1)n\pi}{N+1}\right).\end{split} \tag{3}\]
This clearly shows that the right-handed massless state \(\Psi_{R0}\) is localised towards the \(j=0\) site, thereby illustrating the clockwork mechanism. The massive modes, on the other hand, are delocalised over the lattice with roughly \(\mathcal{O}(1)\) weights.
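As a quick numerical sanity check of Eqs. (2) and (3), one can build the mass matrix implied by Eq. (1) and inspect its singular value decomposition; the sketch below uses an illustrative lattice of \(N=10\) sites so that the \(q^{-N}\) suppression stays well above machine precision.

```python
# Numerical check of the clockwork spectrum (Eq. (2)) and of the
# zero-mode localisation (Eq. (3)).  The mass matrix of Eq. (1) is
# M_{jk} = m (delta_{jk} - q delta_{j+1,k}), of size N x (N+1).
import numpy as np

m, q, N = 1.0, 4.0, 10                 # illustrative values
M = np.zeros((N, N + 1))
for j in range(N):
    M[j, j], M[j, j + 1] = m, -m * q

_, s, Vh = np.linalg.svd(M)            # singular values = massive spectrum
zero_mode = Vh[-1]                     # right-handed null vector of M

n = np.arange(1, N + 1)
analytic = m * np.sqrt(1 + q**2 - 2 * q * np.cos(n * np.pi / (N + 1)))
print("max spectrum mismatch:", np.max(np.abs(np.sort(s) - np.sort(analytic))))
print("zero-mode suppression:", abs(zero_mode[N] / zero_mode[0]))
print("expected q**(-N)     :", q**(-N))   # ~ 9.5e-7 for q = 4, N = 10
```

The null vector satisfies \(v_{j+1}=v_{j}/q\), so its last component is suppressed by exactly \(q^{-N}\) relative to the \(j=0\) site, which is the clockwork mechanism in matrix form.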
### A model for neutrinos and leptogenesis
For a minimal realisation of neutrino masses and leptogenesis, we introduce three copies of the aforementioned SM singlet clockwork chains -- one for each flavor of the SM neutrinos -- and three flavors of additional neutral fermions \(\chi_{L,R}\) along with a SM singlet real scalar \(\Phi\). The clockwork sector then interacts with both the SM lepton doublets \(L\) and the \(\chi\)'s through the following lepton number conserving Lagrangian,
\[\begin{split}&\mathcal{L}_{CW-SM}=\sum_{\beta=1}^{3}\mathcal{L}_{CW}^{( \beta)}-\sum_{\alpha,\beta}\lambda_{1\alpha\beta}\bar{L}_{\alpha}\tilde{H} \psi_{RN}^{\beta}\\ &-\sum_{\alpha,\beta}\lambda_{2\alpha\beta}\Phi\,\bar{\psi}_{L 0}^{\beta}\,\chi_{R\alpha}-\sum_{\alpha,\beta}\lambda_{\alpha\beta}^{\chi} \Phi\,\chi_{L\beta}\,\chi_{R\alpha}+\text{h.c.}\,,\end{split} \tag{4}\]
where \(\alpha,\beta\in[1,3]\) are the flavor indices. Here, \(\tilde{H}\) has the usual definition \(\tilde{H}=i\sigma_{2}H^{*}\). An exchange symmetry among the different flavors in the clockwork sector is introduced, namely, a symmetry under the \(\mathbb{Z}_{3}\) transformations \(\psi_{\alpha}\leftrightarrow\psi_{\beta}\), only to be broken by the flavor mixing Yukawa couplings. This simply means that the set of parameters \((m,\,q,\,N)\) assume the same values for all the flavors. It will be shown shortly that the explicit \(\mathbb{Z}_{3}\) breaking also leads to a loop-induced mixing between the CW flavors which ensures that they are quasi-degenerate with just the right magnitude of mass-splitting so as to realise resonant leptogenesis. From a theory space perspective, then, the SM fields are localised at the site \(j=N\), whereas \(\chi\)'s and \(\Phi\) are localised at \(j=0\) (see Fig.1 for an illustration) [20]. The two \(3\times 3\) complex matrices \(\lambda_{1,2}\) stipulate the flavor mixing Yukawa interactions which break the residual chiral symmetry in each clockwork chain. As a result, the lightest of the right-handed CW modes pairs with a left-handed SM neutrino to get a Dirac mass. While \(\lambda_{1}\) is responsible for the active neutrino masses, \(\lambda_{2}\) facilitates CP violation and, hence, leptogenesis, provided it does not commute with \(\lambda_{1}\). Now, to ensure that the active neutrino masses are not significantly affected by terms other than the \(\lambda_{1}\) interaction, we restrict terms like \(\Phi\psi_{R0}\chi_{L}\) by introducing a \(\mathbb{Z}_{2}\) symmetry under which only the fields \(\chi_{R}\) and \(\Phi\) have an odd parity. The \(\lambda^{\chi}\) couplings, on the other hand, largely concern the dynamics of the \(\chi\) sector alone. For one, the scalar potential for \(\Phi\) is assumed to be such that it acquires a nonzero vacuum expectation value (VEV) \(\langle\Phi\rangle=v_{\Phi}\) lending masses to \(\chi\)'s -- \(m_{\chi}\sim\lambda_{3}v_{\Phi}\). For brevity we assume \(\lambda^{\chi}\) to be diagonal. We stress here that consistency with the argument that \(H\) and \(\Phi\) are localised at different CW sites demands that they do not interact with each other. Therefore, a typical form of the scalar potential would be [21],
\[V_{H,\Phi}=m_{H}^{2}H^{2}+m_{\Phi}^{2}\Phi^{2}+\lambda_{H}\left(H^{\dagger}H \right)^{2}+\lambda_{\Phi}\Phi^{4}. \tag{5}\]
Since \(\Phi\) and \(\chi\)'s couple only feebly with the SM, mediated by the heavy CW fermions, the lightest among them would typically be rendered stable over cosmological timescales. With this in mind, we discuss in a subsequent section the viability of a dark matter candidate in the model. In the minimal setup, however, we find that only a significantly hot dark matter (\(\Phi\)) may exist, which would, ostensibly, be in conflict with structure formation. Thus, it can only constitute a small fraction of the total thermal relic, which is naturally ensured in our scenario for \(m_{\chi}\sim m_{\Phi}<\mathcal{O}(1\,\text{MeV})\). This point is further elucidated in a later section. As concerns leptogenesis, it suffices to say that the dark sector particles (\(\chi\) and \(\Phi\)) are substantially lighter than the CW neutrinos.
Post EWSB, the active neutrino masses and the Yukawa matrix in the flavour basis are related by the bi-unitary transformation,
\[U_{L}.m_{D}^{\nu}.U_{R}^{\dagger}\sim\frac{q^{-N}v_{H}}{\sqrt{2}}\lambda_{1}, \tag{6}\]
where \(v_{H}\) is the Higgs VEV, \(U_{L,R}\) are the transformation matrices for the left and right handed fields, respectively, and \(m_{D}^{\nu}\) denotes the diagonal neutrino mass matrix. With the usual assumption that the charged leptons are flavor diagonal, \(U_{L}\) becomes the conventional Pontecorvo-Maki-Nakagawa-Sakata (PMNS) matrix parametrised with three mixing angles and a phase. The matrix \(U_{R}\), on the other hand, has nine independent parameters as dictated by unitarity. The factor \(q^{-N}\) corresponds to the overlap of the \(n=0\) right-handed CW fermion with the unphysical field \(\psi_{RN}\). A straightforward scan of the CW parameters shows that the Higgs Yukawa \(\lambda_{1}\) can have \(\mathcal{O}(1)\) entries while being consistent with the oscillation data [22] for \(q>2\) and \(N\sim\mathcal{O}(10)\). However, one ought to examine whether such a scenario is adequate for baryogenesis as well. We show in the ensuing sections that this is indeed true.
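A back-of-the-envelope rearrangement of Eq. (6) shows how modest the required lattice is: since the suppression is exponential in \(N\), the number of sites grows only logarithmically with the mass hierarchy. The target mass scale used below is an illustrative choice, not a fit to oscillation data.

```python
# Back-of-the-envelope check of Eq. (6): m_nu ~ q**(-N) * lambda * v_H / sqrt(2).
# The target neutrino mass scale below is an illustrative choice.
import numpy as np

v_H = 246e9            # Higgs VEV in eV
lam = 1.0              # anarchic O(1) Yukawa entry
m_target = 5e-2        # eV, roughly the atmospheric mass scale

for q in (2, 3, 4):
    N = np.log(lam * v_H / (np.sqrt(2) * m_target)) / np.log(q)
    print(f"q = {q}: N ~ {N:.0f} sites")
# prints N ~ 42, 26, 21; i.e. q > 2 with N ~ O(10) sites, as in the text
```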
## III Clockwork Leptogenesis
With the flavor exchange symmetry being broken by the couplings \(\lambda_{1,2}\), the CW fermions of different flavors mix radiatively. The contributing diagrams, before EWSB, are as shown in Fig. 2. The induced mixing at the CW level (\(n\)) is characterised by the loop factor
\[\delta_{n}^{(\alpha,\beta)}\sim\frac{1}{16\pi^{2}}\sum_{\gamma}\Big[2\,\lambda_{1\alpha\gamma}\lambda^{*}_{1\beta\gamma}+\cdots\Big]\,. \tag{7}\]
### Mechanism for leptogenesis
At very high temperatures (\(T\gtrsim m_{\Psi}\)), the heavy CW fermions are, understandably, in thermal equilibrium with the SM sector for \(\mathcal{O}(1)\) couplings. Then, the foremost requirement for a lepton number preserving (Dirac) leptogenesis is to ensure that the asymmetry generating decays cease to equilibrate with the corresponding inverse decays before \(B+L\) violating sphaleron transition rates begin to fall at the threshold temperature \(T_{sp}\sim 150\) GeV [24; 25]. In our case this implies that the heavy neutrinos must go out of equilibrium well above \(T_{sp}\), following which all decays would contribute to the total lepton asymmetry, to be subsequently converted to baryon asymmetry through sphaleron transitions. Typically, Dirac leptogenesis models contain suppressed couplings for the decay vertices due to their proportionality to the neutrino masses. This leads to a weak washout scenario where the heavy particles decouple from the thermal bath at temperatures not far from their masses, i.e. at \(z_{D}\equiv m/T_{D}\sim\mathcal{O}(1)\), where \(T_{D}\) is the decoupling temperature. For our model, however, the heavy fermion dynamics is practically decoupled from the neutrino mass generation which results in unsuppressed effective vertices for the decay processes. In this case the wash-out factor defined at the temperature \(T\sim m_{\Psi}\) is,
\[\mathcal{K}_{i\alpha}\equiv\frac{\Gamma\left(\Psi_{i}\to L_{\alpha}H \right)}{\mathcal{H}(z=1)}\gg 1, \tag{12}\]
where \(z\equiv m_{\Psi}/T\) and \(\mathcal{H}\) is the Hubble parameter. Such an extreme wash-out suggests late decoupling for the parent neutrinos. Therefore, to avoid spoiling the resonance condition, they need to be heavy enough so as to reach the out-of-equilibrium state before the electroweak phase transition (EWPT) occurs. This sets a lower bound \(m_{\Psi}\gtrsim\mathcal{O}(10\,\text{TeV})\) on the CW neutrinos. Since the decays to SM leptons are the dominant processes, roughly the entire \(\Psi\) abundance is converted to lepton number once the washout rate is such that \(\Gamma\lesssim\mathcal{H}\). In this case a conservative estimate of the total lepton asymmetry (produced by the \(i\)-th generation of the CW fermions once they decouple) is given by,
\[Y^{i}_{\Delta L}\approx\sum_{\alpha}\epsilon^{(L)}_{i\alpha}\text{Br}\left( \Psi_{i}\to L_{\alpha}H\right)Y^{i}_{\Psi,eq}(z^{*}). \tag{13}\]
Here, \(Y^{i}_{\Psi,eq}(z^{*})\) is the equilibrium density of \(\Psi^{i}\) at the epoch \(z=z^{*}\) when it departs from its state of equilibrium with the SM leptons. Taking cue from [26], the value of \(z^{*}\) can be estimated by the relation
\[z^{*}=\log\sum_{\alpha}\mathcal{K}_{i\alpha}+5\ln\left(z^{*}/2\right), \tag{14}\]
as the point of maximum contribution to \(Y_{\Delta L}\) obtained by employing the steepest-descent method to evaluate the approximate integral equation,
\[Y^{i}_{\Delta L}\approx\sum_{\alpha}\epsilon_{i\alpha}\int_{0}^{\infty}dz\,\frac{K_{1}(z)}{4g^{*}}\,z^{2}\,e^{-\int_{z}^{\infty}dz^{\prime}\left[z^{\prime\,3}/2\right]K_{1}(z^{\prime})\,\mathcal{K}_{i\alpha}}\,. \tag{15}\]
Here, \(K_{1}\) is a modified Bessel function of the second kind and \(g^{*}\) denotes the effective number of relativistic degrees of freedom. For our scenario this engenders a solution \(z^{*}\sim 20\).
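Eq. (14) is a simple fixed point that can be solved by iteration. In the sketch below the total washout factor is an assumed input (the benchmark value is not quoted here), and the natural logarithm is assumed for the first term; washout factors in the \(10^{3}\)-\(10^{5}\) range land in the same ballpark as the quoted \(z^{*}\sim 20\).

```python
# Fixed-point iteration for Eq. (14): z* = log(sum_K) + 5 ln(z*/2).
# The washout factors K below are assumed illustrative values, and the
# natural log is assumed for the first term.
import numpy as np

def z_star(K_tot, z0=10.0, tol=1e-10, max_iter=200):
    z = z0
    for _ in range(max_iter):
        z_new = np.log(K_tot) + 5.0 * np.log(z / 2.0)
        if abs(z_new - z) < tol:
            break
        z = z_new
    return z

for K in (1e3, 1e4, 1e5):
    print(f"K = {K:.0e}: z* = {z_star(K):.1f}")
# gives z* ~ 18, 21, 24: the ballpark quoted in the text
```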
For brevity, we summarise here the results of one benchmark case which goes to establish the possibility of leptogenesis in the minimal fermionic CW model. Assuming normal hierarchy for the active neutrinos with the lightest mass \(m_{0}^{\nu}=10^{-3}\,\text{eV}\), we find that the following texture for the Yukawa matrix \(\lambda_{1}\sim\lambda_{2}\),
\[\left(\begin{array}{ccc}0.129-0.129\,i&-0.035+0.116\,i&-0.148+0.218\,i\\ -0.116+0.035\,i&0.851-0.851\,i&0.566-0.574\,i\\ -0.218+0.148\,i&0.574-0.566\,i&0.661-0.661\,i\end{array}\right)\]
is consistent with the neutrino oscillation data for the CW parameters \(N=25\), \(q=4\) and \(m_{\Psi}\sim mq=14\) TeV. This also generates a total lepton asymmetry \(|Y_{\Delta L}|=1.2\times 10^{-10}\) which lies in the right ballpark to generate the correct baryon asymmetry through sphaleron transitions. Similar examples also exist for the inverted hierarchy scenario, e.g. with \(m_{0}^{\nu}=10^{-5}\,\text{eV}\) and all other parameters having values similar to those for the preceding case.
## IV The dark sector
From the minimal setup for leptogenesis in the CW paradigm, we see the necessary existence of additional neutral fermions \(\chi_{\alpha}\) coupling to the CW sector through the scalar \(\Phi\). As previously mentioned, the lightest of these extra particles would be stable over the age of the Universe if \(m_{\chi,\Phi}<m_{h}\), where \(m_{h}\) is the SM Higgs mass. The case for \(\chi\)'s as DM is readily ruled out from overclosure as the dominant process for annihilation to the visible sector, \(\chi_{i}\Phi\to\nu h\), mediated by \(\Psi\)'s, is overly suppressed. E.g., for the range of \(\lambda_{1,2}\) values consistent with leptogenesis and \(m_{\Psi}\sim\mathcal{O}(10\,\text{TeV})\), \(\langle\sigma v\rangle_{max}\sim 10^{-31}\) cm\({}^{3}\)s\({}^{-1}\). The scalar \(\Phi\), on the other hand, can saturate the relic abundance via \(3\to 2\) number changing processes. In this case, the SIMPy DM mass can be estimated to be \(\mathcal{O}(10-100\,\text{MeV})\) for \(\lambda_{\Phi}\sim\mathcal{O}(1)\)[27]. However, the secluded nature of the dark sector results in an extremely suppressed DM-SM interaction through the CW portal. Therefore, \(\Phi\) as DM would struggle to be in kinetic equilibrium with the SM as it approaches freeze-out and, as a consequence, would heat up excessively. Such a scenario is severely constrained from the usual dynamics of structure formation which suggests that the \(\Phi-\chi\) system can only constitute a small fraction of the total DM abundance. This is easily achieved in the minimal model for \(\lambda_{\chi,\Phi}\sim\mathcal{O}(1)\), \(m_{\Phi}\lesssim m_{\chi}<\mathcal{O}(1\,\text{MeV})\) which enhances the rates for \(\chi\chi\to\Phi\Phi\) and \(3\Phi\to 2\Phi\) by the factors \(\frac{\langle\sigma v\rangle}{\langle\sigma v\rangle_{0}}\Big{|}_{2\to 2}\gtrsim 10^{2}\) and \(\frac{\langle\sigma v^{2}\rangle}{\langle\sigma v^{2}\rangle_{0}}\Big{|}_{3\to 2}\gtrsim 10^{5}\), respectively, where \(\langle\sigma v\rangle_{0}\) and \(\langle\sigma v^{2}\rangle_{0}\) denote typical threshold values for the thermally averaged annihilation cross-sections required to obtain the correct relic abundance. Clearly, this would deplete the \(\chi-\Phi\) abundances to a negligible percentage of the required DM yield. With the dynamics of leptogenesis being virtually independent of the details of the dark sector (except for the masses), it might be possible to juxtapose the basic construction proposed here with an extended dark sector, ostensibly with a heavy DM candidate of mass \(\sim\mathcal{O}(m_{\Psi})\) so as to have enhanced DM-SM interactions. Such explorations, however, are beyond the purview of this work.
Figure 2: _One-loop interflavor mixing._
Figure 3: _Decay channels of the heavy CW neutrinos: (a) Tree diagram and (b) Self-energy correction._
## V Discussion and conclusion
We have presented a minimal model within the clockwork paradigm where the correct lepton asymmetry for baryogenesis is naturally produced, while accounting for the small active neutrino masses, with nearly \(\mathcal{O}(1)\) and anarchic Yukawa couplings. It has a few distinct, yet interesting, features. For one, the CW mechanism stipulates that couplings of the heavy states to the leptons are roughly of the order of the Yukawa matrix elements, in stark contrast with the seesaw based _neutrinogenesis_ scenarios where the effective couplings are proportional to the neutrino masses. Therefore, for Dirac leptogenesis where self-energy corrections are the dominant higher order contributions, \(\mathcal{O}(1)\) Yukawa couplings warrant a resonant enhancement in the CP asymmetry to counter the characteristically strong washout effects. Secondly, the mass-splittings necessary for resonance are achieved quite generally within the model framework for random values of the Yukawa couplings, through loop-induced mixings, which break the flavor exchange symmetry in the CW sector explicitly. The resonance condition, in turn, demands a lower limit on the heavy fermion masses at \(\sim\mathcal{O}(10\leavevmode\nobreak\ \text{TeV})\). An immediate consequence of this limit is that the model trivially evades constraints from charged LFV processes \(\ell_{i}\to\ell_{j}\gamma\), \(\ell_{i}\to\ell_{j}\ell_{k}\ell_{l}\). On a related note, the CW scale in the model is clearly beyond the reach of the LHC as well as some of its upcoming derivatives. However, it could be probed in the future at some of the proposed energy frontier experiments, e.g. at the HE-LHC, FCC and the multi-TeV muon collider.
In conclusion, we would like to remark here that the generation of hierarchically suppressed couplings or mass scales is a generic feature of the clockwork mechanism which has led to its application in phenomenology largely in that context alone. Importantly, in most models based on the CW mechanism the dynamical heavy degrees of freedom (CW _gears_) had only a passive role in the phenomenology discussed therein [28; 29; 30]. In this work, we have shown a more active, indeed pivotal, role for the CW gears in the context of leptogenesis. This not only paves the way for further explorations related to BAU within the CW framework, but also in other theory space constructions based on a localisation mechanism, e.g. [31]. To this effect, a model for leptogenesis based on the _closed clockwork_ topology [32] will be presented in [33].
## VI Acknowledgement
We thank Debajyoti Choudhury for the illuminating discussions and his feedback on the manuscript. S.M. acknowledges research Grant No. CRG/2018/004889 of the SERB, India. T.S. would like to acknowledge the support from the Dr. D.S. Kothari Postdoctoral fellowship scheme no. F.4-2/2006 (BSR)/PH/20-21/0163.
|
2306.14691 | Tunable Synaptic Working Memory with Volatile Memristive Devices | Different real-world cognitive tasks evolve on different relevant timescales.
Processing these tasks requires memory mechanisms able to match their specific
time constants. In particular, the working memory utilizes mechanisms that span
orders of magnitudes of timescales, from milliseconds to seconds or even
minutes. This plentitude of timescales is an essential ingredient of working
memory tasks like visual or language processing. This degree of flexibility is
challenging in analog computing hardware because it requires the integration of
several reconfigurable capacitors of different size. Emerging volatile
memristive devices present a compact and appealing solution to reproduce
reconfigurable temporal dynamics in a neuromorphic network.
We present a demonstration of working memory using a silver-based memristive
device whose key parameters, retention time and switching probability, can be
electrically tuned and adapted to the task at hand. First, we demonstrate the
principles of working memory in a small scale hardware to execute an
associative memory task. Then, we use the experimental data in two larger scale
simulations, the first featuring working memory in a biological environment,
the second demonstrating associative symbolic working memory. | Saverio Ricci, David Kappel, Christian Tetzlaff, Daniele Ielmini, Erika Covi | 2023-06-26T13:36:06Z | http://arxiv.org/abs/2306.14691v1 | # Tunable Synaptic Working Memory with Volatile Memristive Devices
###### Abstract
Different real-world cognitive tasks evolve on different relevant timescales. Processing these tasks requires memory mechanisms able to match their specific time constants. In particular, the working memory utilizes mechanisms that span orders of magnitudes of timescales, from milliseconds to seconds or even minutes. This plentitude of timescales is an essential ingredient of working memory tasks like visual or language processing. This degree of flexibility is challenging in analog computing hardware because it requires the integration of several reconfigurable capacitors of different size. Emerging volatile memristive devices present a compact and appealing solution to reproduce reconfigurable temporal dynamics in a neuromorphic network.
We present a demonstration of working memory using a silver-based memristive device whose key parameters, retention time and switching probability, can be electrically tuned and adapted to the task at hand. First, we demonstrate the principles of working memory in a small scale hardware to execute an associative memory task. Then, we use the experimental data in two larger scale simulations, the first featuring working memory in a biological environment, the second demonstrating associative symbolic working memory.
## 1 Introduction
The short-term storage of information, one of the main properties of working memory (WM), is at the forefront of most cognitive abilities in humans and animals [1, 2]. Tasks that involve WM include visual processing, speech comprehension, and episodic planning; thus, impairment of WM results in a partial or complete loss of these abilities [3, 4]. Several models have been suggested to explain the physiological mechanisms that underlie the WM in the brain. Today it is becoming increasingly evident that a mixture of specialized mechanisms enables an ensemble of versatile memory systems in the brain that cover memory durations from several seconds to minutes [4, 5].
One such mechanism for WM is based on short-term dynamics of synapses, i.e., short-term plasticity (STP). Figure 1 illustrates the synaptic model of WM. When a new memory item is stored in the WM network, an ensemble of neurons that represents the item is activated and synaptic strengths between these neurons are transiently potentiated. This transient increase in synaptic strength is caused by short-term mechanisms that put the synapse in a state of higher effectiveness [2, 5]. This state of high effectiveness supports the retrieval of the stored memory item. The transient decay of the synaptic strengths and, thus, of the memory items makes the WM receptive to new content.
Various mechanisms for WM have also been introduced in modern machine learning (ML) to improve performance in complex sequence processing tasks, including natural language [6, 7, 8, 9]. Indeed, similar to the WM, recurrent neural networks (RNNs) can perform associative memory and recall stored information thanks to internal feedback loops that ensure the persistence of data. In recent studies, recurrency was demonstrated to be a crucial aspect in, e.g., grammar learning [10]. In contrast to the feedforward architectures often used in ML, RNNs inherently possess the computational properties needed to represent long-term dependencies in sequential data, albeit at the cost of a higher computational load [11, 12].
Network architectures such as long short-term memory (LSTM) and gated recurrent units (GRUs) try to optimize the use of hardware resources while still exploiting recurrency [13]. However, these solutions are still expensive in terms of hardware area and number of operations. Bespoke hardware optimizing resources in terms of power consumption, area, and computational workload could therefore facilitate the implementation of RNNs. At present, complementary metal oxide semiconductor (CMOS)-only solutions include digital standard hardware, e.g., graphic processing units (GPUs) and field programmable gate arrays (FPGAs), as well as analog or mixed-signal application specific integrated circuits (ASICs). Digital circuits usually imply a heavy computational workload and the need to wait for several clock cycles [14], while ASICs take a more energy-efficient approach by adopting subthreshold circuits that operate asynchronously [15]. However, the temporal dynamics are usually implemented by charging or discharging a capacitor with a constant current [16], thus consuming power. Furthermore, when dealing with biologically relevant time scales, the size of the capacitors becomes non-negligible.
Alternative approaches with better scaling and lower energy consumption are actively sought to enable neuromorphic circuits with bio-realistic WM [17]. In this respect, a promising technology is represented by memristive devices [18], namely two-terminal devices able to reversibly change their conductance upon the application of proper electrical stimuli. Memristive devices show a broad range of attractive properties, including high scalability, high read/program speed, high energy efficiency, and programming voltages comparable with the power supply of typical neuromorphic chips [19, 20]. While non-volatile memristive devices have already shown promising results when used as non-volatile synapses, volatile memristive devices that can keep track of recent neural activity and implement synapses with biologically compatible time constants are still largely unexplored. Indeed, volatile properties have so far mainly been investigated in reservoir computing applications [21, 22, 23].
Figure 1: Conceptual illustration of information storage in working memory. The neural network can store and recall features of an item. The volatile nature of the synapses allows the memory of the stored features to fade in time. After depletion of the memory, the features of a different item can be stored without significant interferences.
In this work, we use C / HfO\({}_{2}\) / Ag memristive devices featuring a high ON/OFF ratio of \(10^{8}\) and tunable time constants in the range of milliseconds to seconds [24, 25], two desirable features for our application. The switching mechanism relies on the formation and spontaneous dissolution of a silver conductive filament (CF) across the switching layer. The relaxation, or retention time, i.e., the time it takes for the CF to dissolve, has been shown to depend on the diameter of the CF itself: the thicker the filament, the longer the retention time [26, 27, 28]. The advantages of this technology are manifold: the retention times are electrically tunable and the time information is physically stored in the CF, thus consuming negligible power and area on the chip. Moreover, the probability to switch the device on is also electrically tunable, thus adding a further degree of freedom in controlling the dynamics of the WM. These features are exploited to demonstrate store and recall of patterns in a small-scale WM hardware; the system is then scaled up in simulations to demonstrate the use of volatile memristive devices in a large-scale biologically inspired model of synaptic WM and in an associative symbolic WM.
## 2 Results
### Volatile memristive device
The volatile synapse used in this work is a resistive switching C / HfO\({}_{2}\) / Ag memristive device (see fabrication details in _Materials and Methods_). After fabrication, the device is in its high resistive state (HRS) and can be switched to the low resistive state (LRS) by applying a quasi-static voltage sweep between 0 V and 1.5 V and back. Contrary to many filamentary devices, the proposed device does not require an electroforming operation (Fig. S1a), thus simplifying the electrical operation within the neuromorphic circuit. The maximum current flowing through the device (i.e., current compliance, I\({}_{\text{CC}}\)) is limited by applying a voltage to the gate of an NMOS transistor whose drain is connected to the bottom electrode of the memristive device (Fig. 2A, inset). The switching to the LRS occurs because, as the voltage exceeds a given _threshold voltage_ V\({}_{\text{T}}\), Ag ions migrate across the oxide layer and form the CF, thus bringing the device into the LRS (_set operation_, Fig. 2A) [27, 29, 30]. However, the filament self-sustains only in the presence of a voltage higher than a critical voltage referred to as the _hold voltage_ V\({}_{\text{H}}\). Below this value, the rediffusion of the Ag atoms in the dielectric layer causes the self-disruption of the filament and, as a consequence, the transition of the device to the HRS (_spontaneous recovery_, Fig. 2A). These operations are fairly reproducible (Fig. S1b). The two main features that enable the use of the proposed memristive device as a volatile synapse in working memory tasks are tunable volatility on biologically relevant time-scales and controllable switching probability.
The retention time is shown in Fig. 2B, and the median retention time increases exponentially with the current compliance, as illustrated in Fig. 2C. The device can be tuned to stay in the LRS for a time ranging from milliseconds to seconds depending on I\({}_{\text{CC}}\), which is the time-scale of interest for our application. The intrinsic stochasticity of the filament formation at the microscopic level gives rise to high variability in the distributions of the retention times [31].
The switching probability (P\({}_{\text{ON}}\)) of the device can instead be controlled by changing either the width or the amplitude of the voltage pulse applied to the device, as shown in Fig. 2D. It is worth noting that the device operates with voltages \(<\)3 V, which are compatible with standard CMOS technology and therefore suitable for integration with neuromorphic circuits. Fig. 2E shows the stochastic behavior of the device. Due to its stochastic nature, the threshold voltage of the device is slightly variable, which results in an electrically tunable switching probability: the higher the pulse voltage, the higher the switching probability [25]. Therefore, the desired switching probability of the device can be obtained by setting a suitable voltage amplitude. Moreover, when stimulated by a burst of identical pulses (Fig. S2), the switching probability of the device increases with the number of pulses, as demonstrated in Fig. 2F and S3. This effect can therefore be exploited to accelerate the storing or training phase in a network.
### Working memory
#### 2.2.1 Store and recall of features
The volatile behavior of the memristive device is used in a small-scale experiment of WM to store and recall features, as depicted in Fig. 3A. In our example, the network consists of 5 volatile synapses all connected to the same neuron (schematic in Fig. 3B). The current flowing to the neuron is the sum of the currents flowing through the stimulated devices and it depends on the resistive state of each device. The neuron is trained to recognize a color within a color stream. Each color is encoded by stimulating a different combination of three devices as in Fig. 3C. As a result, for each color only three devices out of five are switched on. At first, the target color is stored in the network by repeatedly stimulating the network with the combination of pulses that encodes the desired color. As a consequence, the corresponding stimulated devices will be switched on and set to their LRS. Afterwards, the network is fed with a stream of random stimuli, as shown in Fig. 3D (see also Fig. S4). When the stored item is presented to the network, all the stimulated devices are in LRS and therefore the current received by the neuron overcomes a threshold, thus triggering the firing of the neuron. The current threshold is set according to the current distributions shown in Fig. S5. After about 1 s without stimulation, all the devices switch off and the pattern is forgotten, as shown in Fig. S4. The correlation plot in Fig. 3E shows the difference between the expected and the measured current when a pattern is presented. The main contribution to the current is given by the synapses that are stimulated and in LRS, therefore three different current levels can be expected. It is shown that we can reach more than 90% accuracy, defined as the percentage of correct classifications (stored / non-stored pattern) during a test sequence.
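The store/recall protocol lends itself to a compact Monte-Carlo sketch with five stochastic, volatile synapses, shown below; the read currents and the lognormal retention spread are illustrative placeholders rather than calibrated device values.

```python
# Monte-Carlo sketch of the 5-synapse store/recall experiment.  Device
# parameters (ON/OFF read currents, lognormal retention spread) are
# illustrative placeholders, not calibrated values.
import numpy as np

rng = np.random.default_rng(1)
P_ON, T_RET, SIG = 0.05, 28e-3, 0.8        # P_ON = 5%, median retention 28 ms
I_ON, I_OFF = 10e-6, 0.1e-9                # assumed LRS/HRS read currents (A)
COLORS = {"red":   np.array([1, 1, 1, 0, 0], bool),
          "green": np.array([0, 1, 1, 1, 0], bool),
          "blue":  np.array([0, 0, 1, 1, 1], bool)}

def pulse(pat, t, t_off):
    """Stimulate the devices in `pat`: OFF devices switch ON with
    probability P_ON; devices already ON have their filament refreshed."""
    t_ret = rng.lognormal(np.log(T_RET), SIG, pat.size)
    hit = pat & ((rng.random(pat.size) < P_ON) | (t_off > t))
    return np.where(hit, t + t_ret, t_off)

def experiment(stored="red", f_stim=50.0, n_events=200):
    t, t_off, correct = 0.0, np.zeros(5), 0
    for _ in range(100):                   # store phase: burst of pulses
        t_off = pulse(COLORS[stored], t, t_off)
    for _ in range(n_events):              # recall phase: random colors
        t += 1.0 / f_stim
        probe = rng.choice(list(COLORS))
        t_off = pulse(COLORS[probe], t, t_off)
        i_neuron = np.where(COLORS[probe] & (t_off > t), I_ON, I_OFF).sum()
        fired = i_neuron > 2.5 * I_ON      # fires only if all 3 devices ON
        correct += fired == (probe == stored)
    return correct / n_events

print(f"recall accuracy: {experiment():.2f}")
```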
The retention time and the switching probability of each synaptic device are electrically tunable through the gate voltage of the transistor (which sets the current compliance) and the voltage amplitude of the pulse applied to the top electrode (TE) of the device, respectively. These features, together with the stimulation frequency of the network, determine the accuracy of the network in recognizing the color. As shown in Fig. 3F, for low P\({}_{\mathrm{ON}}\) (P\({}_{\mathrm{ON}}\) = 5%), an increase of the spike rate leads to an improvement of the accuracy because the time between two pulses is shorter than the retention time of the device. Instead, higher values of P\({}_{\mathrm{ON}}\) lead the network to experience a decrease in its accuracy because the probability that a non-active device is switched on is higher. Finally, the intrinsic volatility of the network results in a progressive forgetting of the stored element. Fig. 3G gives an estimation of how long the network remembers by showing the average mismatch current, defined as the difference between the expected current and the measured current. The two main errors that can occur over time, i.e., the network either fails to recognize the correct color or recognizes the wrong one, depend on the combination of stimulation frequency and P\({}_{\mathrm{ON}}\). Indeed, for high P\({}_{\mathrm{ON}}\) and stimulation frequency, devices that were supposed to be in their HRS turn to their LRS, thus increasing the measured current. The opposite combination, i.e., low P\({}_{\mathrm{ON}}\) and stimulation frequency, results in the switching off of devices previously in LRS, thus decreasing the measured current. The results suggest that a careful selection of P\({}_{\mathrm{ON}}\) based on the planned stimulation frequency is advisable to fine-tune the WM accuracy for a given task, as will be discussed further in Section 3.
Figure 2: Ag-based volatile memristive device characterization. (a) Sketch of the one-transistor / one-resistor (1T1R) RRAM device together with its working principle. The memristive device (1R) is based on a W / C / 10 nm HfO\({}_{2}\) / Ag stack. The RRAM shows a volatile behavior, i.e., a set operation together with a spontaneous switch off. (b) Time characterization of the retention of the filament. After a 5 V amplitude triangular pulse to switch the cell on with an I\({}_{\mathrm{CC}}\) = \(20\,\mathrm{\SIUnitSymbolMicro A}\), a constant reading voltage of -150 mV is applied to monitor the retention. (c) Retention time distributions at different compliance currents. The median value of the retention time increases with the compliance current. Inset: applied programming pulse. (d) Switching probability of the device for a single pulse as a function of the amplitude and the pulse duration of the programming pulse. Shorter pulses require higher voltage amplitudes to switch ON. (e) Probability of switching the RRAM to the LRS depending on the number of pulses and their amplitude (1 ms pulses). The circles are the experimental data while the solid lines are the fits. (f) Effect of the number of pulses and the voltage amplitude (1.6 V) on the switching of the device. The switching of the device is stochastic. Considering a group (burst) of pulses, the probability that the device is in the LRS inside the group increases.
Figure 3: WM store and recall experiment. (a) High-level sketch of the working memory. (b) Schematic of the WM implementation: 5 volatile 1T1R memristive devices are arranged in a parallel configuration. The gate voltage is chosen to set I\({}_{\mathrm{CC}}\) = \(17\,\mathrm{\SIUnitSymbolMicro A}\), which corresponds to a retention time of 28 ms. (c) Color-pattern encoding. (d) Store and recall experiment. During the store phase, a single pattern is fed to the network. Top colored plot: input stimuli. For ease of visualization, each pattern is colored as the color it encodes. Black dots in the bottom part of the upper plot indicate the stored pattern. Bottom plot: measured current fed to the post-neuron. The current threshold for recognition is indicated as a dashed black horizontal line. The traces are cropped on the x-axes to better highlight the salient events. (e) Correlation plot between the expected and measured currents based on the difference between the presented and the stored pattern. Results obtained from 10 different store and recall experiments with P\({}_{\mathrm{ON}}\) = 5% and stimulation frequency f\({}_{\mathrm{stim}}\) = 50 Hz. (f) Accuracy of the system in distinguishing the stored pattern under different stimulation and switching conditions. (g) Average current error, defined as the difference between the measured current and the expected current, during 100 patterns applied for the different conditions.
#### 2.2.2 Biologically inspired model of synaptic WM
The experimental results on WM based on volatile memristive devices can be used to develop a WM model for simulation of a larger-scale recurrent neural network. In [5] a model of WM in the brain is introduced that is based on STP combined with multi-stable network dynamics. We adapted this network model to implement the core dynamics of our volatile memristive synapses (Fig. 2 and 3) as a model of STP. Fig. 4A illustrates our model, which serves as a WM model able to store a set of discrete patterns. For this, the network receives input from five input populations that represent the input patterns to be stored. A set of input neurons (illustrated by colored circles) is connected to the recurrent WM network (cloud) so that the memory items can be stored and reactivated. All neurons are spiking neurons, such that outputs are given by unitary events that are emitted when an internal membrane potential variable crosses the firing threshold (see Section 2.2). The network has multi-stable dynamics in that neurons of a specific population excite each other and transiently strengthen synapses within the population through STP when activated. Inhibitory neurons are added to facilitate the network multi-stability (see Section _Materials and Methods_ for details). The experimental paradigm, the network architecture, and the synapse parameters are adjusted to the characteristics of our memristive devices. Retention time parameters are fit to the measured device characteristics.
Fig. 4B shows typical behavior of the model over a simulation of several seconds. Spiking activity of the network and input neurons is shown. The pattern \(C\) is stored by strongly activating the corresponding input neurons. This triggers co-activation of the pool of WM neurons that encode pattern \(C\). Through this activation, volatile memristive synapses get strengthened and cause a prominent response in WM neurons when a recall stimulus that activates all input neurons at intermediate rates is given after a timeout of 1 s. During this recall phase, neurons that encode pattern \(C\) show significantly increased activity. Two repeated recalls are shown and lead to reliable memory performance. In the timeout phase, WM neurons are almost perfectly silent, which is in line with the behavior of biological neurons. After the second recall phase the memory item \(C\) is forgotten and \(A\) can be stored without interference by strongly activating pattern \(A\) input neurons. In a third recall period, \(A\) but not \(C\) neurons get strongly activated.
Figure 4: Large-scale simulation of WM. (a) Illustration of the network model. 5 different memory items (_A,B,C,D_ and _E_) can be stored in a recurrent network of spiking neurons. Corresponding strongly connected populations within the network transiently store memory items after activation. (b) Network activity of the WM model. Black dots show individual spikes of input (top) and network (bottom) neurons. Multiple phases of store and recall are shown. Insets show average firing rates (spikes per second in Hz) over recall phases. Data obtained using a current compliance of \(330\,\mathrm{\SIUnitSymbolMicro A}\), corresponding to average retention times of \(1.5\,\mathrm{s}\), and a voltage amplitude of \(0.5\,\mathrm{V}\), corresponding to a switching probability of \(5\%\), were used in this simulation. Network behavior using (c) different retention time distribution (change of compliance current) and (d) different switching probability (change of applied voltage amplitude).
The WM model requires time constants in the order of behaviorally relevant time scales (\(>\)500 ms to a few seconds) to function well. Fig. 4C shows the memory recall performance of the network when different retention times are used. Recall performance is measured here as the signal-to-noise ratio (SNR) between the WM neurons. The SNR is computed here as the mean firing rate of the population that is specific to the memory item over the mean firing rate of non-specific populations. Retention times below 1 s lead to degraded performance because the time between successive activations of the neurons becomes longer than the retention time. This result is in accordance with the experiments shown in Fig. 3. The switching probabilities also had to be finely tuned. Fig. 4D shows the impact of switching probabilities on the recall performance. Switching probabilities of about 0.05 are found to work best for this memory store/recall protocol because this prevents an excessive activation of the volatile devices, which would prevent the correct storage of information, as also confirmed by the experimental results of Fig. 3. Thanks to the flexibility of our volatile memristive devices, these probabilities can be obtained with a suitable selection of the voltage amplitude and pulse time width as shown in Fig. 2D.
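For concreteness, the SNR metric reduces to a short computation over the per-population recall rates; the rates used below are placeholder values, not data read off Fig. 4b.

```python
# Recall-quality metric used above: mean rate of the item-specific
# population divided by the mean rate of the non-specific populations.
# The rates below are placeholder values, not data from Fig. 4b.
import numpy as np

def recall_snr(rates, target):
    """rates: dict mapping population label -> mean firing rate (Hz)."""
    noise = np.mean([r for p, r in rates.items() if p != target])
    return rates[target] / noise

rates = {"A": 2.1, "B": 1.8, "C": 38.5, "D": 2.4, "E": 1.9}
print(f"SNR for item C: {recall_snr(rates, 'C'):.1f}")   # -> 18.8
```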
#### 2.2.3 Associative symbolic WM
Another important feature of WM is the ability to transiently form associations between properties, such as color or shape, to remember representations of real-world objects. Fig. 5A illustrates this associative symbolic WM model. We investigated whether the modeled memristive devices can be used to form transient associations between symbolic items of different categories. We used a similar store-recall paradigm as in Fig. 4, but here the memory item to store is given by an association between features that can be dynamically bound together. These features are thought to represent the state of physical objects that an associative memory network is able to perceive through a set of sensors. Concretely, we used features from 3 different stimulus categories: shape, texture, and color (see Fig. 5). Objects are represented by jointly activating feature neurons in the memory network, e.g. a _'smooth, red cylinder'_ is represented by jointly activating the corresponding feature neurons in the network. STP model synapses as in Fig. 4 are used as short-term memory inside the bidirectional connections between the neurons.
The network is able to form an autoassociative memory. After an association has been formed by presenting an object to the network, the features of that object can be recalled, after a brief time delay of length \(T_{\mathrm{delay}}\), by querying the network with only one part of the features. Fig. 5B shows one example store and recall sequence. After the object _'smooth, red cylinder'_ is stored, neurons representing its features can be re-activated by triggering only the _'cylinder'_ or _'smooth'_ neurons. After a new object (_'smooth, blue cone'_) is stored, its representation can be retrieved by only activating the _'cone'_ neuron. The object representations are transiently stored and then slowly fade away. This is analyzed in Fig. 5C where we plot the decoding error as a function of \(T_{\text{delay}}\). The memory can be reliably retrieved during a time window of \(600\,\mathrm{ms}\), corresponding in our devices to \(\mathrm{I_{CC}}\!=\!70\,\mathrm{\SIUnitSymbolMicro A}\).
Figure 5: Associative symbolic WM with volatile memristive devices. (a) Illustration of the network for associative symbolic WM. (b) Sequence of store and recall. Store/recall input (top row) and WM neuron output spikes (bottom row) are shown. Inserted pictograms represent the decoded objects and recall queries. Store/recall inputs had a one-to-one fixed connectivity to WM neurons. After storing an association it can be recalled by cuing the network with an arbitrary memory element. (c) Decoding error plotted as a function of time delay between store and first recall.
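The binding principle can be reduced to a compact, non-spiking sketch: an outer-product rule transiently potentiates bidirectional weights between co-active feature neurons, and the weights decay with the synaptic retention time. The feature set, decay constant, and read-out threshold below are illustrative, and the sketch is a conceptual abstraction of the spiking model rather than the model itself.

```python
# Conceptual sketch of the associative store/recall principle: features
# are bound by transiently potentiated bidirectional weights (an outer
# product) that decay with the synaptic retention time.
import numpy as np

FEATS = ["cylinder", "cone", "cube", "smooth", "rough", "red", "blue"]
IDX = {f: i for i, f in enumerate(FEATS)}
TAU = 0.6                                  # effective retention time (s)
W = np.zeros((len(FEATS), len(FEATS)))     # short-term weight matrix

def store(features):
    v = np.zeros(len(FEATS))
    v[[IDX[f] for f in features]] = 1.0
    M = np.outer(v, v)
    np.fill_diagonal(M, 0.0)               # no self-connections
    np.maximum(W, M, out=W)                # transient potentiation

def decay(dt):
    np.multiply(W, np.exp(-dt / TAU), out=W)   # weights fade in place

def recall(cue, thr=0.3):
    act = W[:, IDX[cue]]                   # one recurrent read-out step
    return [f for f in FEATS if act[IDX[f]] > thr]

store(["cylinder", "smooth", "red"])
decay(0.2)                                 # T_delay = 200 ms
print(recall("cylinder"))                  # -> ['smooth', 'red']
```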
## 3 Discussion
The emergence of a new class of volatile memristive devices featuring volatility on biologically relevant time scales opens the possibility of implementing in hardware neuromorphic systems that can inherently solve complex sequence processing tasks. The main advantage of the proposed technology lies in the storage of the information in the physical configuration of the nanoscale device, i.e., the CF inside the oxide layer of the memristive device. This way, a direct correlation of the retention time with the electrically tunable properties of the device is established.
The volatile properties of memristive devices have so far been exploited mainly in reservoir computing [21, 22, 23]. Other applications include selector devices and hardware security [32], while the exploration of the potential of volatile devices in systems requiring short-term memory is still in its infancy [33, 34]. Yet, the use of volatile devices in tasks such as WM is extremely advantageous from a hardware perspective. Indeed, the networks devoted to carrying out such tasks need the ability to forget the stored information in time, otherwise the network would quickly reach its maximum memory capacity and become unable to store new experiences unless old ones are forgotten. An implementation using non-volatile devices is thus feasible only together with the design of extra circuits that could reset the devices, thus consuming extra area and power. Another solution to implement volatility is with capacitors. However, in addition to the much larger area that capacitors would require to implement the same time constants as the proposed volatile devices, the stochastic properties that contribute to the correct functioning of the network (see Fig. 4D) would have to be implemented by dedicated circuits.
In our work, we exploit the properties of Ag-based resistive switching devices to carry out WM tasks. We first characterize the device and assess its electrical properties, then we select the parameters to conduct a store and recall hardware experiment, in which we demonstrate short-term storage of features and explore the performance of the system under a variety of stimulation conditions. We then use a biologically inspired synapse model that qualitatively reproduces the short-term dynamics of STP with the properties of our memristive device [35, 36]. We demonstrate the functionality of this STP model by qualitatively reproducing the WM model of [5] in Fig. 4. This experiment demonstrates that STP based on volatile memristive device dynamics is able to install WM capability in a multi-stable network of spiking neurons. Memory items can be reliably retrieved seconds after storage and overwritten with new items on the same time scale. Furthermore, we demonstrate a symbolic associative working memory model in Fig. 5, where we use the device characterization data to preserve the features of the device in terms of switching probability, retention time, and variability.
As shown in Fig. 3, the device parameters and the stimulation conditions are linked and therefore need to be matched in order to achieve successful network operation. During the experiments, the store phase exploits the properties of burst stimulation shown in Fig. 2E, which in our case is beneficial because it shortens the store phase. Due to their stochasticity, the devices do not switch on together, as visible in Fig. 3D, hence the duration of the store phase should take this aspect into consideration. In the case of two consecutive store phases, only the second element is actually safely stored in the network. Indeed, during the second store phase, the synapses common to both elements remain active, the ones that are no longer stimulated switch off, and the ones specific only to the second element switch on. Changing P\({}_{\mathrm{ON}}\) has repercussions on the speed of the store process (Fig. S6), but also on the recall phase (Fig. 3F and Fig. 3G). Indeed, while an increase of P\({}_{\mathrm{ON}}\) is beneficial during store, it becomes detrimental during recall, since it results in the erroneous activation of one or more devices. A trade-off therefore exists between store speed and recall accuracy when setting the device's P\({}_{\mathrm{ON}}\).
Another important factor that has to be considered when setting the device parameters is the expected stimulation rate. Each incoming pulse has a different effect on the memristive device, depending on whether the device is ON (in LRS) or OFF (in HRS). In the former case, the CF, i.e., the information stored in the memristive device, is refreshed. In the latter, the stimulation might activate OFF devices. As a consequence, there are two mechanisms that degrade the accuracy of the system: low stimulation rates may lead to the switching off of previously ON devices, while high rates may lead to erroneous classification due to the switching on of previously OFF devices (Fig. S7, Fig. S8). A possible mitigation measure is tuning the mean retention time of the devices: low spike rates need longer retention times, whereas high spike rates might benefit from shorter retention times.
The proposed STP synapse model presents an important difference compared to the one in [35, 36], namely the high level of noise, which imitates a feature of biological synapses. Chemical synaptic transmission in biology is inherently unreliable. About half of synaptic transmissions are not detectable at the post-synaptic side at all, which makes synapses an abundant source of noise in the brain [37, 38, 39]. Given how costly synapses are in terms of energy consumption [40], this finding is surprising. Several authors have therefore suggested that the noise in synapses serves as a computational resource that allows the brain to solve complex tasks more efficiently [37, 41]. We show here that the noise in synaptic short-term dynamics, which mimics the behavior of synaptic facilitation in biological synapses, can be exploited to realize short-term memory on behaviorally relevant time scales of several seconds.
## 4 Conclusion
In summary, real-world applications require very different time constants. Technologies that enable the design of systems whose internal temporal dynamics can be tuned to match real-world ones present appealing opportunities, especially in the context of power- and memory-limited edge computing. Our results show that the proposed Ag-based volatile memristive device features electrical tunability of its key parameters, i.e., retention time and switching probability, which allows the lifetime of the WM to be adapted to the task-specific timescale needed, from \(1\,\mathrm{ms}\) to \(10\,\mathrm{s}\).
## 5 Materials and Methods
### Device fabrication
The memristive devices used in this study are fabricated on top of foundry-based MOS transistors in a one-transistor / one-resistor (1T1R) configuration, allowing control of the compliance current \(I_{\mathrm{C}}\) [42]. The bottom electrode (BE) consists of a \(70\,\mathrm{nm}\times 70\,\mathrm{nm}\) graphitic carbon pillar, which has already been demonstrated to be a good electrode material for both volatile and non-volatile devices thanks to its stability and inert behavior [25]. The oxide layer and the TE are fabricated by e-beam evaporation. The oxide is a \(10\,\mathrm{nm}\) HfO\({}_{2}\) active layer and the TE is a \(100\,\mathrm{nm}\) thick Ag layer. Both HfO\({}_{2}\) and Ag are deposited at room temperature and without breaking the vacuum (pressure \(3\times 10^{-6}\,\mathrm{mbar}\)).
### Electrical setup for device characterization
The electrical setup is the same as described in [25], where the device is connected either to a semiconductor device parameter analyzer (DC characterization) or to a waveform generator and an oscilloscope (pulsed characterization). DC characterization is carried out using an HP 4156C semiconductor device parameter analyzer. A sweep from \(0\,\mathrm{V}\) to \(1.5\,\mathrm{V}\) is applied to the TE of the device while the BE is grounded. The characterization includes DC sweeps with different compliance currents (\(\mathrm{I_{C}}\)), as in Fig. S1a. The pulsed characterization as well as the working memory experiment are carried out using a TTI TGA12104 arbitrary waveform generator. The voltage waveforms are applied to the TE. To measure the current, a LeCroy Waverunner 640Zi oscilloscope is connected at the BE side, and the voltage drop across a \(50\,\mathrm{\SIUnitSymbolOhm}\) series resistance is probed. MATLAB® software is used for the data analysis and the control of the instruments.
The temporal dynamics are studied by first applying a semi-triangular pulse with \(10\,\mathrm{ms}\) pulse duration and \(5\,\mathrm{V}\) amplitude to induce the filament formation, and then monitoring the state of the filament with a constant \(-150\,\mathrm{mV}\) bias. To tune the temporal behavior, the \(\mathrm{I_{C}}\) is changed from \(10\,\mathrm{\SIUnitSymbolMicro A}\) to \(70\,\mathrm{\SIUnitSymbolMicro A}\). For each \(\mathrm{I_{C}}\), 100 experiments are carried out. To avoid possible interference due to the previous cycles, the value of \(\mathrm{I_{C}}\) is changed in random order.
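The lognormal parameters \((\mu,\sigma)\) can be extracted from the 100 retention times measured per compliance current, cf. the fits of Fig. S10. The Python sketch below is our addition, not the original MATLAB analysis; the placeholder data are tuned to the \(\sim\)28 ms mean quoted later for \(\mathrm{I_{C}}=17\,\mathrm{\SIUnitSymbolMicro A}\).

```python
# Fit a lognormal distribution to a batch of measured retention times.
import numpy as np
from scipy.stats import lognorm

t_ret = np.random.lognormal(mean=3.0, sigma=0.8, size=100)   # placeholder data, ms
s, loc, scale = lognorm.fit(t_ret, floc=0)    # fix loc = 0: pure lognormal
mu, sigma = np.log(scale), s
print(f"mu = {mu:.2f}, sigma = {sigma:.2f}, "
      f"mean retention = {np.exp(mu + sigma**2 / 2):.0f} ms")
```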
The switching probability (Fig. 2D) is studied by applying \(100\) pulses for each combination of voltage amplitude and pulse duration. The conditions are applied in random order. The impact of the number of pulses (Fig. 2D and E) is analyzed with the same methodology, selecting the order of the combinations of voltage amplitude and number of pulses randomly. Each combination is applied \(100\) times.
For the WM store and recall experiment, each probability-frequency-pattern condition is repeated 10 times. The voltage bias is applied only at the end of the experiments to check the retention capabilities.
### Fitting of device features
Thanks to the stochasticity of the switching mechanism, the probability \(\mathrm{P}_{ON}\) for a single pulse of given amplitude (Fig. 2D) follows the cumulative distribution function of a normal distribution, and is thus fit with:
\[P_{ON}(V)=\frac{1}{2}\left(1+erf\left(\frac{V-\mu}{\sigma\sqrt{2}}\right)\right) \tag{1}\]
where the mean \(\mu\) and the standard deviation \(\sigma\) are fitting parameters that depend on the device and on the pulse duration. Table 1 collects the data of the device presented in Fig. 2D. Each point in the figure corresponds to 100 measurements.
The switching probability, for a given amplitude, as a function of the number of pulses is fit with the mathematical model:
\[P_{ON}(N)=1-(1-P_{ON}(1))^{N}\quad\text{where}\quad P_{ON}(1)=P_{ON}(V) \tag{2}\]
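The two fits above can be reproduced with a standard nonlinear least-squares routine. The following Python sketch is our addition (not the original MATLAB analysis code); the voltage grid and switching fractions are hypothetical placeholder data.

```python
# Fit Eq. (1) to single-pulse switching data and predict Eq. (2).
import numpy as np
from scipy.special import erf
from scipy.optimize import curve_fit

def p_on_voltage(v, mu, sigma):
    """Eq. (1): normal CDF for the single-pulse switching probability."""
    return 0.5 * (1.0 + erf((v - mu) / (sigma * np.sqrt(2.0))))

def p_on_pulses(n, p1):
    """Eq. (2): probability of switching at least once in n pulses."""
    return 1.0 - (1.0 - p1) ** n

# Hypothetical data: fraction of ON events out of 100 trials per voltage.
volts = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0])
p_on = np.array([0.00, 0.02, 0.15, 0.46, 0.81, 0.97])

(mu, sigma), _ = curve_fit(p_on_voltage, volts, p_on, p0=[2.0, 0.3])
print(f"mu = {mu:.2f} V, sigma = {sigma:.2f} V")   # cf. Table 1

# Predicted multi-pulse switching probability at a fixed 1.5 V amplitude:
print(p_on_pulses(np.arange(1, 11), p_on_voltage(1.5, mu, sigma)))
```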
In the WM store and recall experiments, unless otherwise noted, the following conditions apply: no read voltage between spikes during the store and the recall phases. A read voltage of -150 mV is applied at the end of the experiment. The current compliance is \(\mathrm{I}_{\mathrm{C}}=17\,\mathrm{\SIUnitSymbolMicro A}\), which gives an average retention time of 28 ms.
### STP synapse model
To perform the computer simulations we developed a simplified phenomenological model that qualitatively reproduces the behavior of the memristive devices. The model captures the switching probabilities and the variable retention time of the devices. Each synapse \(i\) was modeled with a binary internal state variable \(x_{i}\) that denotes either the low (\(x_{i}=1\)) or the high resistance state (\(x_{i}=0\)). The device resistance was \(r_{1}\) and \(r_{0}\) in the low and high resistance state, respectively. To model switching probabilities we assigned a parameter \(\rho_{i}\in[0,1]\) to every synapse. Upon arrival of a pre-synaptic input spike, \(x_{i}\) was set to 1 with probability \(\rho_{i}\).
To model the trial-by-trial variability of the retention times we adopted a Lognormal distribution for the retention times \(t_{ret}\), namely:
\[t_{ret}\;\sim\;\text{Lognormal}(t_{ret}\;|\;\mu_{i},\sigma_{i})=\;\frac{1}{\sqrt{2\pi}\,\sigma_{i}\,t_{ret}}\,\exp\left(-\frac{(\log(t_{ret})-\mu_{i})^{2}}{2\sigma_{i}^{2}}\right)\;. \tag{3}\]
The parameters \(\mu_{i}\) and \(\sigma_{i}\) in Eq. 3 were adjusted to fit the device properties. Supplementary figure S10 shows example model fits to experimental data for different compliance currents. In the simulations we used \(\mu=7.24\) and \(\sigma_{i}=0.82\), if not stated otherwise, which corresponds to a mean retention time of \(\sim 1.5\,\mathrm{s}\). Whenever a synapse was set to the low resistance state (\(x_{i}=1\)) at some time \(t\), a retention time \(t_{ret}\) was drawn from Eq. 3. Once the simulation time exceeded \(t+t_{ret}\), the synapse spontaneously returned to the high-resistance state.
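A minimal Python sketch (our addition) of this synapse model is given below. The parameter values follow those quoted above; the 1 ms time resolution and the Poisson input spike train are illustrative assumptions.

```python
# Stochastic binary synapse with lognormally distributed retention times.
import numpy as np

rng = np.random.default_rng(0)

class STPSynapse:
    def __init__(self, rho=0.2, mu=7.24, sigma=0.82, r1=1.0, r0=20.0):
        self.rho, self.mu, self.sigma = rho, mu, sigma
        self.r1, self.r0 = r1, r0      # low / high resistance (arbitrary units)
        self.x = 0                     # binary state: 1 = low resistance (ON)
        self.t_off = -np.inf           # time at which the ON state decays

    def pre_spike(self, t):
        # Switch ON with probability rho and draw a retention time, Eq. (3).
        if rng.random() < self.rho:
            self.x = 1
            self.t_off = t + rng.lognormal(self.mu, self.sigma)  # in ms

    def resistance(self, t):
        if self.x == 1 and t > self.t_off:
            self.x = 0                 # spontaneous relaxation back to OFF
        return self.r1 if self.x == 1 else self.r0

# Example: drive the synapse with a ~10 Hz Poisson spike train for 5 s.
syn = STPSynapse()
for t in np.arange(0.0, 5000.0, 1.0):  # time in ms, 1 ms resolution
    if rng.random() < 0.01:            # ~10 Hz input rate
        syn.pre_spike(t)
    r = syn.resistance(t)              # resistance seen by the network
```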
\begin{table}
\begin{tabular}{|c|c|c|} \hline
**Pulse width [ms]** & **Mean \(\mu\) [V]** & **Standard deviation \(\sigma\) [V]** \\ \hline
0.05 & 2.31 & 0.38 \\ \hline
0.10 & 2.11 & 0.33 \\ \hline
0.15 & 1.86 & 0.30 \\ \hline
0.50 & 1.73 & 0.22 \\ \hline
1.00 & 1.21 & 0.16 \\ \hline
2.00 & 0.61 & 0.15 \\ \hline
5.00 & 0.59 & 0.11 \\ \hline \end{tabular}
\end{table}
Table 1: Values of \(\mu\) and \(\sigma\) calculated for different pulse time widths.
### Working memory network model
To reproduce the working memory model introduced in [5], we used a recurrent network of 8000 excitatory and 2000 inhibitory neurons. Connection probabilities between these neurons were as in [5]. Supplementary Figure S9 illustrates the detailed network structure, connection probabilities and baseline synaptic conductances. Each memory item was represented by a population of 800 excitatory neurons, which were randomly chosen from the recurrent network for each of the 5 memory items. Synaptic weights between these neurons were 5 times stronger (0.5 mV) than between other neurons (0.1 mV). All inhibitory synapses were static (no STP) and had a strength of -0.2 mV. To store and recall the working memory, 1000 input neurons were set up for each memory item. Input neurons were only connected to one of the memory item populations in the network. Input neurons fired at low baseline firing rates of 0.1 Hz. To store a memory item, firing rates of input neurons corresponding to one memory item were elevated 10-fold. During recall all input neurons were set to unspecific elevated activation.
All neurons used the leaky integrate and fire (LIF) model with biologically plausible parameters [43]. The LIF neuron is a spiking point neuron model that transiently integrates synaptic inputs using a leaky membrane potential. The membrane potential \(u(t)\) follows the dynamics
\[\frac{d\,u}{d\,t}\ =\ -\frac{1}{\tau_{m}}\left(u(t)-u_{0}\right)\ +\ i(t)\;, \tag{4}\]
where \(u_{0}\) is the membrane resting potential and \(\tau_{m}\) is the membrane time constant. \(i(t)\) is the input into the neuron, denoting the summed effect of afferent synapses. If the membrane potential crosses a firing threshold \(\vartheta\) at time \(t^{f}\), a spike is emitted and the membrane potential is reset immediately after
\[u(t^{f}+\Delta t)=u_{r}\;. \tag{5}\]
After a spike the neuron is inactive for a brief refractory time. The firing threshold was set to 20 mV and the refractory time to 2 ms. Membrane time constants \(\tau_{m}\) were 15 ms and 10 ms, and resting potentials \(u_{0}\) were 16 mV and 13 mV, for excitatory and inhibitory neurons, respectively. Independent unit-variance Gaussian noise with mean 0.5775 mV was injected into each excitatory neuron, and with mean 0.5275 mV into each inhibitory neuron. As in [5], the mean of the Gaussian noise was precisely tuned to enable multistability in the network. Example network dynamics are shown in Fig. 4D, for a store-recall experiment over 8 seconds.
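For reference, a forward-Euler sketch of Eqs. (4)-(5) for a single excitatory neuron is given below (our addition). The reset potential is not specified in the text, so the value used here is an assumption; the noisy input follows the parameters quoted above.

```python
# Single leaky integrate-and-fire neuron, Eqs. (4)-(5), Euler-integrated.
import numpy as np

rng = np.random.default_rng(1)
dt, tau_m = 1.0, 15.0        # ms time step; excitatory membrane time constant
u0 = 16.0                    # mV resting potential (excitatory)
theta, t_ref = 20.0, 2.0     # mV firing threshold; ms refractory period
u_reset = u0                 # reset value u_r: an assumption, not given above

u, ref_until, spikes = u0, -1.0, []
for step in range(1000):                  # 1 s of simulated time
    t = step * dt
    if t < ref_until:
        continue                          # neuron inactive after a spike
    i_t = rng.normal(0.5775, 1.0)         # unit-variance Gaussian noise, in mV
    u += dt * (-(u - u0) / tau_m) + i_t   # Eq. (4), noise injected per step
    if u >= theta:                        # threshold crossing at time t^f
        spikes.append(t)
        u = u_reset                       # Eq. (5)
        ref_until = t + t_ref
print(len(spikes), "spikes in 1 s")
```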
### Associative symbolic working memory model
For the associative working memory model in Fig. 5 we developed a network architecture capable of storing bidirectional short-term associations between items of different categories. The associative memory model consisted of 8 memory neurons that were connected through bidirectional associative connections as illustrated in Fig. 5A. The memory neurons encoded the three memory categories: shape, texture and color. Each memory neuron exclusively received input from one input neuron to trigger store and recall events. Memory neurons were implemented using the LIF neuron model, with a firing threshold of 1.4 V and a membrane time constant of 15 ms. Each group of neurons corresponding to one memory category was augmented with a single inhibitory neuron that provided lateral feedback to enforce mutually exclusive activation of one memory item at a time. Inhibitory neurons were LIF neurons with a threshold of 1.4 V and a membrane time constant of 10 ms. All excitatory connections had a strength of 1.5 V and inhibitory feedback was -1.5 V.
Model memristive devices were used inside the bidirectional connections to store the association between a specific pair of memory neurons (which we call here the specific neurons). Exactly one model device was used per association. Memristive devices were augmented with two auxiliary threshold gates to route the input and outputs during store and recall cycles. Gates were put in series with the devices, one before (input gate) and one after (output gate). Input gates received excitatory input from the two specific memory neurons the association corresponded to. In addition input gates received inhibitory input from all other memory neurons. Output gates recursively connected back to specific memory neurons through excitatory connections. Storage of memory items was not dependent on the activity of the neurons and threshold gates (see Fig. 5), but retained solely in the hidden state of the memristive devices. Model parameters of memristive devices were fit to experimental data (\(\mu\)=7.24, \(\sigma\)=0.82, corresponding to ca. 1.4 s mean retention time). Switching probabilities \(\rho\) were 0.2. The synaptic conductance for the low resistance state was chosen to elicit a voltage pulse of 1.0 \(V\) in the post-synaptic neuron. The high resistance \(r_{0}\) was set to be 20 times higher than the low resistance \(r_{1}\).
### Details to software simulations
Simulations of the working memory model with volatile memristive devices were done in Python (3.8) using a custom implementation of the memristive synapse model that was developed for the NEST 2.14 simulation environment [44]. The simulation time step was \(1\,\mathrm{ms}\). Neuron dynamics were simulated using the current-based leaky integrate-and-fire model available in NEST. A custom model of the STP synapse outlined in Section 5.4 was implemented based on existing synapse models. Data analysis used the numpy and matplotlib Python packages in versions 1.23.2 and 3.5.3, respectively. The code will be made available upon publication.
|
2301.09101 | Bounds On the order of the Schur multiplier of $p$-groups | In 1956, Green provided a bound on the order of the Schur multiplier of
$p$-groups. This bound, given as a function of the order of the group, is the
best possible. Since then, the bound has been refined numerous times by adding
other inputs to the function, such as, the minimal number of generators of the
group and the order of the derived subgroup. We strengthen these bounds by
adding another input, the group's nilpotency class. The specific cases of
nilpotency class 2 and maximal class are discussed in greater detail. | Pradeep Kumar Rai | 2023-01-22T11:05:24Z | http://arxiv.org/abs/2301.09101v1 | # Bounds on the order of the Schur multiplier of \(p\)-groups
###### Abstract.
In 1956, Green provided a bound on the order of the Schur multiplier of \(p\)-groups. This bound, given as a function of the order of the group, is the best possible. Since then, the bound has been refined numerous times by adding other inputs to the function, such as, the minimal number of generators of the group and the order of the derived subgroup. We strengthen these bounds by adding another input, the group's nilpotency class. The specific cases of nilpotency class 2 and maximal class are discussed in greater detail.
Key words and phrases:Schur multiplier, finite \(p\)-group, maximal class 2010 Mathematics Subject Classification: 20J99, 20D15
## 1. Introduction
The Schur multiplier \(M(G)\) of a finite group \(G\) is defined as the second cohomology group of \(G\) with coefficients in \(\mathbb{C}^{*}\). It plays an important role in the theory of extensions of groups. Finding the bounds on the order, exponents, and ranks of the Schur multiplier of prime power groups has been a major focus of previous investigations. This article investigates the bounds on the order of the Schur multiplier of prime power groups.
Let \(G\) be a finite \(p\)-group of order \(p^{n}\). In 1956 Green [6] proved that \(|M(G)|\leq p^{\frac{1}{2}n(n-1)}\). Since then, this bound has been strengthened by many mathematicians [3, 4, 5, 8, 9, 10, 14, 15, 16, 17, 19, 20, 21, 22].
To note the most recent ones, let \(G\) be a non-abelian \(p\)-group of order \(p^{n}\) with derived subgroup of order \(p^{k}\). Niroomand [15] proved that
\[|M(G)|\leq p^{\frac{1}{2}(n-k-1)(n+k-2)+1}. \tag{1.1}\]
The author noted in [17] that a bound by Ellis and Wiegold [4] is better than this bound and derived from their bound that
\[|M(G)|\leq p^{\frac{1}{2}(d-1)(n+k-2)+1}(=p^{\frac{1}{2}(d-1)(n+k)-(d-2)}),\]
where \(d\) is the minimal number of generators of \(G\).
In this article, we further refine the bounds by adding another input, i.e., the nilpotency class of the group.
Before proceeding to the results of this article, we set some notations that are mostly standard. The center and the commutator subgroup of a group \(G\) are denoted by \(Z(G)\) and \(\gamma_{2}(G)\), respectively. By \(d(G)\) we denote the minimal number of generators of \(G\). We write \(\gamma_{i}(G)\) for the \(i\)-th term in the lower central series of \(G\). The subgroup \(\langle x^{p}\mid x\in G\rangle\) is denoted by \(G^{p}\). Finally, the abelianization of the group \(G\), i.e. \(G/\gamma_{2}(G)\), is denoted by \(G^{ab}\).
We now state our first theorem.
**Theorem 1.1**.: _Let \(G\) be a non-abelian \(p\)-group of order \(p^{n}\) and nilpotency class \(c\) with \(|\gamma_{2}(G)|=p^{k}\) and \(d=d(G)\). Then_
\[|M(G)|\leq p^{\frac{1}{2}(d-1)(n+k)-\sum\limits_{i=2}^{\min(d,c)}(d-i)}.\]
_Thus, if \(\mu=\min(d,c)\) then,_
\[|M(G)|\leq p^{\frac{1}{2}(d-1)(n+k)-\frac{1}{2}(\mu-1)[2d-(\mu+2)]}.\]
_Considering cases, the inequality can be restated as follows:_
\[|M(G)|\leq\begin{cases}p^{\frac{1}{2}(d-1)(n+k)-\frac{1}{2}(d-1)(d-2)}&\text{ if }\ \ \ \ d\leq c\\ p^{\frac{1}{2}(d-1)(n+k)-\frac{1}{2}(c-1)(2d-(c+2))}&\text{ if }\ \ \ \ d\geq c.\end{cases}\]
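For instance (an illustrative evaluation of ours, with hypothetical parameters \(n=7\), \(k=3\), \(d=4\) and \(c=3\), so that \(\mu=3\)):

\[|M(G)|\leq p^{\frac{1}{2}(d-1)(n+k)-\frac{1}{2}(\mu-1)[2d-(\mu+2)]}=p^{15-3}=p^{12},\]

one factor of \(p\) sharper than the bound \(p^{\frac{1}{2}(d-1)(n+k)-(d-2)}=p^{13}\) quoted above from [17]. Note that the two bounds coincide whenever \(\min(d,c)=2\), since then the correction term reduces to \(d-2\).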
Next, we consider \(p\)-groups of nilpotency class \(2\). A finite \(p\)-group of nilpotency class \(2\) is said to be special if its center coincides with the derived and the Frattini subgroups. Berkovich and Janko asked the following questions:
**Question 1.2**.: _[_1_, Problem 1729]_ _Let G be a special p-group with \(d(G)=d\) and \(|Z(G)|=p^{\frac{1}{2}d(d-1)}\). Find the Schur multiplier of \(G\) and describe the representation groups of \(G\)._
**Question 1.3**.: _[_1_, Problem 2027]_ _Find the Schur multiplier of special \(p\)-groups with center of order \(p^{2}\)._
Questions 1.2 and 1.3 of Berkovich and Janko have been studied in [18] and [7] respectively. In view of these questions, and the fact that for special \(p\)-groups \(G\), \(|Z(G)|=p^{k}\) if and only if \(d(\gamma_{2}(G))=k\), it seems reasonable to consider the term \(d(\gamma_{2}(G))\) while investigating the bounds for the order of the Schur multiplier of \(p\)-groups of nilpotency class \(2\).
**Theorem 1.4**.: _Let \(G\) be a non-abelian finite \(p\)-group of order \(p^{n}\) with \(|\gamma_{2}(G)|=p^{k}\), \(d(G)=d\) and \(d\bigg{(}\frac{\gamma_{2}(G)}{\gamma_{3}(G)}\bigg{)}=\gamma\). Then_
\[|M(G)|\leq p^{\frac{1}{2}(d-1)(n+k)-\sum\limits_{i=2}^{\min(d,\gamma+1)}(d-i)}.\]
_Thus, if \(\nu=\min(d,\gamma+1)\) then,_
\[|M(G)|\leq p^{\frac{1}{2}(d-1)(n+k)-\frac{1}{2}(\nu-1)[2d-(\nu+2)]}.\]
_Considering cases, the inequality can be restated as follows:_
\[|M(G)|\leq\begin{cases}p^{\frac{1}{2}(d-1)(n+k)-\frac{1}{2}(d-1)(d-2)}&\text{ if }\ \ \ \ d\leq\gamma+1\\ p^{\frac{1}{2}(d-1)(n+k)-\frac{1}{2}\gamma(2d-(\gamma+3))}&\text{if }\ \ \ \ \ d\geq\gamma+1.\end{cases}\]
In view of Questions 1.2 and 1.3, the following corollary is an application of Theorem 1.4 to special \(p\)-groups.
**Corollary 1.5**.: _Let \(G\) be a finite \(p\)-group of nilpotency class 2 such that \(G^{p}\leq\gamma_{2}(G)\), \(|\gamma_{2}(G)|=p^{k}\) and \(d(G)=d\). Then_
\[p^{\frac{1}{2}d(d-1)-k}\leq|M(G)|\leq\begin{cases}p^{(d-1)(k+1)}&\text{if}\ \ \ \ \ d\leq k+1\\ p^{\frac{1}{2}d(d-1)+\frac{1}{2}k(k+1)}&\text{if}\ \ \ \ \ d\geq k+1.\end{cases}\]
_Moreover_
* _If_ \(G^{p}\cong\mathbb{Z}_{p}\) _then,_ \[|M(G)|\leq\begin{cases}p^{(d-1)k-1}&\text{if}\ \ \ \ \ d\leq k\\ p^{\frac{1}{2}d(d-1)+\frac{1}{2}k(k-1)-1}&\text{if}\ \ \ \ \ d\geq k.\end{cases}\]
* _If_ \(p\) _is an odd prime and_ \(G^{p}=\gamma_{2}(G)\)_, then_ \[|M(G)|\leq p^{\frac{1}{2}d(d-1)+\frac{1}{2}k(k-3)}.\]
* _If_ \(G_{1}\) _and_ \(G_{2}\) _are special_ \(p\)_-groups with_ \(|Z(G_{1})|=p^{2}\) _and_ \(|Z(G_{2})|=p^{3}\) _then_ \[p^{\frac{1}{2}d(d-1)-2}\leq|M(G_{1})|\leq p^{\frac{1}{2}d(d-1)+3}\] _and_ \[p^{\frac{1}{2}d(d-1)-3}\leq|M(G_{2})|\leq p^{\frac{1}{2}d(d-1)+6}.\]
Let \(p\) be an odd prime and \(G_{1}\) be a special \(p\)-group with \(|Z(G_{1})|=p^{2}\). A more general result has already been given by Mazur [13] in this case. He proves that if the epicenter \(Z^{*}(G_{1})\) does not coincide with the center \(Z(G_{1})\) then \(G_{1}\) is of exponent \(p\) and belongs to one of the five classes of groups. These classes include a group of order \(p^{5}\), three groups of order \(p^{6}\), two groups of order \(p^{7}\), a group of order \(p^{2m+3}\) (for all \(m\geq 3\)) and two groups of order \(p^{2m+2}\) (for all \(m\geq 2\)). It is easy to see from [11, Theorem 2.5.10] that if \(Z^{*}(G_{1})\) coincides with \(Z(G_{1})\) then \(M(G_{1})\) is elementary abelian of order \(p^{\frac{1}{2}d(d-1)-2}\). Therefore, it only remains to compute the Schur multiplier of groups that belong to the above mentioned five families. This can be easily achieved by using the explicit presentation of the non-unicentral groups given by Mazur. This observation nullifies [7, Theorem 1.3(d)] which states that if \(G_{1}^{p}\cong\mathbb{Z}_{p}\) then \(|M(G_{1})|=p^{\frac{1}{2}d(d-1)}\) if and only if \(Z^{*}(G_{1})=G^{p}\). It is clear from the above discussion that such groups do not exist. Also, in contrast to [7, Theorem 1.1(c)], one can see that the Schur multiplier of \(G_{1}\) is always elementary abelian and never of exponent \(p^{2}\).
The following corollary improves on Corollary 1.5 for \(G_{2}\) when \(|G_{2}|\geq p^{13}\).
**Corollary 1.6**.: _Let \(p\) be an odd prime and \(G_{2}\) be a special \(p\)-group with \(|Z(G_{2})|=p^{3}\) and \(|G_{2}|\geq p^{13}.\) Then_
\[|M(G_{2})|\leq p^{\frac{1}{2}d(d-1)+2}.\]
_Moreover, if \(G_{2}^{p}=\gamma_{2}(G_{2})\), then_
\[|M(G_{2})|\leq p^{\frac{1}{2}d(d-1)-2}.\]
Next, we consider the groups of maximal class. A finite \(p\)-group of order \(p^{n}\) is said to be of maximal class if its nilpotency class is \(n-1\). Let \(G\) be a finite \(p\)-group of maximal class and order \(p^{n}\). Since \(G\) is generated by 2 elements, it
follows by a result of Gaschütz [5] that \(|M(G)|\leq p^{n-1}\). Moravec [14] proved that if \(n>p+1\), then \(|M(G)|\leq p^{\frac{n+1}{2}\left\lceil\frac{n-1}{p-1}\right\rceil}\). Improving Moravec's result, we prove the following theorem.
**Theorem 1.7**.: _Let \(p\) be an odd prime and \(G\) be a finite \(p\)-group of maximal class with \(|G|=p^{n}\), \(n\geq 4\). Then \(|M(G)|\leq p^{\frac{n}{2}}\)._
## 2. Proofs
Let \(G\) be a finite \(p\)-group and \(\overline{G}\) be the factor group \(G/Z(G)\). The commutator \(x^{-1}y^{-1}xy\) of the elements \(x,y\in G\) is denoted by \([x,y].\) For ease of reading, we shall use the same 'bar notation' \(\overline{g}\) for \(g\in G\) to denote the different elements in different factor groups, when there is no danger of ambiguity. Such notations should be interpreted according to the context. For example, whenever \(\overline{[x_{1},x_{2}]}\otimes\overline{x_{3}}\in\frac{\gamma_{2}(G)}{\gamma _{3}(G)}\otimes\overline{G}^{ab}\) for \(x_{1},x_{2},x_{3}\in G\), by \(\overline{[x_{1},x_{2}]}\) and \(\overline{x_{3}}\) we mean \([x_{1},x_{2}]\gamma_{3}(G)\) and \(x_{3}Z(G)\gamma_{2}(G)\), respectively.
We now proceed to prove Theorem 1.1. The proof is founded on the following result of Ellis and Wiegold [4, Proposition 1, comments following Theorem 2].
**Proposition 2.1**.: _Let \(G\) be a finite \(p\)-group of nilpotency class \(c\) and \(\overline{G}\) be the factor group \(G/Z(G)\). Then_
\[\Big{|}M(G)\Big{|}\Big{|}\gamma_{2}(G)\Big{|}\prod_{i=2}^{c}\Big{|}\operatorname {Im}\Psi_{i}\Big{|}\leq\Big{|}M(G^{ab})\Big{|}\prod_{i=2}^{c}\Big{|}\frac{ \gamma_{i}(G)}{\gamma_{i+1}(G)}\otimes\overline{G}^{ab}\Big{|},\]
_where \(\Psi_{i}\), for \(i=2,\ldots c,\) is a map from \(\underbrace{\overline{G}^{ab}\otimes\overline{G}^{ab}\cdots\otimes\overline{G }^{ab}}_{i+1\text{ times}}\) to \(\frac{\gamma_{i}(G)}{\gamma_{i+1}(G)}\otimes\overline{G}^{ab}\) defined as follows:_
\[\Psi_{2}(\overline{x_{1}}\otimes\overline{x_{2}}\otimes\overline{x_{3}})= \overline{[x_{1},x_{2}]}\otimes\overline{x_{3}}+\overline{[x_{2},x_{3}]} \otimes\overline{x_{1}}+\overline{[x_{3},x_{1}]}\otimes\overline{x_{2}}.\]
_For \(3\leq i\leq c\),_
\[\Psi_{i}(\overline{x_{1}}\otimes\overline{x_{2}}\otimes\cdots \otimes\overline{x_{i+1}}) = \overline{[x_{1},x_{2},\cdots,x_{i}]_{l}}\otimes\overline{x_{i+1} }+\overline{[x_{i+1},[x_{1},x_{2},\cdots x_{i-1}]_{l}]}\otimes\overline{x_{i}}\] \[+\overline{[[x_{i},x_{i+1}]_{r},[x_{1},\cdots,x_{i-2}]_{l}]} \otimes\overline{x_{i-1}}\] \[+\overline{[[x_{i-1},x_{i},x_{i+1}]_{r},[x_{1},x_{2},\cdots,x_{i- 3}]_{l}]}\otimes\overline{x_{i-2}}\] \[+\cdots+\overline{[x_{2},\cdots,x_{i+1}]_{r}}\otimes\overline{x_{ 1}}\]
_where_
\[[x_{1},x_{2},\cdots x_{i}]_{r}=[x_{1},[\cdots[x_{i-2},[x_{i-1},x_{i}]]\ldots]\]
_and_
\[[x_{1},x_{2},\cdots x_{i}]_{l}=[\ldots[[x_{1},x_{2}],x_{3}],\cdots,x_{i}].\]
We are now ready to prove Theorem 1.1.
Proof of Theorem 1.1.: Let \(\Psi_{i}\) be the map defined above and \(d(G/Z(G))=\delta\). Following Proposition 2.1 we have that
\[|M(G)||\gamma_{2}(G)||\underset{i=2}{\overset{c}{\prod}}|\mathrm{Im}\,\Psi_{i}| \leq|M(G^{ab})|p^{k\delta}.\]
Applying [17, Lemma 2.1] this gives
\[|M(G)|\underset{i=2}{\overset{c}{\prod}}|\mathrm{Im}\,\Psi_{i}|\leq p^{\frac{ 1}{2}(d-1)(n-k)+k(\delta-1)},\]
so that
\[|M(G)|\underset{i=2}{\overset{c}{\prod}}|\mathrm{Im}\,\Psi_{i}|\leq p^{\frac{ 1}{2}(d-1)(n+k)-k(d-\delta)}. \tag{2.1}\]
Choose a subset \(S=\{x_{1},x_{2},\ldots,x_{\delta}\}\) of \(G\) such that \(\{\overline{x_{1}},\overline{x_{2}},\cdots,\overline{x_{\delta}}\}\) is a minimal generating set for \(G/Z(G)\). Fix \(i\leq\min(\delta,c)\). Since \(i\leq c\), \(\gamma_{i}(G)/\gamma_{i+1}(G)\) is a non-trivial group. Using [12, Lemma 3.6 (c)] we can choose a commutator \([y_{1},y_{2},\cdots,y_{i}]\) of weight \(i\) such that \([y_{1},y_{2},\cdots,y_{i}]\notin\gamma_{i+1}(G)\) and \(y_{1},\ldots,y_{i}\in S\). Since \(i\leq\delta\), \(S\backslash\{y_{1},y_{2},\cdots,y_{i}\}\) contains at least \(\delta-i\) elements. Choose any \(\delta-i\) elements \(z_{1},z_{2},\ldots,z_{\delta-i}\) from \(S\backslash\{y_{1},y_{2},\cdots,y_{i}\}\). Since \([y_{1},y_{2},\cdots,y_{i}]\notin\gamma_{i+1}(G)\) and \(z_{j}\notin\{y_{1},y_{2},\cdots,y_{i}\}\), \(\Psi_{i}(\overline{y_{1}},\ldots,\overline{y_{i}},\overline{z_{j}})\neq 1\). Notice that the set \(\{\Psi_{i}(\overline{y_{1}},\ldots,\overline{y_{i}},\overline{z_{j}})\ \mid\ 1\leq j\leq\delta-i\}\) is a minimal generating set for \(\langle\{\Psi_{i}(\overline{y_{1}},\ldots,\overline{y_{i}},\overline{z_{j}})\mid\ 1\leq j\leq\delta-i\}\rangle\), because \(\{\overline{x_{1}},\overline{x_{2}},\cdots,\overline{x_{\delta}}\}\) is a minimal generating set for \(G/Z(G)\). It follows that \(|\mathrm{Im}\Psi_{i}|\geq p^{\delta-i}\). Putting this in Equation 2.1 we get the required result.
We now proceed to prove Theorem 1.4. The following proposition is the main ingredient of the proof.
**Proposition 2.2**.: _Let \(G\) be a \(p\)-group of nilpotency class 2 and \(\Psi_{2}\) be the homomorphism given in the Proposition 2.1. Suppose \(d\Big{(}\frac{G}{Z(G)}\Big{)}=\delta\) and \(d(\gamma_{2}(G))=\gamma\). Then,_
\[|\mathrm{Im}(\Psi_{2})|\geq p^{\sum\limits_{i=2}^{\min(\delta,\gamma+1)}(\delta-i)}.\]
Proof.: Choose \(x_{1},x_{2},\ldots,x_{\delta}\in G\) such that
\[\frac{G}{\Phi(G)Z(G)}=\langle\overline{x_{1}}\rangle\times\langle\overline{x_ {2}}\rangle\times\cdots\times\langle\overline{x_{\delta}}\rangle.\]
Let \(U\) be the set \(\{x_{1},x_{2},\ldots,x_{\delta}\}.\) We now choose a minimal generating set for \(\frac{\gamma_{2}(G)}{\gamma_{2}(G)^{p}}\) in the following manner: Since the set \(T=\{[x_{i},x_{j}]\mid 1\leq i<j\leq\delta\}\) generates \(\gamma_{2}(G)\), we choose an element from this set, say \([x_{i_{1}^{1}},x_{i_{2}^{1}}]\) such that \(\overline{[x_{i_{1}^{1}},x_{i_{2}^{1}}]}\neq 0\in\frac{\gamma_{2}(G)}{\gamma_{2}(G)^{p}}\). Define
\[U_{1}=\{x_{i_{1}^{1}},x_{i_{2}^{1}}\},\]
and
\[V_{1}=\Big{\langle}\overline{[x_{i_{1}^{1}},x_{i_{2}^{1}}]}\Big{\rangle}\leq \frac{\gamma_{2}(G)}{\gamma_{2}(G)^{p}}.\]
Suppose \(U_{j}\) and \(V_{j}\leq\frac{\gamma_{2}(G)}{\gamma_{2}(G)^{p}}\) have been defined. To define \(U_{j+1}\) and \(V_{j+1}\), we check if there exists any element \([y,z]\in T\), \(y,z\in U\) such that \(U_{j}\cap\{y,z\}\neq\phi\) and \(V_{j}<\left<V_{j},\overline{[y,z]}\right>\). If such an element exists in \(T\), say \([x_{i_{1}^{j+1}},x_{i_{2}^{j+1}}]\), then we define
\[U_{j+1}=U_{j}\cup\{x_{i_{1}^{j+1}},x_{i_{2}^{j+1}}\},\]
and
\[V_{j+1}=\left<V_{j},\overline{[x_{i_{1}^{j+1}},x_{i_{2}^{j+1}}]}\right>\leq \frac{\gamma_{2}(G)}{\gamma_{2}(G)^{p}}.\]
Otherwise we choose any element \([x_{i_{1}^{j+1}},x_{i_{2}^{j+1}}]\in T\), \(x_{i_{1}^{j+1}},x_{i_{2}^{j+1}}\in U\) such that
\[V_{j}<\left<V_{j},\overline{[x_{i_{1}^{j+1}},x_{i_{2}^{j+1}}]}\right>,\]
and define
\[U_{j+1}=\{x_{i_{1}^{j+1}},x_{i_{2}^{j+1}}\},\]
and
\[V_{j+1}=\left<V_{j},\overline{[x_{i_{1}^{j+1}},x_{i_{2}^{j+1}}]}\right>.\]
Clearly
\[V_{\gamma}=\left<\overline{[x_{i_{1}^{1}},x_{i_{2}^{1}}]},\ldots,\overline{[x_{i_{1}^{\gamma}},x_{i_{2}^{\gamma}}]}\right>=\frac{\gamma_{2}(G)}{\gamma_{2}(G)^{p}}.\]
Now suppose that for \(j=k_{1},k_{2},\ldots,k_{t}\), \(U_{j}\cap U_{j+1}=\phi\) and also that these are the only such numbers. Denote the element
\[\bar{x}\otimes\overline{[y,z]}+\bar{y}\otimes\overline{[z,x]}+\bar{z}\otimes \overline{[x,y]}\in\frac{G}{\Phi(G)Z(G)}\otimes\frac{\gamma_{2}(G)}{\gamma_{2} (G)^{p}}\]
by \((x,y,z)\), and define
\[W_{j}=\{(x,x_{i_{1}^{j}},x_{i_{2}^{j}})\mid x\in U\backslash U_{j}\}\]
for \(j=1,\ldots,\gamma\).
We claim that \(W_{1}\cup W_{2}\cup\ldots\cup W_{\gamma}\) minimally generates \(\left<W_{1}\cup W_{2}\cup\ldots\cup W_{\gamma}\right>\). To see this, note that \(\frac{G}{\Phi(G)Z(G)}\) and \(\frac{\gamma_{2}(G)}{\gamma_{2}(G)^{p}}\) are elementary abelian \(p\)-groups, and therefore can be considered as vector spaces over the field \(\mathbb{Z}/p\mathbb{Z}\) with bases \(\{\overline{x_{1}},\ldots,\overline{x_{\delta}}\}\) and \(\{\overline{[x_{i_{1}^{1}},x_{i_{2}^{1}}]},\ldots,\overline{[x_{i_{1}^{\gamma}},x_{i_{2}^{\gamma}}]}\}\) respectively. It follows that the set \(\{\overline{x_{i}}\otimes\overline{[x_{i_{1}^{j}},x_{i_{2}^{j}}]}\mid 1\leq i\leq\delta,1\leq j\leq\gamma\}\) forms a basis for the tensor product \(\frac{G}{\Phi(G)Z(G)}\otimes\frac{\gamma_{2}(G)}{\gamma_{2}(G)^{p}}\). Now take an element \((x,x_{i_{1}^{1}},x_{i_{2}^{1}})\in W_{1}\). The presence of the term \(\overline{x}\otimes\overline{[x_{i_{1}^{1}},x_{i_{2}^{1}}]}\) in the expression \((x,x_{i_{1}^{1}},x_{i_{2}^{1}})\) ensures that \((x,x_{i_{1}^{1}},x_{i_{2}^{1}})\notin\left<W_{1}\backslash(x,x_{i_{1}^{1}},x_{i_{2}^{1}})\right>\). This shows that \(W_{1}\) minimally generates \(\left<W_{1}\right>\). If \(k_{1}>1\), suppose for \(j\leq k_{1}-1\), \(W_{1}\cup W_{2}\ldots\cup W_{j}\) minimally generates \(\left<W_{1}\cup W_{2}\ldots\cup W_{j}\right>\). Take an element \((x,x_{i_{1}^{j+1}},x_{i_{2}^{j+1}})\in W_{j+1}\). By definition \(x\in U\backslash U_{j+1}\). Therefore if
\[(x,x_{i_{1}^{j+1}},x_{i_{2}^{j+1}})\in\left<W_{1},W_{2},\ldots,W_{j},W_{j+1} \backslash(x,x_{i_{1}^{j+1}},x_{i_{2}^{j+1}})\right>,\]
then
\[\overline{x}\otimes\overline{[x_{i_{1}^{j+1}},x_{i_{2}^{j+1}}]}=\overline{x} \otimes\overline{[x_{i_{1}^{1}},x_{i_{2}^{1}}]^{\alpha_{1}}\cdots[x_{i_{1}^{j} },x_{i_{2}^{j}}]^{\alpha_{j}}}.\]
Since \(\overline{x}\neq 0\in\frac{G}{\Phi(G)Z(G)}\), we get that
\[\overline{[x_{i_{1}^{j+1}},x_{i_{2}^{j+1}}]}=\overline{[x_{i_{1}^{1}},x_{i_{2 }^{1}}]^{\alpha_{1}}\cdots[x_{i_{1}^{j}},x_{i_{2}^{j}}]^{\alpha_{j}}}.\]
But this is not possible because \(V_{j}<\left<V_{j},\overline{[x_{i_{1}^{j+1}},x_{i_{2}^{j+1}}]}\right>\). This shows that \(W_{1}\cup W_{2}\cup\ldots\cup W_{k_{1}}\) minimally generates \(\left<W_{1}\cup W_{2}\cup\ldots\cup W_{k_{1}}\right>\).
Now if \(\gamma>1\), suppose that \(k_{t+1}=\gamma\), and also that \(W_{1}\cup W_{2}\cup\ldots\cup W_{k_{j}}\) for \(1\leq j\leq t\) minimally generates \(\left<W_{1}\cup W_{2}\cup\ldots\cup W_{k_{j}}\right>\). Let \((x,x_{i_{1}^{k_{j}+1}},x_{i_{2}^{k_{j}+1}})\in W_{k_{j}+1}\). By the definition of \(W_{k_{j}+1}\), we have \(x\in U\backslash U_{k_{j}+1}\) (\(i.e.\)\(U\backslash\{x_{i_{1}^{k_{j}+1}},x_{i_{2}^{k_{j}+1}}\}\)). First assume that \(x\in U_{k_{1}}\cup\cdots\cup U_{k_{j}}\) and let \(l_{1},l_{2},\ldots,l_{r}\) be all those numbers less than \(k_{j}+1\) such that \(x=x_{i_{s_{1}}^{l_{1}}}=x_{i_{s_{2}}^{l_{2}}}=\cdots=x_{i_{s_{r}}^{l_{r}}}\), where the \(s_{i}\)'s are either \(1\) or \(2\). Without loss of generality, we can assume that \(s_{i}=1\) for \(i=1,2,\ldots,r\). Now if
\[(x,x_{i_{1}^{k_{j}+1}},x_{i_{2}^{k_{j}+1}})\in\left<W_{1},W_{2},\ldots W_{k_{j }+1}\backslash(x,x_{i_{1}^{k_{j}+1}},x_{i_{2}^{k_{j}+1}})\right>,\]
then
\[\overline{x}\otimes\overline{[x_{i_{1}^{k_{j}+1}},x_{i_{2}^{k_{j }+1}}]} = \overline{x}\otimes\overline{[x_{i_{1}^{1}},x_{i_{2}^{k_{j}}}]^{ \alpha_{1}}\cdots[x_{i_{1}^{k_{j}}},x_{i_{2}^{k_{j}}}]^{\alpha_{k_{j}}}[x_{i_{ 1}^{l_{2}}},y_{1}]^{\beta_{1}}\cdots[x_{i_{2}^{l_{r}}},y_{r}]^{\beta_{r}}}\]
for some \(y_{1},y_{2},\ldots,y_{r}\in G\) and for some \(\alpha_{i},\beta_{k}\in\mathbb{Z}\). Since \(\overline{x}\neq 0\in\frac{G}{\Phi(G)Z(G)}\) we have
\[\overline{[x_{i_{1}^{k_{j}+1}},x_{i_{2}^{k_{j}+1}}]} = \overline{[x_{i_{1}^{1}},x_{i_{2}^{1}}]^{\alpha_{1}}\cdots[x_{i_ {1}^{k_{j}}},x_{i_{2}^{k_{j}}}]^{\alpha_{k_{j}}}[x_{i_{2}^{l_{1}}},y_{1}]^{ \beta_{1}}[x_{i_{2}^{l_{2}}},y_{2}]^{\beta_{2}}\cdots[x_{i_{2}^{l_{r}}},y_{r} ]^{\beta_{r}}}.\]
But then, for some \(y\in U\) and for some \(i\),
\[V_{k_{j}}<\left<V_{k_{j}},\overline{[x_{i_{2}^{l_{i}}},y]}\right>.\]
This contradicts the way we have chosen the basis for \(\frac{\gamma_{2}(G)}{\gamma_{2}(G)^{p}}\). Therefore, we can now assume that \(x\in U\backslash(U_{k_{1}}\cup\cdots\cup U_{k_{j}}\cup U_{k_{j}+1})\). Next, if
\[(x,x_{i_{1}^{k_{j}+1}},x_{i_{2}^{k_{j}+1}})\in\left<W_{1},W_{2}, \ldots W_{k_{j}+1}\backslash(x,x_{i_{1}^{k_{j}+1}},x_{i_{2}^{k_{j}+1}})\right>,\]
we get that
\[\overline{x}\otimes\overline{[x_{i_{1}^{k_{j}+1}},x_{i_{2}^{k_{j }+1}}]} = \overline{x}\otimes\overline{[x_{i_{1}^{1}},x_{i_{2}^{1}}]^{ \alpha_{1}}\cdots[x_{i_{1}^{k_{j}}},x_{i_{2}^{k_{j}}}]^{\alpha_{k_{j}}}}\]
for some \(\alpha_{i}\) (\(1\leq i\leq k_{j}\)) \(\in\mathbb{Z}\), which is again not possible. This shows that \(W_{1}\cup W_{2}\cup\ldots\cup W_{k_{j}+1}\) minimally generates \(\left<W_{1}\cup W_{2}\cup\ldots\cup W_{k_{j}+1}\right>\). If \(k_{j+1}>k_{j}+1\), suppose that \(W_{1}\cup W_{2}\cup\ldots\cup W_{k_{j}+i}\) for \(i\leq k_{j+1}-k_{j}-1\) minimally generates \(\left<W_{1}\cup W_{2}\cup\ldots\cup W_{k_{j}+i}\right>\). With the same idea applied in the above argument, it can be shown that \(W_{1}\cup W_{2}\cup\ldots\cup W_{k_{j}+i+1}\) minimally generates \(\left<W_{1}\cup W_{2}\cup\ldots\cup W_{k_{j}+i+1}\right>\). This shows that \(W_{1}\cup W_{2}\cup\ldots\cup W_{k_{j+1}}\) minimally generates \(\left<W_{1}\cup W_{2}\cup\ldots\cup W_{k_{j+1}}\right>\). Therefore, we can now conclude that \(W_{1}\cup W_{2}\cup\ldots\cup W_{\gamma}\) minimally generates \(\left<W_{1}\cup W_{2}\cup\ldots\cup W_{\gamma}\right>\) and hence \(W_{1}\cup W_{2}\cup\ldots\cup W_{\gamma}\) is a linearly independent set.
Now, since \(|U_{1}|=|U_{k_{i}+1}|=2\) for \(1\leq i\leq t\), it follows, by the definition of \(W_{1}\) and \(W_{k_{i}+1}\), that \(|W_{1}|=|W_{k_{i}+1}|=\delta-2\). For \(1\leq j\leq k_{i}-1\), note that, if \(|W_{j}|\geq m\) then \(|W_{j+1}|\geq m-1\), because \(U_{j}\cap\{x_{i_{1}^{j+1}},x_{i_{2}^{j+1}}\}\neq\phi\). Therefore it easily follows that
\[|W_{1}\cup W_{2}\cup\ldots\cup W_{\gamma}|\geq\sum_{i=2}^{\min(\delta,\gamma+1)}(\delta-i).\]
Now, by the universal property of tensor products there exists a homomorphism \(\eta\), such that the following diagram commutes.
\[\begin{CD}\frac{G}{\gamma_{2}(G)Z(G)}\times\gamma_{2}(G)@>{\phi}>{}>\frac{G}{ \gamma_{2}(G)Z(G)}\otimes\gamma_{2}(G)\\ @V{\mathcal{P}}V{}V@V{\eta}V{}V\\ \frac{G}{\Phi(G)Z(G)}\times\frac{\gamma_{2}(G)}{\gamma_{2}(G)^{p}}@>{\theta}>{ }>\frac{G}{\Phi(G)Z(G)}\otimes\frac{\gamma_{2}(G)}{\gamma_{2}(G)^{p}},\end{CD}\]
where
\[\mathcal{P}(\overline{x},y)=(\overline{x},\overline{y})\ \ \text{for}\ x\in G,y\in\gamma_{2}(G),\]
\[\phi(\overline{x},y)=\overline{x}\otimes y,\ \ \ \text{and}\ \ \ \theta( \overline{x},\overline{y})=\overline{x}\otimes\overline{y}.\]
Therefore, we have
\[\eta(\overline{x}\otimes y)=\overline{x}\otimes\overline{y}.\]
Since the preimage of \(W_{1}\cup W_{2}\cup\ldots\cup W_{\gamma}\) under \(\eta\) is isomorphic to a subgroup of \(\operatorname{Im}(\Psi_{2})\) we get that
\[|\operatorname{Im}(\Psi_{2})|\geq p^{\sum\limits_{i=2}^{\min(\delta,\gamma+1)}(\delta-i)}.\]
This completes the proof.
We are now ready to prove Theorem 1.4.
Proof of Theorem 1.4.: Let \(\Psi_{2}\) be the homomorphism given in Proposition 2.1 and \(\overline{\Psi}_{2}\) be the similarly defined homomorphism associated with the group \(G/\gamma_{3}(G)\). Also, let \(d(G/Z(G))=\delta\). Since \(G/\gamma_{3}(G)\) is a group of nilpotency class 2, we get from Proposition 2.2 that
\[|\operatorname{Im}(\overline{\Psi}_{2})|\geq p^{\sum\limits_{i=2}^{\min(\delta,\gamma+1)}(\delta-i)}.\]
It is easy to see that \(|\operatorname{Im}(\Psi_{2})|\geq|\operatorname{Im}(\overline{\Psi}_{2})|\). Theorem 1.4 now follows from Equation 2.1.
We now proceed towards the proof of Corollary 1.5. The proof makes use of the following two results.
**Proposition 2.3**.: _Let \(G\) be a \(p\)-group (\(p\) odd) of nilpotency class 2 with \(G/\gamma_{2}(G)\) elementary abelian, \(d(G)=d\) and \(|G^{p}|=p^{t}\). Let \(V\) be the subgroup of \(\gamma_{2}(G)\otimes G/\gamma_{2}(G)\) generated by all elements of the form \(x^{p}\otimes x\gamma_{2}(G)\) for \(x\in G\). Then \(|V|=p^{\frac{1}{2}t(2d-t+1)}\)._
The proof of the proposition follows exactly along the same lines as [18, Proposition 3.3].
**Theorem 2.4**.: _[_11_, Theorem 2.5.6]_ _Let \(Z\) be a central subgroup of a finite group \(G\). Then there exists the following exact sequence_
\[Z\otimes\frac{G}{\gamma_{2}(G)}\mapsto M(G)\mapsto M(G/Z)\mapsto\gamma_{2}(G )\cap Z\mapsto 1,\]
_where the map \(\alpha:Z\otimes\frac{G}{\gamma_{2}(G)}\mapsto M(G)\) is defined as follows: Let \(G\) be given by \(F/R\) for some free group \(F\) and its normal subgroup \(R\), and \(Z\) be identified as \(T/R\). After
identifying \(Z\otimes G\) and \(M(G)\) as \(T/R\otimes F/\gamma_{2}(F)R\) and \(\gamma_{2}(F)\cap R/[F,R]\) respectively, \(\alpha\) is defined by_
\[\alpha(xR\otimes yR\gamma_{2}(F))=[x,y][F,R].\]
We are now ready to prove Corollary 1.5.
Proof of Corollary 1.5.: For all finite \(p\)-groups, the lower bound is a well known fact [9, Corollary 3.2]. The upper bound is a direct consequence of Theorem 1.4 and the fact that for the group \(G\), \(\gamma=k\) and \(n=d+k\).
Assume next that \(G^{p}\cong\mathbb{Z}_{p}\). Since \(\gamma_{2}(K)\leq K^{2}\) for any finite group \(K\), for \(p=2\) it follows that \(G\) is either a quaternion or a dihedral group of order \(8\). Therefore \(M(G)\) is either trivial or of order \(2\). Hence we can assume now that \(p\) is an odd prime. Consider the exact sequence from Theorem 2.4 for \(Z=G^{p}\). We will show that the map \(\alpha:G^{p}\otimes\frac{G}{\gamma_{2}(G)}\mapsto M(G)\) is the trivial map in this case. To see this, let \(G^{p}=\langle g^{p}\rangle\) for some \(g\in G\). By the definition of \(\alpha\) note that \(x^{p}\otimes x\gamma_{2}(G)\in\ker\;\alpha\) for all \(x\in G\). Therefore \((gy)^{p}\otimes gy\gamma_{2}(G)\in\ker\;\alpha\) for all \(y\in G\). The bilinearity of the tensor product \(\otimes\) implies that
\[g^{p}\otimes y\gamma_{2}(G)+y^{p}\otimes g\gamma_{2}(G)\in\ker\;\alpha.\]
But \(y^{p}=(g^{p})^{m}\) for some natural number \(m\). Hence \(g^{p}\otimes y\gamma_{2}(G)\in\ker\;\alpha\) for all \(y\in G\). As a result, \(\alpha\) is the trivial map. It follows, from the exact sequence in 2.4, that
\[|M(G)|=\frac{|M(G/G^{p})|}{|G^{p}|}.\]
Now apply the general bound obtained earlier in the corollary for the group \(G/G^{p}\) to get the required bound in this case.
Next suppose that \(p\) is an odd prime and \(G^{p}=\gamma_{2}(G)\). Applying the exact sequence in Theorem 2.4 again for \(Z=G^{p}=\gamma_{2}(G)\), we get
\[|M(G)|\leq\frac{|M(G/\gamma_{2}(G))|}{|\gamma_{2}(G)|}\frac{|G^{p}\otimes G/\gamma_{2}(G)|}{|\ker\;\alpha|}. \tag{2.2}\]
Again by the definition of \(\alpha\) it is evident that \(x^{p}\otimes\overline{x}\in\ker\;\alpha\) for all \(x\in G.\) Therefore from Proposition 2.3
\[|\ker\;\alpha|\geq p^{\frac{1}{2}k(2d-k+1)}.\]
Now putting \(|M(G/\gamma_{2}(G))|=p^{\frac{1}{2}d(d-1)}\), \(|G^{p}\otimes G/\gamma_{2}(G)|=p^{dk}\) and the lower bound for \(|\ker\;\alpha|\) in the Inequality 2.2 yields the desired result.
We now prove Corollary 1.6 which improves on Corollary 1.5 in the case \(G\) is a special \(p\)-group with \(|G|\geq p^{13}\) and \(|Z(G)|=p^{3}.\) The proof uses the well known connection between the capability of groups and the Schur multiplier.
Proof of Corollary 1.6.: Let \(Z^{*}(G)\) be the smallest central subgroup of \(G\) such that \(G/Z^{*}(G)\) is capable. Since \(|G|\geq p^{13}\), by [13, Theorem 5.7], \(G\) is not capable. Therefore \(Z^{*}(G)\) is non-trivial. Let \(Z\) be a subgroup of \(Z^{*}(G)\) of order \(p\). Since \(Z\leq Z(G)=\gamma_{2}(G)\), by [11, Theorem 2.5.10] we have
\[|M(G)|=|M(G/Z)|/|Z|.\]
Note that \(G/Z\) is a nilpotent group of class 2 such that \(G^{p}\leq\gamma_{2}(G)\) and \((G/Z)^{p}=\gamma_{2}(G/Z)\). This enables us to apply Corollary 1.5 for the group \(G/Z\) to obtain the desired result.
We now prove Theorem 1.7 which gives bounds on the Schur multiplier of \(p\)-groups of maximal class. These groups are an important class of finite \(p\)-groups and were first studied by Blackburn [2]. The following proof uses the well-known facts discovered by him.
Proof of Theorem 1.7.: Let \(P_{1}=C_{G}(\gamma_{2}(G)/\gamma_{4}(G))\). Choose arbitrary elements \(s\in G\backslash(P_{1}\cup C_{G}(\gamma_{n-2}(G)))\) and \(s_{1}\in P_{1}\backslash\gamma_{2}(G)\). Then \(s\) and \(s_{1}\) generate \(G\). If we define \(s_{i}=[s_{i-1},s]\) for \(i\geq 2\), then \(s_{i}\in\gamma_{i}(G)\backslash\gamma_{i+1}(G)\). Let \(\Psi_{i},i\geq 3\), be the map defined in Proposition 2.1. Then
\[\Psi_{i}(\overline{s_{1}}\otimes\overline{s}\otimes\overline{s} \otimes\cdots\otimes\overline{s}\otimes\overline{s_{1}}) = \overline{[s_{1},s,s,\cdots,s]_{l}[s,s,\cdots,s,s_{1}]_{r}} \otimes\overline{s_{1}}+\overline{t}\otimes\overline{s}\]
for some \(t\in G\).
If \(i\) is an odd integer, notice that
\[\overline{[s_{1},s,s,\cdots,s]_{l}}=\overline{[s,s,\cdots,s,s_{1}]_{r}}.\]
Since \(p\neq 2\), it follows that \(\Psi_{i}(\overline{s_{1}}\otimes\overline{s}\otimes\overline{s}\otimes \cdots\otimes\overline{s}\otimes\overline{s_{1}})\) is a non-identity element so that \(\mathrm{Im}\Psi_{i}\) is non-trivial. Using this fact, Equation 2.1 yields the desired result.
|
2302.10100 | Electromagnetic instability of compact axion stars | If the dark matter is composed of axions, then axion stars are expected to be
abundant in the Universe. We demonstrate in fully non-linear (3+1) numerical
relativity the instability of compact axion stars due to the electromagnetic
Chern-Simons term. We show that above the critical coupling constant
$g_{a\gamma}^\mathrm{crit} \propto M_s^{-1.35}$, compact axion stars of mass
$M_s$ are unstable. The instability is caused by parametric resonance between
the axion and the electromagnetic field. The existence of stable compact axion
stars requires approximately Planck-suppressed couplings to photons. If the
coupling exceeds the critical value, then all stable axion stars are
necessarily non-compact. Unstable axion stars decay leaving behind a less
massive, less compact, remnant. The emitted radiation peaks at frequency
$\omega \sim 1/R_s$, where $R_s$ is the axion star radius. | Liina M. Chung-Jukko, Eugene A. Lim, David J. E. Marsh, Josu C. Aurrekoetxea, Eloy de Jong, Bo-Xuan Ge | 2023-02-20T17:01:41Z | http://arxiv.org/abs/2302.10100v1 | # Electromagnetic instability of compact axion stars
###### Abstract
If the dark matter is composed of axions, then axion stars are expected to be abundant in the Universe. We demonstrate in fully non-linear (3+1) numerical relativity the instability of compact axion stars due to the electromagnetic Chern-Simons term. We show that above the critical coupling constant \(g_{a\gamma}^{\rm crit}\propto M_{s}^{-1.35}\), compact axion stars of mass \(M_{s}\) are unstable. The instability is caused by parametric resonance between the axion and the electromagnetic field. The existence of stable compact axion stars requires approximately Planck-suppressed couplings to photons. If the coupling exceeds the critical value, then all stable axion stars are necessarily non-compact. Unstable axion stars decay leaving behind a less massive, less compact, remnant. The emitted radiation peaks at frequency \(\omega\sim 1/R_{s}\), where \(R_{s}\) is the axion star radius.
_Introduction:_ If dark matter (DM) is composed of axions or axion-like particles (henceforth, axions) [1], then DM halos are predicted to host an abundance of so-called _axion stars_ (see e.g. Refs. [2; 3; 4; 5; 6] for formation mechanism, and Ref. [7] for the abundance and merger rates). Axion stars are self-gravitating, time periodic, finite mass solutions of the Klein-Gordon-Einstein equations, which fall under the class of solitonic objects known as oscillatons [8; 9]. A defining property of axions is that they are real pseudo-scalars, and necessarily couple to gauge fields via the Chern-Simons term. In the case of electromagnetism, this leads to a coupling between the axion and two photons specified by a coupling constant \(g_{a\gamma}\) with mass dimension -1. In terms of classical fields, the axion couples to \(\vec{E}\cdot\vec{B}\).
It is known that this coupling can lead to an instability of the axion fields [10; 11; 12]. In particular, within the context of axion stars, this non-linearity is destabilising, as was demonstrated in the weak field perturbative regime in Ref. [13], and first suggested in Ref. [14]. In the strong field regime, it was also recently shown that complex scalar boson stars with a coupling to the Chern-Simons term can also become unstable [15].
In this paper, we investigate the stability of compact, relativistic axion stars in the presence of a weak propagating electromagnetic (EM) wave modelling a bath of ambient photons. We use the 3+1 numerical relativity code GRChombo [16; 17; 18]. We find that, as long as (i) the EM wavelength is approximately the size of the axion star and (ii) the coupling exceeds a critical coupling \(g_{a\gamma}^{\rm crit}\propto M_{s}^{-1.35}\) where \(M_{s}\) is the axion star mass for fixed axion mass \(m\), the star will experience an instability, losing mass via potentially detectable EM emissions.
To be specific, we find the following:
* The instability is induced by _parametric resonance_, with an instability band of width \(\Delta\omega\sim R_{s}^{-1}\), where \(R_{s}\) is the size of the axion star, centered around \(\omega\sim R_{s}^{-1}\). EM energy is generated exponentially.
* The critical threshold for the coupling is \[g_{a\gamma}^{\rm crit}\approx\frac{1.66\times 10^{-17}}{\rm GeV}\left[\left( \frac{M_{s}}{M_{\odot}}\right)\left(\frac{m}{10^{-11}{\rm eV}}\right)\right]^{ -1.35}\] where we have scaled our results to \(m=10^{-11}\)eV corresponding to \({\cal O}(M_{\odot})\) compact axion stars [19].
* The timescale of the instability is a power law \[\tau\propto(g_{a\gamma}-g_{a\gamma}^{\rm crit})^{-0.87}\] and independent of the initial EM seed amplitude.
* Since the instability is exponential, the time needed to trigger it depends on the initial EM seed energy only logarithmically, \(t_{0}\sim\ln E_{0}\) at best.
The presence of this instability forbids axion stars from existing above the critical line \(g_{a\gamma}^{\rm crit}(M_{s})\) in the \((M_{s},g_{a\gamma})\) plane, as shown in Fig. 1. Compact axion stars have \(M_{s}\sim m_{\rm Pl}^{2}/m\), and our results imply that stable compact axion stars can exist only if the axion-photon coupling is approximately Planck suppressed (a similar conclusion applies to the axion quartic self-coupling as was shown in Ref. [20]).
_Theory:_ The electromagnetic field strength tensor and its dual are
\[F_{\mu\nu}=\partial_{\mu}A_{\nu}-\partial_{\nu}A_{\mu},\quad\widetilde{F}^{ \mu\nu}=\frac{1}{2\sqrt{-g}}\varepsilon^{\mu\nu\rho\sigma}F_{\rho\sigma}, \tag{1}\]
with \(\varepsilon^{\mu\nu\rho\sigma}\) being the totally antisymmetric Levi-Civita symbol with \(\varepsilon^{0123}=+1\). We write the total action1 as
Footnote 1: Our metric signature is \(-+++\), and \(\hbar=c=1\).
\[\begin{split} S=\int d^{4}x\sqrt{-g}\left[\frac{m_{\rm Pl}^{2}}{16\pi}R-\frac{1}{2}\partial_{\mu}\phi\partial^{\mu}\phi-\frac{1}{2}m^{2}\phi^{2}\right.\\ \left.-\frac{1}{4}F_{\mu\nu}F^{\mu\nu}-\frac{g_{a\gamma}}{4}\phi F_{\mu\nu}\tilde{F}^{\mu\nu}\right],\end{split} \tag{2}\]
where \(\phi\) is the axion field, and \(R\) is the Ricci scalar. The last term in this action is the Chern-Simons term, which acts as a boundary term and hence does not contribute to the stress-tensor. The stress-energy tensor is derived from Eq. (2) to find Einstein's equations \(G_{\mu\nu}=8\pi m_{\rm Pl}^{-2}T_{\mu\nu}\) (see e.g. Ref. [21]).
The equations of motion in the matter sector are
\[\nabla^{\mu}\nabla_{\mu}\phi-m^{2}\phi =\frac{g_{a\gamma}}{4}F_{\mu\nu}\tilde{F}^{\mu\nu}, \tag{3}\] \[\nabla_{\mu}F^{\mu\nu} =-g_{a\gamma}J^{\nu}, \tag{4}\]
where the current \(J^{\nu}\) is defined as \(J^{\nu}=\partial_{\mu}\phi\tilde{F}^{\mu\nu}\). The parametric resonance is driven by the EM sector, Eq. (4), as long as the photon frequency is within the resonance band of the axion field. Since the axion oscillates at \(\omega\sim m\sim R_{s}^{-1}\), resonance will commence if the photon wavelength is \(\mathcal{O}(R_{s})\).
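To build intuition for the resonance mechanism, the sketch below (our addition, not the simulated system) integrates the mode equation for a single circular photon polarization in a _homogeneous_ oscillating axion background, \(\ddot{A}+(k^{2}-g_{a\gamma}k\,\dot{\phi})A=0\), a Mathieu-type equation; the parameter values are arbitrary toy-model choices placed inside the lowest \(k\sim m/2\) resonance band.

```python
# Toy Mathieu-type growth of one photon mode in an oscillating axion background.
import numpy as np

m, phi0, g = 1.0, 0.1, 4.0      # axion mass, amplitude, coupling (toy units)
k = 0.5 * m                     # centre of the lowest parametric resonance band
dt, steps = 1.0e-3, 200_000

A, dA = 1.0e-6, 0.0             # small seed amplitude, zero initial velocity
for n in range(steps):
    t = n * dt
    dphi = -m * phi0 * np.sin(m * t)    # axion velocity, phi = phi0 cos(m t)
    d2A = -(k * k - g * k * dphi) * A   # mode equation for one helicity
    dA += d2A * dt                      # semi-implicit Euler update
    A += dA * dt
print(abs(A))   # grows exponentially inside the band, stays bounded outside
```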
We solve the full system with numerical relativity using GRChombo[16; 17; 18] following the methodology in Ref. [22; 23; 24; 25; 26]. For a summary, please see appendix A. We construct initial conditions for compact axion stars with ADM masses \(0.41m_{\rm Pl}^{2}/m\leq M_{s}\leq 0.60m_{\rm Pl}^{2}/m\), which corresponds to
\[5.4M_{\odot}\left(\frac{10^{-11}{\rm eV}}{m}\right)\leq M_{s}\leq 8.1M_{ \odot}\left(\frac{10^{-11}{\rm eV}}{m}\right)\, \tag{5}\]
following the method used in Ref. [27; 28; 29; 30; 8; 19]. These masses are near the Kaup [31] limit for black hole formation.
For the EM field initial conditions, we approximate the initial spacetime as Minkowski, since we are interested in the case where the EM field is subdominant to the energy density of the axion star. This approximation decouples the oscillaton and EM initial conditions from each other, with minimal violations to the initial constraint equations. We choose the components of our gauge field, \(A_{\mu}=C_{\mu}e^{i(-k^{(x)}z+\omega^{(x)}t)}\), to describe a single plane wave polarised in the x-direction, with wavevector \(k_{\mu}^{(x)}=(\omega^{(x)},0,0,-k^{(x)})\) such that \(\omega^{(x)}=k^{(x)}\) initially. We identify \(A_{0}\) and \(A_{z}\) with the gauge mode, and set \(C_{0}=C_{z}=C_{y}=0\) at the initial time. This ansatz satisfies both the Lorenz gauge \(k_{\mu}A^{\mu}=0\), and the Bianchi identities, which set the dispersion relation for each wave mode. Using these simplifications, the only non-zero components of the electric and magnetic fields are
\[E_{x} =\partial_{t}A_{x}-\partial_{x}A_{t}=-\omega^{(x)}C_{x}\sin{(-k^{ (x)}z+\omega^{(x)}t)} \tag{6}\] \[B_{y} =\partial_{z}A_{x}-\partial_{x}A_{z}=-\omega^{(x)}C_{x}\sin{(-k^{ (x)}z+\omega^{(x)}t)}, \tag{7}\]
where we have used the real part of the gauge fields. We use \(k^{(x)}\equiv 2\pi/\lambda\sim 0.10m\), and the amplitude \(C_{x}=0.001m_{\rm Pl}\) as our initial conditions. Numerically solving the full non-linear equations to evolve our system implies that all classical backreactions are included in our simulations. Periodic boundary conditions were used throughout the simulations. We show in Appendix B that the constraint equations are satisfied during evolution, and we test their convergence.
_Results:_ Slices through our simulation box for the \(M_{s}=0.60m_{\rm Pl}^{2}/m\), and \(g_{a\gamma}=16m_{\rm Pl}^{-1}\) case illustrating the evolution of the axion and EM energy density are shown in Fig. 2. An incoming seed EM wave (not visible on the scale shown) causes the axion star to emit a strong burst of EM radiation at \(t\sim 100m^{-1}\). At a later time, \(t\sim 200m^{-1}\), the EM radiation becomes less intense, and the axion star begins to settle into a stable lower mass, less compact, and larger configuration. As the star dilutes and increases in radius \(R_{s}\), its characteristic frequency drifts out of the instability band, shutting down the parametric resonance process.
We next show in Fig. 3 (top panel) the time evolution of the total energy in axions and EM radiation, which can be obtained by integrating their respective energy densities (see Appendix A). In order to describe the decay process, we fit a tanh function for the amplification of the energy of the EM field \(E_{\gamma}\):
Figure 1: The critical coupling (in black) along our simulation data (in red), with triangular simulation points representing a decaying star through scalar, electromagnetic and gravitational radiation (see diagram in top right corner). Our simulations cover \(M_{s}=0.60,0.53,0.46,0.41m_{\rm Pl}^{2}/m\), and we have plotted the mass ranges scaled to \(m=10^{-11}\) eV which correspond to compact axion stars of \(\mathcal{O}(M_{\odot})\).
\[E_{\gamma}(t)=A\tanh\left(\frac{t-t_{0}}{\tau}\right)+B, \tag{8}\]
where the constants \(A\) and \(B\) depend on the simulation box size.
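The fit of Eq. (8) can be performed with any nonlinear least-squares routine; the Python sketch below (our addition) uses synthetic placeholder data in place of the simulation time series.

```python
# Fit the hyperbolic-tangent amplification profile of Eq. (8).
import numpy as np
from scipy.optimize import curve_fit

def e_gamma(t, A, B, t0, tau):
    return A * np.tanh((t - t0) / tau) + B   # Eq. (8)

t = np.linspace(0.0, 400.0, 400)             # time in units of 1/m
E = e_gamma(t, 1.0, 1.0, 150.0, 20.0) + 0.01 * np.random.randn(t.size)

(A, B, t0, tau), _ = curve_fit(e_gamma, t, E, p0=[1.0, 1.0, 120.0, 30.0])
print(f"t0 = {t0:.1f}/m, tau = {tau:.1f}/m")  # cf. Fig. 3 (bottom panel)
```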
The amplification in the EM energy sets two timescales: the parameter \(t_{0}\) determines how fast the amplification process begins after the start of the simulation, and \(\tau\) can be seen as a measure of the lifetime of the star in the decay process. The dependence of \(t_{0}\) and \(\tau\) on the axion-photon coupling \(g_{a\gamma}\) is demonstrated in Fig. 3 (bottom panel); they follow a decaying power law, which has an asymptote at a critical value of \(g_{a\gamma}\approx 12.1m_{\rm Pl}^{-1}\) based on our simulation data. We find \(\tau\propto g_{a\gamma}^{-0.87}\). We compare this result to the parametric resonance instability timescale for a homogeneous cosmological axion field, which is proportional to \(g_{a\gamma}^{-1}\), although the instability is blocked by the expansion of the Universe [32]. A gravitational potential well, provided by the axion star itself, is required to allow for the instability to develop [14]. Our results indicate that the decay scaling for relativistic highly inhomogeneous compact axion stars is comparable but different from the homogeneous case.
We verified the dependence of \(t_{0}\) and \(\tau\) on the EM seed amplitude. We find that \(\tau\) is independent of the amplitude for fixed photon-axion coupling \(g_{a\gamma}\). We note that \(A\) from Eq. (8) is also constant, indicating that the EM field is amplified by the same amount of energy and hence the amplification has the same shape independent of the amplitude of the EM seed \(E_{0}\). Furthermore, we confirm that \(t_{0}\) has a logarithmic dependence on the initial amplitude of the EM seed, \(t_{0}\sim\ln E_{0}\)2, as the growth of the EM field is an exponential process \(E(t)\sim E_{0}\exp((t-t_{0})/\tau)\).
Footnote 2: As an example, \(t_{0}\approx-31.3m^{-1}\ln(1860(m_{\rm Pl}m)^{-1}E_{0})+174m^{-1}\) for \(M_{s}=0.60m_{\rm Pl}^{2}/m\) and \(g_{a\gamma}=15m_{\rm Pl}^{-1}\). The constants depend on the value of \(g_{a\gamma}\) chosen.
Weak field calculations of non-relativistic (and hence non-compact) axion star decay suggest a critical value for the axion-photon coupling \(g_{a\gamma}^{\rm crit}=7.66m_{\rm Pl}/\sqrt{8\pi}mM_{s}\), or \(g_{a\gamma}^{\rm crit}\propto M_{s}^{-1}\)[13]. While the power law is different, the proximity of the coefficient suggests that decay dynamics are broadly similar in both the weak and strong gravity limits3. A possible explanation is that the decay is driven by _matter_ couplings, with gravity playing only a second-order role4. We also compared the critical mass given in Ref. [13] to the remnant axion star mass from our simulations and find that they are of the same order of magnitude, with our remnants having slightly lower mass.
Footnote 3: This is in agreement with the complex scalar boson star instability found in [15].
Footnote 4: See for example [33; 34].
Figure 2: Energy densities of the electromagnetic and scalar fields as a slice through the centre of the star for the \(M_{s}=0.60m_{\rm Pl}^{2}/m\), \(g_{a\gamma}=16m_{\rm Pl}^{-1}\) case. The EM field (bottom panel), initially polarized in the \(x\) direction, is initially propagating from the right to the left. As parametric resonance kicks in, the axion star undergoes rapid dilution and mass loss, with a corresponding burst in the EM energy which is roughly isotropic (see \(t=175/m\)). The process stops when the axion star dilutes and expands to a size away from the characteristic frequency of the EM spectra. A movie of our simulations for coupling \(g_{a\gamma}=16m_{\rm Pl}^{-1}\) can be found in this link.
In Figure 4, we show the power spectra \(P(k)\) of the \(x\), \(y\) and \(z\) components of the electric field at \(t=350m^{-1}\) for \(g_{a\gamma}=16m_{\rm Pl}^{-1}\), after the decay process has happened, along with the original seed (black dashed line), demonstrating the frequency of the EM radiation emitted by the axion star as it decays. We obtained the power spectrum by performing a fast Fourier transform on the spatial electric field, and then integrating the square of the transform in \(k\)-space. We note two salient points. First, around the incoming frequency \(k=0.1m\), the \(E_{x}\) power broadens, with correspondingly smaller power in \(E_{z}\) and \(E_{y}\), but no large amplification. Secondly, the primary power of the emission lies around \(k\sim 0.5m\), equipartitioned between the \(x\), \(y\) and \(z\) components. This scale corresponds to the diameter \(2R_{s}\) of the axion star: \(2kR_{s}\sim 2\pi\) gives \(k\sim 0.6m\), capturing the emission from parametric resonance. This equipartition of energies arises from the facts that (i) the total momentum of the system must remain small, as the initial EM waves carry negligible momenta, and (ii) the source current for the radiation is the initially spherically symmetric axion star, \(J_{\mu}\propto\partial_{\mu}\phi\). We note that while we saw evidence of birefringence between the \(x\) and \(y\) components before decay, post-decay this effect is subdominant. We leave to future work a complete study of the emission power and possibly circularly polarised emission due to \(CP\) violation.
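A sketch of this power-spectrum estimate is given below (our addition); the shell-binning convention and the grid parameters are our own choices, not those of the GRChombo analysis pipeline.

```python
# Shell-integrated power spectrum P(k) of one electric-field component.
import numpy as np

def power_spectrum(field, dx):
    n = field.shape[0]                         # assume a cubic n^3 grid
    pk3 = np.abs(np.fft.fftn(field)) ** 2      # |FFT|^2 on the 3D grid
    k1d = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)
    kx, ky, kz = np.meshgrid(k1d, k1d, k1d, indexing="ij")
    kmag = np.sqrt(kx**2 + ky**2 + kz**2)
    edges = np.linspace(0.0, kmag.max(), n // 2)
    shell = np.digitize(kmag.ravel(), edges)   # bin modes into |k| shells
    P = np.bincount(shell, weights=pk3.ravel(), minlength=edges.size + 1)
    return edges, P[1:edges.size + 1]          # P(k) integrated per shell

# Usage with a hypothetical 128^3 snapshot of E_x on a grid of spacing dx:
# k, P = power_spectrum(E_x, dx=0.5)
```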
_Conclusions:_ We have demonstrated in fully non-linear simulations that axion stars are unstable above a critical line \(g_{a\gamma}^{\rm crit}\propto M_{s}^{-1.35}\) in the plane of mass and coupling constant, exploding into EM radiation. Crucially, we have shown that the onset of this decay process depends only logarithmically on the amplitude of the seed plane wave, suggesting that ambient radiation alone would be sufficient to destabilise compact axion stars on Hubble timescales. As an example, assuming the resonant band \(\delta\omega\sim m\) and \(g_{a\gamma}=15m_{\rm Pl}^{-1}\), destabilisation of \(\mathcal{O}(M_{\odot})\) compact axion stars stimulated by Cosmic Microwave Background photons will take approximately \(t_{0}\sim 0.05\) seconds, smaller than the Hubble expansion time by many orders of magnitude.
Populations of axion stars can form cosmologically due to mergers of dark matter halos or from collapse of cosmological perturbations [4], with a computable rate [7]. Our results suggest that if \(g_{a\gamma}>g_{a\gamma}^{\rm crit}\), compact axion stars quickly decay into EM radiation which can heat the intergalactic medium, with potentially observable consequences [35]. Conversely, this will impact attempts to search for these objects via gravitational waves from their mergers [36; 37]. In future work, we intend to study the multi-messenger gravitational, scalar, and EM radiation of axion star decays.
_Acknowledgements:_ We would like to thank Thomas Helfer for his early contribution to the project. We
Figure 3: _Top:_ The total energy in the scalar (solid line) and electromagnetic fields (dashed line) for several values of the coupling \(g_{a\gamma}\) for the \(M_{s}=0.60m_{\rm Pl}^{2}/m\) case. Total energy conservation (including gravitational energy) is checked by ensuring the Hamiltonian constraint is not violated (see appendix A). _Bottom:_ The values of the parameters \(t_{0}\) and \(\tau\) from the hyperbolic tangent fit Eq. (8) to the EM energy profile as a function of the axion-photon coupling \(g_{a\gamma}\). The fitted power-law function is shown in the legend, giving a critical value for the coupling of \(\sim 12.1m_{\rm Pl}^{-1}\). The simulation errors found through higher resolution runs were of order \(0.1\%\) for \(t_{0}\) and \(1\%\) for \(\tau\). The fitted values of \(g^{\rm crit}\) and \(p\) were \(12.1m_{\rm Pl}^{-1}\) and \(0.83\) for \(t_{0}\), and \(12.2m_{\rm Pl}^{-1}\) and \(0.87\) for \(\tau\).
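The two-parameter fits quoted in the caption are obtained by non-linear least squares. The sketch below illustrates the procedure with `scipy.optimize.curve_fit` on synthetic data, assuming that Eq. (8) has the standard hyperbolic-tangent form \(E(t)=\tfrac{1}{2}E_{0}[1+\tanh((t-t_{0})/\tau)]\); the actual form of Eq. (8) and the simulation time series are not reproduced here.

```python
import numpy as np
from scipy.optimize import curve_fit

def tanh_profile(t, E0, t0, tau):
    """Assumed form of Eq. (8): EM energy saturating at E0 around onset t0."""
    return 0.5 * E0 * (1.0 + np.tanh((t - t0) / tau))

def power_law(g, A, g_crit, p):
    """t0(g) = A (g - g_crit)^(-p), the fit shown in the Fig. 3 legend."""
    return A * (g - g_crit) ** (-p)

# Synthetic stand-in for one simulation's EM-energy time series
t = np.linspace(0.0, 400.0, 400)
rng = np.random.default_rng(1)
E = tanh_profile(t, 1.0, 175.0, 20.0) + 0.01 * rng.normal(size=t.size)
(E0, t0, tau), _ = curve_fit(tanh_profile, t, E, p0=(1.0, 150.0, 10.0))
print(f"t0 = {t0:.1f}/m, tau = {tau:.1f}/m")
```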
Figure 4: Power spectra of the electric field for \(g_{a\gamma}=16m_{\rm Pl}^{-1}\) at time \(t=350m^{-1}\), which is after the decay process has ended, for the \(M_{s}=0.60m_{\rm Pl}^{2}/m\) case. The excitation of the wave mode corresponding to the inherent frequency scale of the axion star around \(k\sim 0.5m\) is clearly visible, where the energy is equipartitioned. The black dashed line demonstrates the power spectrum of the initial EM seed, polarised in the \(x\) direction.
also thank members of the GRChombo Collaboration for technical support and help. LMCJ is supported by a studentship funded by the Science and Technologies Facilities Council (UK). DJEM is supported by an Ernest Rutherford Fellowship from the Science and Technologies Facilities Council (UK). JCA acknowledges funding from the Beecroft Trust and The Queen's College via an extraordinary Junior Research Fellowship (eJRF). This work used the DiRAC@Durham Cosma facility managed by the Institute for Computational Cosmology on behalf of the STFC DiRAC HPC Facility (www.dirac.ac.uk), under DiRAC grant ACTP238. The equipment was funded by BEIS capital funding via STFC capital grants ST/P002293/1, ST/R002371/1 and ST/S002502/1, Durham University and STFC operations grant ST/R000832/1.
|
2303.07755 | Flattening conduction and valence bands for interlayer excitons in a
moiré MoS$_2$/WSe$_2$ heterobilayer | We explore the flatness of conduction and valence bands of interlayer
excitons in MoS$_2$/WSe$_2$ van der Waals heterobilayers, tuned by interlayer
twist angle, pressure, and external electric field. We employ an efficient
continuum model where the moir\'e pattern from lattice mismatch and/or twisting
is represented by an equivalent mesoscopic periodic potential. We demonstrate
that the mismatch moir\'e potential is too weak to produce significant
flattening. Moreover, we draw attention to the fact that the quasi-particle
effective masses around the $\Gamma$-point and the band flattening are
\textit{reduced} with twisting. As an alternative approach, we show (i) that
reducing the interlayer distance by uniform vertical pressure can significantly
increase the effective mass of the moir\'e hole, and (ii) that the moir\'e
depth and its band flattening effects are strongly enhanced by accessible
electric gating fields perpendicular to the heterobilayer, with resulting
electron and hole effective masses increased by more than an order of magnitude
leading to record-flat bands. These findings impose boundaries on the commonly
generalized benefits of moir\'e twistronics, while also revealing alternate
feasible routes to achieve truly flat electron and hole bands to carry us to
strongly correlated excitonic phenomena on demand. | Sara Conti, Andrey Chaves, Tribhuwan Pandey, Lucian Covaci, François M. Peeters, David Neilson, Milorad V. Milošević | 2023-03-14T10:00:10Z | http://arxiv.org/abs/2303.07755v1 | Flattening conduction and valence bands for interlayer excitons in a moire MoS\({}_{2}\)/WSe\({}_{2}\) heterobilayer
###### Abstract
We explore the flatness of conduction and valence bands of interlayer excitons in MoS\({}_{2}\)/WSe\({}_{2}\) van der Waals heterobilayers, tuned by interlayer twist angle, pressure, and external electric field. We employ an efficient continuum model where the moire pattern from lattice mismatch and/or twisting is represented by an equivalent mesoscopic periodic potential. We demonstrate that the mismatch moire potential is too weak to produce significant flattening. Moreover, we draw attention to the fact that the quasi-particle effective masses around the \(\Gamma\)-point and the band flattening are _reduced_ with twisting. As an alternative approach, we show (i) that reducing the interlayer distance by uniform vertical pressure can significantly increase the effective mass of the moire hole, and (ii) that the moire depth and its band flattening effects are strongly enhanced by accessible electric gating fields perpendicular to the heterobilayer, with resulting electron and hole effective masses increased by more than an order of magnitude leading to record-flat bands. These findings impose boundaries on the commonly generalized benefits of moire twistronics, while also revealing alternate feasible routes to achieve truly flat electron and hole bands to carry us to strongly correlated excitonic phenomena on demand.
## 1 Introduction
The exciting report of superconductivity associated with flat bands in moire twisted bilayer graphene [1] opened the floodgates to inducing and tuning the strongly correlated states associated with flat bands by interlayer twisting [2, 3]. Flat bands and the associated correlated states should have a strong effect on the properties of interlayer excitons in bilayer systems, and in this paper we investigate these effects. The interlayer excitons are bound electron-hole pairs with the electrons and holes confined in spatially separated layers. Thanks to the separation, interlayer excitons have much longer lifetimes than intralayer excitons which are formed from electrons and holes in the same layer [4]. In this paper, we investigate the effect of band flattening on interlayer excitons.
Twisted bilayer graphene is unsuitable for the purpose because of the need for an insulating barrier to separate the layers and confine the electrons and the holes to their respective layers. Such a barrier greatly reduces the effects of interlayer coupling, making the layers behave as if they were isolated. The isotropic band structure of the low energy states of graphene is then not sensitive to an interlayer twist angle. Recent progress in the fabrication of high-quality van der Waals stacked bilayers has extended twisting to other 2D materials, most notably the transition-metal dichalcogenides (TMDs) [5, 6]. A major advantage of TMDs here is that the two TMD layers can be different, forming a heterobilayer, and they can be chosen with a type-II interface. A type-II interface confines the electrons and the holes in separate layers without need for an insulating spacer layer. This allows formation of long-lived interlayer excitons in a moire environment [7].
Twisting in bilayers creates a periodic moire pattern characterized by regions with different local stacking registries. The moire pattern yields potential landscapes for electrons and holes along the plane [8]. As a result, the properties of the excitons can be broadly tuned by flattening the electron and hole bands and changing their moire potentials, leading to intriguing possibilities for novel technology devices [9, 10, 11, 12].
In homobilayers, moire patterns occur only in twisted samples but in TMD heterobilayers, because of the incommensurability of the two different TMD lattices, moire patterns are present even without twisting. Rotational alignment has been found to influence the interlayer coupling in homobilayers [13], but for TMD heterobilayers, the role and possible tunability of interlayer coupling remain open questions.
In this paper, we investigate the electronic structure of electrons and holes in a moire potential landscape for small interlayer twists, primarily in a MoS\({}_{2}\)/WSe\({}_{2}\) van der Waals heterobilayer (vdWHB). We use an efficient continuum model parameterized from first principles. Curiously, we find that the moire bands for electrons and holes _do not flatten with increasing twist angle_. On the contrary, the effective masses decrease as the twist angle grows.
As an alternative flattening strategy, we investigate the effect of vertical pressure on the bands, directly linked with the reduction of the interlayer distance. We find that pressure does enhance the strength of the moire potential and results in larger effective masses. As a further strategy, we also investigate the effect of an external electric field from vertical gating. We find that strong electric fields significantly enhance the moire potential depths for the electrons and holes. This leads to increases in the effective masses up to two orders of magnitude, and the associated bands
become ultra-flat.
## 2 Results and discussion
We have used Density Functional Theory (DFT) to identify an optimal configuration of van der Waals heterobilayers. We searched for a heterobilayer with type-II band alignment and with the electrons and holes fully confined in their respective layers. We considered combinations of MoSe\({}_{2}\), MoS\({}_{2}\), WSe\({}_{2}\), WS\({}_{2}\), from which we selected type-II MoS\({}_{2}\)/WSe\({}_{2}\) vdWHB with the electrons (holes) fully confined to the MoS\({}_{2}\) (WSe\({}_{2}\)) layer.[8] The band offsets are \(\approx 220\) (640) meV.[14, 15, 16]
Figure 1a shows the crystal structure of a twisted MoS\({}_{2}\)/WSe\({}_{2}\) vdWHB for a small angle \(\theta\) centered on zero. The moire unit cell is characterized by periodically alternating local stacking registries, identified as \(R_{h}^{h}\), \(R_{h}^{M}\), and \(R_{h}^{X}\). Each stacking register exhibits different values for the energy of the conduction (valence) band minimum (maximum).
A further advantage of MoS\({}_{2}\)/WSe\({}_{2}\) is that the other material combinations yield valence band maxima (VBM) which for some stacking registries lie at the \(\Gamma\)-point. In contrast, for MoS\({}_{2}\)/WSe\({}_{2}\) the gap is always at the K-point regardless of the stacking registry, so only states at the K-point are involved.[17]
Since the band edges at the K-point are mostly composed of d-orbitals of the metal atoms buried in between chalcogen atoms, the coupling between these band edge states in the two layers should not significantly change the interlayer band offsets. This allows us to consider the type-II interlayer exciton as the lowest energy case. Indeed, this is verified by DFT calculations of untwisted heterobilayers with different stacking orders, where band edges at the K-point are observed to be essentially superpositions of the monolayer bands.[18]
Figures 1b and c show the effect of the moire potential on the spatial evolution of the band edges for electrons and holes along the diagonal of the moire unit cell in aligned MoS\({}_{2}\)/WSe\({}_{2}\) vdWHB. The band edges are obtained using a continuous model for the moire potential (eqn (1) in Sec. 4.1) with DFT parameters \(V_{e,1}=-17.3\) meV, \(V_{h,1}=-107.1\) meV, \(V_{e,2}=-13.8\) meV, and \(V_{h,2}=-90.2\) meV. The energies are taken with respect to the minimum energy value corresponding to \(R_{h}^{h}\) where for this particular vdWHB MoS\({}_{2}\)/WSe\({}_{2}\) both the electrons and holes would be trapped. The effect of the moire pattern is much weaker for the electron bands than for the hole bands. This is because the VBM charge density extends to both layers giving rise to non-zero interlayer mixing, while the conduction band minimum (CBM) charge density is completely confined to the MoS\({}_{2}\) layer (Fig. 14 in Sec. 4.2). Thus, the VBM is modulated by the interlayer moire potential, while the CBM responds to the intralayer moire potential. Because the interlayer moire potential is much deeper than the intralayer potential, the changes in the VBM charge density are more pronounced than those in the CBM.
In the following subsections, the electrons (holes) are always confined in their MoS\({}_{2}\) (WSe\({}_{2}\)) layer, which allows us to take the same effective masses \(m_{e(h)}=0.43\) (\(0.35\))\(m_{0}\) throughout.[15, 16]
### Flattening the moire band by twisting
Figures 2 and 3 show the band structure of moire electrons and holes respectively, in the MoS\({}_{2}\)/WSe\({}_{2}\) vdWHB along the \(k_{x}\) direction. The interlayer twist angles are \(\theta=0.5^{\circ}\), \(2^{\circ}\), and \(3.5^{\circ}\). The \(\Gamma\)-point of the moire Brillouin zone corresponds to the K-point of the Brillouin zone of the MoS\({}_{2}\) layer where the conduction band state is located. As the twist angle is increased, the period of the moire pattern decreases, leading to larger and larger moire Brillouin zones.
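The underlying moiré period follows the standard small-angle estimate \(\lambda\approx a/\sqrt{\delta^{2}+\theta^{2}}\), where \(\delta\) is the lattice mismatch. The short sketch below evaluates this generic formula with the bulk lattice constants quoted in Sec. 4.2; it reproduces the \(\lambda\sim 8.6\) nm quoted later for \(\theta=0.5^{\circ}\) to within a few percent, but it is an estimate, not an output of our continuum model.

```python
import numpy as np

a_MoS2, a_WSe2 = 0.3160, 0.3282         # bulk lattice constants (nm), Sec. 4.2
delta = abs(a_WSe2 - a_MoS2) / a_MoS2   # lattice mismatch, ~3.9%

def moire_period(theta_deg):
    """Small-angle moire period lam = a / sqrt(delta^2 + theta^2)."""
    theta = np.radians(theta_deg)
    return a_WSe2 / np.sqrt(delta**2 + theta**2)

for th in (0.0, 0.5, 2.0, 3.5):
    print(f"theta = {th:3.1f} deg -> lam = {moire_period(th):.1f} nm")
```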
Figure 4 compares the lowest energy conduction (a) and valence band (b) for different twist angles. It is important to note that for small twist angles, bands may appear flat, but this can be misleading. The apparent flatness is a result of a large anticrossing at the moire band edge combined with the short Brillouin zone length. Properties of compact moire interlayer excitons involve states with wave vectors extending beyond the small first Brillouin zone, and therefore require states from the higher Brillouin zones within the unfolded zone scheme. Figs. 2 and 3 show that the reconstructed unfolded bands follow closely the bands of electrons and
Figure 1: (a) Sketch of a MoS\({}_{2}\)/WSe\({}_{2}\) vdWHB with a small interlayer twist angle \(\theta\) around zero, along with its moire unit cell (black solid lines). Regions of local stacking registries \(R_{h}^{h}\), \(R_{h}^{X}\), and \(R_{h}^{M}\) are identified. (b) Conduction and valence band edges at the \(K\)-point from the moire potential of an infinite aligned MoS\({}_{2}\)/WSe\({}_{2}\) vdWHB. Vertical dotted lines emphasize different stacking registers. (c) Moire potential for the electrons obtained from the continuum model.
holes in isolated layers, with their effective masses set at the K-point (red and blue dashed lines respectively). This means that these moire quasi-particles behave similarly to the original electrons and holes in isolated layers, with no significant increase in the quasi-particle effective masses.
Figure 5 shows the properties that are conventionally used to characterize the flattening of bands, namely the ground state bandwidths, the bandgap between the lowest energy bands, and the effective masses at the \(\Gamma\)-point. With increasing twist angle, the electron and hole bandwidths and bandgaps increase (Fig. 5a and b). Fig. 5c shows that the electron effective mass is insensitive to twisting, but that the hole effective mass significantly decreases with twisting. This difference in behavior reflects the shallowness of the electron moire potential relative to the hole moire potential (Fig. 1b).
Band flatness is closely related to wave function localization. In terms of a tight-binding model of the moire lattice, for a strongly localized wave function, the hopping between neighboring potential minima in the lattice will be small, resulting in flat bands. The moire potential landscape of untwisted MoS\({}_{2}\)/WSe\({}_{2}\) vdWHB is too shallow to produce strongly flattened bands. Twisting this heterobilayer only weakens the localization further, because it decreases the distance between the moire potential minima (larger moire Brillouin zone), leading to stronger hopping energies between adjacent unit cells. Thus, as shown in Fig. 5, twisting works against flattening of the bands in this heterobilayer, in contrast to twisted homobilayers.
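This competition can be made quantitative with a one-dimensional nearest-neighbour tight-binding toy model on the moiré lattice, \(E(k)=-2t\cos(k\lambda)\), for which the bandwidth is \(4t\) and the band-bottom effective mass is \(m_{\rm eff}=\hbar^{2}/(2t\lambda^{2})\): a shorter period \(\lambda\) and a larger hopping \(t\) both reduce \(m_{\rm eff}\). The hopping values in the sketch below are illustrative assumptions, not fitted parameters.

```python
import numpy as np

HBAR2_2M0 = 0.0380998                 # hbar^2 / (2 m_0) in eV nm^2

def flatness(t, lam):
    """Bandwidth and band-bottom mass of E(k) = -2 t cos(k lam)."""
    bandwidth = 4.0 * t               # eV
    m_eff = HBAR2_2M0 / (t * lam**2)  # in units of m_0
    return bandwidth, m_eff

# Illustrative hoppings: a larger twist shortens lam and increases t
for t, lam in [(1e-3, 8.6), (5e-3, 4.3)]:
    bw, m = flatness(t, lam)
    print(f"t = {t*1e3:.0f} meV, lam = {lam} nm -> "
          f"bandwidth = {bw*1e3:.1f} meV, m_eff = {m:.2f} m0")
```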
To work towards band flattening with large effective masses and energy gaps, one must identify ways to deepen the confining moire potential. In what follows, we explore the deepening of moire potentials through application of vertical pressure and electric fields.
### Flattening the moire band by vertical pressure
As an alternative strategy, we now look at the effect on the bands of pressure applied perpendicularly and uniformly across the layers to reduce the interlayer distance \(d\).[19] Since the moire potential is strongly affected by different interlayer couplings in the \(R_{h}^{h}\), \(R_{h}^{M}\), and \(R_{h}^{X}\) stacking regions, any decrease \(\delta d\) in the interlayer distance should efficiently control the depth of the moire potential while leaving the moire pattern unchanged.
The changes in the potential landscape due to the applied pressure are reflected in changes in the values of the parameters \(V_{i,1}\) and \(V_{i,2}\) in eqn (1). These determine the pressure-induced modifications of the band structures of the moire electrons and holes in a MoS\({}_{2}\)/WSe\({}_{2}\) vdWHB with the interlayer distance uniformly reduced in all regions by \(\delta d\), up to \(\delta d\sim 0.5\) Å. When \(\delta d\sim 0.4\) - 0.5 Å, for a few stackings the bandgap becomes indirect. The value \(\delta d=0.5\) Å corresponds to an applied pressure \(\sim 9\) GPa, well within the typical experimental range.[20] Table 1 lists the values of the parameters for different values of \(\delta d\).
Figure 6 shows the conduction band (a) and valence band (b)
Fig. 4: Lowest energy moiré bands for (a) electrons and (b) holes in a MoS\({}_{2}\)/WSe\({}_{2}\) vdWHB for different twist angles \(\theta\).
Fig. 3: Moire bands for holes in MoS\({}_{2}\)/WSe\({}_{2}\) vdWHB with interlayer twist angle (a) \(\theta=0.5^{\circ}\), (b) 2.0\({}^{\circ}\), and (c) 3.5\({}^{\circ}\). Blue dashed lines show for comparison a parabolic dispersion with hole effective mass \(m_{h}=0.35m_{0}\) in an isolated WSe\({}_{2}\) layer.
Fig. 2: Moire bands for electrons in MoS\({}_{2}\)/WSe\({}_{2}\) vdWHB with interlayer twist angle (a) \(\theta=0.5^{\circ}\), (b) 2.0\({}^{\circ}\), and (c) 3.5\({}^{\circ}\). Red dashed lines show for comparison a parabolic dispersion with electron effective mass \(m_{e}=0.43m_{0}\) in an isolated MoS\({}_{2}\) layer.
edges along the diagonal of the moire unit cell of MoS\({}_{2}\)/WSe\({}_{2}\) vdWHB as a function of \(\delta d\). The applied pressure enhances the interlayer coupling and this increases the depth of the moire potential for \(R_{h}^{X}\) stacking. At \(\delta d\sim 0.2\) Å, the lowest energy state switches from the \(R_{h}^{h}\) to the \(R_{h}^{X}\) stacking.
We see from Fig. 7 that, because of the switch, reducing the interlayer spacing has little effect on \(m_{\text{eff}}\) until \(\delta d\sim 0.2\) Å. When \(\delta d>0.2\) Å, the hole moire potential at \(R_{h}^{X}\) deepens to such an extent that, by \(\delta d=0.5\) Å, the moire hole effective mass has increased by nearly an order of magnitude compared with its value in an isolated WSe\({}_{2}\) monolayer.
This is because the interlayer mixing of the VBM charge density significantly increases with decreasing interlayer distance. However, Fig. 6a shows for electrons that the variations in the moire potential are small, and we see that the moire electron effective mass does not change significantly with \(\delta d\). Indeed, the CBM charge density is not expected to be sensitive to \(\delta d\), since the conduction band states are strongly localized on the Mo atoms, buried in between the S atoms, which prevents the electron states from undergoing significant modifications in their charge density due to the presence of another layer (Fig. 14a in Sec. 4.2). Valence band states, on the other hand, have charge density that also extends onto the Se atoms, making them more susceptible to different interlayer distances and even to the stacking order (see Fig. 14b in Sec. 4.2).
We conclude that with applied uniform vertical pressure one obtains significant band flattening and an increase in the effective mass, but only for the valence band states.
### Flattening the moire band by vertical gating
As a final strategy, we investigate the effect on the bands of an external perpendicular electric field from gating. The electric dipole of interlayer excitons couples with the electric field \(E_{z}\), and this will affect the moire potential. From the Stark effect, the electric field \(E_{z}\) shifts the CBM and VBM energy levels in MoS\({}_{2}\) and WSe\({}_{2}\). An electric field applied from the MoS\({}_{2}\) layer to the WSe\({}_{2}\) layer should decrease the band gap. A field in the opposite direction would increase the gap, and for relatively small fields the electrons and/or holes can change layers.[10] The depth of the moire potential landscape and the moire effective masses can be readily increased and tuned by a field pointing from the MoS\({}_{2}\) to the WSe\({}_{2}\), since the \(R_{h}^{h}\) stacking region containing the minimum of the moire potential has the largest interlayer distance,[8, 9] and hence the largest dipole moment.
Figure 8 shows the conduction and valence band edges of the
\begin{table}
\begin{tabular}{l c c c c} \hline \(\delta d\) (Å) & \(V_{e,1}\) & \(V_{e,2}\) & \(V_{h,1}\) & \(V_{h,2}\) \\ \hline
0.0 & -17.3 & -13.8 & -107.1 & -90.2 \\
0.2 & -19.1 & -19.6 & -138.1 & -137.2 \\
0.4 & -22.0 & -24.4 & -168.9 & -199.2 \\
0.5 & -24.5 & -28.5 & -182.8 & -237.4 \\ \hline
\end{tabular}
\end{table}
Table 1: Parameters (in meV) of the moiré potentials for electrons and holes [eqn (1) in Sec. 4.1] under applied vertical pressure.
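For compressions between the DFT points listed in Table 1, the parameters can simply be interpolated; a minimal sketch for \(V_{h,2}\), the most pressure-sensitive parameter, is given below.

```python
import numpy as np

# V_{h,2}(delta d) from Table 1 (meV versus Angstrom)
dd  = np.array([0.0, 0.2, 0.4, 0.5])
Vh2 = np.array([-90.2, -137.2, -199.2, -237.4])

def vh2_at(delta_d):
    """Linear interpolation of V_{h,2} between the DFT points."""
    return np.interp(delta_d, dd, Vh2)

print(f"V_h2(0.3 A) ~ {vh2_at(0.3):.1f} meV")
```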
Figure 5: (a) Ground-state bandwidths \(\delta E\), (b) bandgap between the two lowest energy bands \(E_{g}\), and (c) effective masses \(m_{\text{eff}}\) for electrons (red solid line) and holes (blue dashed line) in a MoS\({}_{2}\)/WSe\({}_{2}\) vdWHB as a function of the twist angle \(\theta\).
Figure 6: (a) Conduction and (b) valence band edges at the K-point from the moire potential of a MoS\({}_{2}\)/WSe\({}_{2}\) vdWHB with \(\theta=0.5^{\circ}\), for decreases in the interlayer spacing \(\delta d=0\) (black solid line), 0.2 Å (orange dashed), 0.4 Å (purple dotted), and 0.5 Å (green dash-dotted line).
MoS\({}_{2}\)/WSe\({}_{2}\) vdWHB for different applied uniform electric fields. We find that the moire potential indeed becomes progressively deeper with increasing \(E_{z}\), while the \(R_{h}^{h}\) always remains the stacking with minimum energy. The deepening of the moire potential is much larger than for the case with pressure (Fig. 6).
Figure 9 shows that with applied electric field, the lowest energy conduction band (a) and valence band (b) become remarkably flat. Fig. 10a shows bandwidths \(\delta E\) as small as \(\sim 10^{-2}\) meV, the narrowest reported to date. Fig. 10b shows the corresponding band gaps \(E_{g}\), which increase by a factor of two, and Fig. 10c shows that there is a dramatic increase in both the electron and hole effective masses for vertical electric fields \(E_{z}>100\) mV/Å. This dramatic increase in the masses is severely curtailed when the twist angle is increased (inset of Fig. 10c), demonstrating once again the detrimental effects of twisting in this heterobilayer.
This is an exciting result, pointing to the possibility of achieving ultra-flat bands with associated strongly correlated interlayer exciton states in MoS\({}_{2}\)/WSe\({}_{2}\) vdWHB by means of an applied electric field, without need for twisting.
### Effect of reconstruction of moire lattice on band flattening
Recent experiments have reported lattice reconstruction upon relaxation of MoSe\({}_{2}\)/WSe\({}_{2}\) heterobilayers with near-zero twist angles \(\theta<1^{\circ}\).[21, 22] In this combination of materials, the formation energies of the \(R_{h}^{M}\) and \(R_{h}^{X}\) stacking registries are almost degenerate and significantly smaller than the \(R_{h}^{h}\) formation energy. Consequently, the regions of the original moire pattern with these two stacking configurations expand and undergo reconstruction. In our MoS\({}_{2}\)/WSe\({}_{2}\) heterobilayers, the formation energies exhibit the same features as MoSe\({}_{2}\)/WSe\({}_{2}\) (Fig. 15a in Sec. 4.2). Therefore, we expect the lattice reconstruction at small twist angles would be similar.
As a consequence, the pattern of the electron and hole effective potentials would no longer be the original moire pattern, depicted in Fig. 1c for electrons, but rather it would acquire a different form, as shown in Fig. 11a. The result would be a super-lattice composed of triangular domains with \(R_{h}^{M}\) or \(R_{h}^{X}\) registries, with a strongly-reduced \(R_{h}^{h}\) region connecting the corners of the triangles. We assume there is no change in lattice parameters upon reconstruction, so the potentials in these regions will be nearly the same as shown in Fig. 1b.
Figure 12 shows, for electrons (red solid lines) and holes (blue dashed lines), the bandwidths \(\delta E\) (a), the corresponding band gaps \(E_{g}\) between the ground and first excited states (b), and the increase in both the electron and hole effective masses (c) as a function of the perpendicular electric field. At zero electric field, the narrow and shallow potential in the \(R_{h}^{h}\) region is not able to confine electron and hole wave functions. They are strongly lo
Fig. 8: (a) Conduction and (b) valence band edges at the K-point from the moiré potential of MoS\({}_{2}\)/WSe\({}_{2}\) vdWHB with \(\theta=0.5^{\circ}\), for the unbiased case (black solid line), and under applied perpendicular electric fields \(E_{z}\) of 50 mV/Å (orange dashed), 200 mV/Å (purple dotted), and 400 mV/Å (green dash-dotted line).
Fig. 10: (a) Ground-state bandwidths \(\delta E\), (b) bandgap between the two lowest energy bands \(E_{g}\), and (c) effective masses \(m_{\rm eff}\) for electrons (red solid line) and holes (blue dashed line) in a MoS\({}_{2}\)/WSe\({}_{2}\) vdWHB with \(\theta=0.5^{\circ}\) as a function of the applied perpendicular electric field \(E_{z}\). For clarity, \(m_{\rm eff}\) is scaled to the values for zero field \(m_{\rm eff}(E_{z}=0)\). Inset in (c): \(m_{\rm eff}/m_{\rm eff}(0)\) for \(\theta=3.5^{\circ}\).
Fig. 9: Lowest energy moiré bands for (a) electrons and (b) holes in a MoS\({}_{2}\)/WSe\({}_{2}\) vdWHB with \(\theta=0.5^{\circ}\), for unbiased case (black solid line), and under applied perpendicular electric fields \(E_{z}\) of 50 mV/Å (orange dashed), 200 mV/Å (purple dotted), and 400 mV/Å (green dash-dotted line).
calized but within the triangular \(R_{h}^{X}\) patches (Fig. 11b). They exhibit small overlap with adjacent triangles, and consequently, the hopping parameters are small compared with the unreconstructed moire potential. This leads to higher electron and hole effective masses in the reconstructed case.
In contrast to the unreconstructed case, here a weak electric field slightly decreases the effective masses and the band flattening. This is because a weak electric field deepens the confinement potential in the \(R_{h}^{h}\) region, linking wave functions in the triangular \(R_{h}^{X}\) regions and thus increasing the overlap between wave functions in neighboring triangles. This leads to higher hopping parameters and hence less band flattening. Beyond these weak fields, however, the deeper \(R_{h}^{h}\) potential fully confines the electron and hole wave functions. The transition from wave functions confined in the triangular \(R_{h}^{X}\) regions at zero field to wave functions strongly confined in the \(R_{h}^{h}\) region is verified in the contour plots of the electron probability densities for zero and 200 mV/Å electric fields (Figs. 11b-c). This confinement again leads to a decrease in the hopping parameters and a significant flattening of the bands, just as in the unreconstructed case.
Figure 12c shows that for electrons (holes), the electric field still strongly increases the effective masses by a factor of 60, decreases the bandwidths \(\delta E\) by 99.65% (99.86%), and increases the gaps \(E_{g}\) by a factor of 10. We see that the dramatic increase of the effective masses with electric field and the associated sub-meV bandwidths are very similar to those for the unreconstructed moire lattices reported in Sec. 2.3. Therefore the conclusions in the main manuscript about band flattening using applied electric fields remain equally valid in the presence of reconstruction.
### Properties of interlayer excitons in moire potential under band-flattening
Using the single-particle energy bands calculated in the previous sections, we now investigate the properties of the indirect excitons. Flattening the electron and hole bands directly tunes the properties of the interlayer excitons and their strongly-correlated phases. The exciton Rydberg energy \(Ry^{*}=e^{2}/(4\pi\epsilon\epsilon_{0}\,2a_{B}^{*})\), Bohr radius \(a_{B}^{*}=4\pi\epsilon\epsilon_{0}\hbar^{2}/(\mu e^{2})\), and binding energy \(E_{b}\) all depend on the effective reduced mass \(\mu\), which is increased when the bands flatten. These quantities are a measure of the strength of the electron-hole attraction, indicating the degree of difficulty of exciton dissociation [23] and the dissociation temperature \(k_{B}T\propto E_{b}\)[24]. The binding energy also provides insight into the properties of the strongly correlated excitonic states, including the exciton superfluid. The Berezinskii-Kosterlitz-Thouless transition temperature for the superfluid is proportional to \(E_{b}\)[25, 26].
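For orientation, the sketch below evaluates these hydrogenic estimates with the zero-field effective masses and \(\epsilon\approx 4\) (the values used in Secs. 2 and 4.3); since the Keldysh screening of Sec. 4.3 is neglected here, the numbers are order-of-magnitude guides only.

```python
RY_H, A0 = 13.606, 0.0529177     # hydrogen Rydberg (eV) and Bohr radius (nm)

m_e, m_h, eps = 0.43, 0.35, 4.0  # effective masses (m_0), dielectric constant
mu = m_e * m_h / (m_e + m_h)     # reduced mass, ~0.19 m_0

a_star = (eps / mu) * A0         # effective Bohr radius (nm)
ry_star = (mu / eps**2) * RY_H   # effective Rydberg (eV)
print(f"mu = {mu:.3f} m0, a_B* ~ {a_star:.2f} nm, Ry* ~ {ry_star*1e3:.0f} meV")
```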
We use a variational approach solving eqn (3) in Sec. 4.3 with the exciton wave-function modeled by an exponential. Fig. 13 shows the resulting exciton binding energy \(E_{b}\) and Bohr radius \(a_{B}^{*}\) as a function of (a) twist angle, (b) interlayer distance tuned by vertical pressure, and (c) applied electric field.
Figure 13a shows that with twisting, the moire electron and hole masses have very limited tunability, so the exciton binding energy and Bohr radius do not greatly vary. For all the twist an
Fig. 11: (a) Moire potential for the electrons in the reconstructed lattice of a MoS\({}_{2}\)/WSe\({}_{2}\)vdWHB with sufficiently small twist angle. The potential in the vicinity of the confined \(R_{h}^{h}\) region is modeled by a Gaussian function of width \(d=15\) Å[21]. Spatial distribution of the electron probability density for the reconstructed lattice, (b) in the absence of applied electric field, and (c) with perpendicular electric field 200 mV/Å.
Fig. 12: After small angle lattice reconstruction in a MoS\({}_{2}\)/WSe\({}_{2}\)vdWHB, (a) ground-state bandwidths \(\delta E\), (b) bandgap between the two lowest energy bands \(E_{g}\), and (c) effective masses \(m_{\rm eff}\) for electrons (red solid line) and holes (blue dashed line) as a function of the perpendicular electric field \(E_{c}\). \(m_{\rm eff}(0)\) is for zero electric field.
gles, \(a_{\text{B}}^{*}\) remains an order of magnitude smaller than the moire period \(\lambda\) (top \(x\)-axis). This indicates that twisting has little effect on the localization of the excitons. In contrast, Fig. 13b shows that if instead the interlayer distance is reduced by pressure, the binding energy increases by as much as 25% and the Bohr radius decreases by 50%.
The effect on the binding energy and Bohr radius is even more dramatic with application of a perpendicular electric field (Fig. 13c). The binding energy is enhanced by a factor of two for an increase in \(E_{z}\) from 0 to 400 mV/Å, while the Bohr radius drops by a factor of 100. Here the moire period \(\lambda\sim 8.6\) nm for \(\theta=0.5^{\circ}\), and the ratio of the effective Bohr radius to \(\lambda\) decreases with increasing electric field. By \(E_{z}=400\) mV/Å, where the bands are ultra-flat, \(a_{\text{B}}^{*}\) has decreased to two orders of magnitude less than \(\lambda\). These results indicate extremely localized exciton states and a striking evolution towards strong correlations.
## 3 Conclusions
In summary, we have determined the moire bands for electrons and holes in MoS\({}_{2}\)/WSe\({}_{2}\) vdWHB with small interlayer twist angles near \(\theta\sim 0^{\circ}\), and examined how twisting, pressure, and electric fields tune the flatness of the electron and hole bands, with the aim of bringing interlayer excitons into the strongly correlated regime.
We have developed a continuum model parameterized from first principles, one that respects the crucial changes in the moire potential from one stacking register to another. The method is readily adaptable for other material combinations of van der Waals heterobilayers, as well as for small twist angles near \(\theta\sim 60^{\circ}\). The method is robust and reliable and does not suffer from the computational limitations of first-principles calculations of moire heterostructures, imposed by the large number of atoms in the unit cell.
We first demonstrate for this heterobilayer that in the vicinity of the \(\Gamma\)-point, the moire potentials in the presence of a small interlayer twist _do not_ flatten the bands. This is opposite to the trend known for twisted bilayer graphene and expected for other homobilayers. Although when the twist angle increases from zero the bands are seemingly flatter within the shorter Brillouin zone, we find that in fact, the overall effective mass of the quasi-particles remains practically unchanged.
As an alternative idea for deepening the moire potential, we considered reducing the interlayer spacing by application of uniform vertical pressure. This active manipulation was able to increase the effective mass of the hole by nearly an order of magnitude when the interlayer distance was decreased by 0.5 Å while, interestingly, the electron mass is left nearly unaltered. Such different behavior of electrons and holes has nontrivial consequences for the resulting properties of the interlayer excitons, of crucial importance to any further exotic strongly-correlated excitonic phases in van der Waals heterobilayers.
We find that as an even more effective strategy, applying an electric gating field perpendicular to the heterobilayer can dramatically deepen the moire potential, thereby leading to strong increases of the moire electron and hole effective masses. Concretely, in MoS\({}_{2}\)/WSe\({}_{2}\) vdWHB with a 400 mV/Å electric field, the effective masses of the moire electron and hole are both increased by a factor of \(\sim\) 40. This makes the moire bands ultra-flat, with bandwidths as narrow as 0.05 meV, the narrowest reported to date.
With such different yet complementary effects of these three manipulations, with their selective influences on electron and hole bands, and with the consequent tunability of the exciton binding energies and Bohr radii, we expect our results will help guide future works that seek strongly correlated electronic and excitonic phases in flat bands of 2D heterostructures.
## 4 Theoretical Methods
### Continuous Model
We build our continuum model for the moire potential using parameters extracted from the DFT calculations. The \(C_{3}\) symmetry of the potential landscape is imposed by assuming a moire potential expressed as
\[V_{i}(\vec{r})=V_{i,1}|f_{1}(\vec{r})|^{2}+V_{i,2}|f_{2}(\vec{r})|^{2}, \tag{1}\]
with the index \(i=e(h)\) for electron (hole), \(f_{1}(\vec{r})=\left(e^{-i\vec{K}\cdot\vec{r}}+e^{-i\hat{C}_{3}\vec{K}\cdot\vec{r}}+e^{-i\hat{C}_{3}^{2}\vec{K}\cdot\vec{r}}\right)/3\), and \(f_{2}(\vec{r})=\left[e^{-i\vec{K}\cdot\vec{r}}+e^{-i(\hat{C}_{3}\vec{K}\cdot\vec{r}+\theta_{s})}+e^{-i(\hat{C}_{3}^{2}\vec{K}\cdot\vec{r}+2\theta_{s})}\right]/3\). The \(\hat{C}_{3}\) operator represents a 120\({}^{\circ}\) rotation, and \(\theta_{s}=4\pi/3\).
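A minimal numerical sketch of eqn (1) is given below, using the aligned-bilayer electron parameters of Sec. 2; the magnitude and orientation of \(\vec{K}\), taken here as one moiré reciprocal vector of length \(4\pi/(\sqrt{3}\lambda)\), are assumptions made for illustration.

```python
import numpy as np

def rot(angle):
    c, s = np.cos(angle), np.sin(angle)
    return np.array([[c, -s], [s, c]])

lam = 8.6                                    # moire period in nm (theta = 0.5 deg)
K = (4 * np.pi / (np.sqrt(3) * lam)) * np.array([1.0, 0.0])  # assumed K vector
C3 = rot(2 * np.pi / 3)                      # 120-degree rotation
theta_s = 4 * np.pi / 3

def f(r, phases):
    """Plane-wave sum over the three C3-related wave vectors."""
    ks = (K, C3 @ K, C3 @ C3 @ K)
    return sum(np.exp(-1j * (k @ r + p)) for k, p in zip(ks, phases)) / 3

def V_moire(r, V1, V2):
    """Continuum moire potential of eqn (1), in meV."""
    return V1 * abs(f(r, (0.0, 0.0, 0.0)))**2 \
         + V2 * abs(f(r, (0.0, theta_s, 2 * theta_s)))**2

# Electron potential for the aligned bilayer, sampled over two moire periods
V_e1, V_e2 = -17.3, -13.8
xs = np.linspace(0.0, 2 * lam, 64)
grid = np.array([[V_moire(np.array([x, y]), V_e1, V_e2) for x in xs] for y in xs])
print(grid.min(), grid.max())
```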
To determine the band structures, we represent the Hamiltonian in a finite difference scheme, incorporating the 2D moire
Figure 13: Moiré interlayer exciton binding energy \(E_{b}\) and Bohr radius \(a_{\text{B}}^{*}\) in a MoS\({}_{2}\)/WSe\({}_{2}\) vdWHB (a) as a function of twist angle \(\theta\) (and the corresponding period of the moire potential \(\lambda\)), (b) as a function of reduction of the interlayer distance \(\delta d\) upon pressure, and (c) as a function of a perpendicular electric field \(E_{z}\).
potential landscape of eqn (1) and assuming periodic boundary conditions with a Bloch wave approach. The time-independent Schrödinger equation is numerically solved separately for electrons and holes within the effective mass approximation.
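A schematic of this construction for a single Bloch wave vector is sketched below; for brevity it uses a square moiré cell (the actual cell is rhombic) and a five-point Laplacian, so it illustrates the scheme rather than reproducing our production solver.

```python
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import eigsh

HBAR2_2M0 = 0.0380998                      # hbar^2 / (2 m_0) in eV nm^2

def bloch_bands(V, L, m_eff, kvec, nbands=4):
    """Lowest bands of H = -(hbar^2/2m) Lap + V on a periodic N x N grid.

    V : (N, N) potential (eV) over one square moire cell of size L (nm);
    kvec : Bloch wave vector (1/nm); wrap-around hops carry exp(i k . L).
    """
    N = V.shape[0]
    h = L / N
    t = HBAR2_2M0 / (m_eff * h**2)         # finite-difference hopping (eV)
    H = lil_matrix((N * N, N * N), dtype=complex)
    for i in range(N):
        for j in range(N):
            a = i * N + j
            H[a, a] = 4.0 * t + V[i, j]
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                b = ((i + di) % N) * N + ((j + dj) % N)
                # Bloch phase only on hops that wrap around the boundary
                wrap = L * np.array([di * (not 0 <= i + di < N),
                                     dj * (not 0 <= j + dj < N)])
                H[a, b] += -t * np.exp(1j * kvec @ wrap)
    vals = eigsh(H.tocsc(), k=nbands, which="SA", return_eigenvectors=False)
    return np.sort(vals.real)
```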
### Density Functional Theory calculations
The Density Functional Theory (DFT) calculations from first principles were performed using the Projector Augmented Wave (PAW) method [27] implemented in the Vienna _Ab-initio_ Simulation Package (VASP) [28, 29]. The generalized gradient approximation (GGA) from Perdew-Burke-Ernzerhof (PBE) is used for the exchange-correlation functional [30].
We present details of DFT for untwisted MoS\({}_{2}\)/WSe\({}_{2}\) vdWHB. The van der Waals interactions between the MoS\({}_{2}\) and WSe\({}_{2}\) monolayers were included by the dispersion-corrected density functional (DFT-D3) method [31]. A vacuum spacing of 18 Å is employed along the out-of-plane direction to model an isolated heterostructure. To limit the induced strain, an average of the experimentally measured lattice constants of bulk MoS\({}_{2}\) (3.160 Å) [32] and bulk WSe\({}_{2}\) (3.282 Å) [33] is used as the in-plane lattice constant of the MoS\({}_{2}\)/WSe\({}_{2}\) heterobilayer (3.221 Å) for all the stackings considered.
Structural relaxation was performed using the conjugate-gradient method until the out-of-plane force components converged to within 0.005 eV/Å. During the structural relaxation across all the stackings, the in-plane atomic positions were kept fixed. Further refinement of the model to incorporate lattice relaxation in a heterostructure is left as an outlook, but should not affect the main conclusions of the work. An energy cutoff of 450 eV, an energy convergence threshold of 10\({}^{-7}\) eV, and a \(\Gamma\)-centered k-mesh of 15\(\times\)15\(\times\)1 were used for structural relaxation and self-consistent calculations.
Six valence electrons were used in the PAW pseudo-potential, \(d^{5}s^{1}\) for W/Mo and \(s^{2}p^{4}\) for S/Se. Spin-orbit coupling was taken into account in all calculations except in structure relaxation. For each stacking, the MoS\({}_{2}\) and WSe\({}_{2}\) band edges are calculated at the \(K\)-point.
Figure 14 shows the wave-functions at the \(K\)-point of the conduction band minimum (CBM) and valence band maximum (VBM). The major contribution to the CBM charge density (panel (a)) comes from the \(d\)-orbital of the metal atom and is fully confined to the MoS\({}_{2}\) layer. The VBM charge density (panel (b)) has contributions both from the metal and chalcogenide atoms, and it extends out from the WSe\({}_{2}\) layer to give rise to a small but finite interlayer mixing (see inset).
Figure 15 shows for the different stackings, the variation in the total energy, the interlayer distances \(d\), and the bandgap. We take \(d\) as the distance between Mo and W atoms (see inset). Fig. 15a shows that among all the sliding geometries explored, \(R_{h}^{M}\) stacking is energetically the most favorable. Fig. 15b shows that the interlayer distance has a sensitivity to the stacking order of up to 0.6 Å. Uniform vertical pressure was modeled by reducing the interlayer distance by the same fixed amount across all the stackings without relaxation. Fig. 15c shows the evolution of the bandgap for the different stackings when the interlayer distance is reduced by \(\delta d\).
### Exciton binding energy in moire potential
A Wannier-Mott exciton in the presence of the moire potential is described within an effective mass approximation by a Hamiltonian that includes (i) separate kinetic energy terms, (ii) moire potentials for each electron and hole (\(V_{e}\) and \(V_{h}\), respectively),
Figure 14: Wave-function of (a) conduction band minimum and (b) valence band maximum states at the \(K\)-point, averaged along the perpendicular direction \(z\), for untwisted MoS\({}_{2}\)/WSe\({}_{2}\) vdWHB. The isosurface plots shown in red are for a value 0.004 e/Å\({}^{3}\) at \(R_{h}^{M}\) stacking. Inset in panel (b) is a zoom-in showing the interlayer mixing.
Figure 15: (a) Variation of the total energy, (b) interlayer distance \(d\), and (c) bandgap for the different stackings of aligned MoS\({}_{2}\)/WSe\({}_{2}\) vdWHB. In panel (a) the energy is relative to the energetically most favorable stacking (\(R_{h}^{M}\)). In panel (c) \(\delta d\) is the reduction in the interlayer distance.
and (iii) an electron-hole interaction term \(V_{eh}\):
\[H_{\sigma}=-\frac{\hbar^{2}}{2M}\nabla_{R}^{2}-\frac{\hbar^{2}}{2\mu}\nabla_{r}^{2}+V_{e}\left(R+\frac{m_{h}}{M}r\right)+V_{h}\left(R-\frac{m_{e}}{M}r\right)+V_{eh}(r), \tag{2}\]
where \(r\) is the in-plane relative coordinate and \(R\) the center-of-mass coordinate of the electron-hole pair, and \(M=m_{e}+m_{h}\) the total mass.
Unlike the system considered in Ref. Yu _et al._[9], in which the energy scale of the electron-hole binding is much larger than that of the moire potential landscape, here the moire potentials reach hundreds of meV and are of the same order of magnitude as the electron-hole interaction. With a deep moire potential landscape, we assume that the moire electron and hole band structures in isolated layers can be used to rewrite eqn (2) independent of \(R\). The binding energy \(E_{b}\) of interlayer excitons in moire potential is then obtained from
\[\left[-\frac{\hbar^{2}}{2\mu^{\prime}}\nabla_{r}^{2}+V_{eh}(r)\right]\psi(r)=E _{b}\psi(r)\,. \tag{3}\]
The modified reduced mass \(\mu^{\prime}\) is determined from the moire effective electron and hole masses. \(V_{eh}(r)\) is the Keldysh potential for electrons and holes in different layers,[34]
\[V_{eh}(r)=\frac{e^{2}}{4\pi\epsilon\epsilon_{0}}\frac{\pi}{2r_{0}}\left[H_{0}\left(\frac{\sqrt{d^{2}+r^{2}}}{r_{0}}\right)-Y_{0}\left(\frac{\sqrt{d^{2}+r^{2}}}{r_{0}}\right)\right], \tag{4}\]
where \(H_{0}\) and \(Y_{0}\) are respectively the Struve function and the Bessel function of the second kind, and \(r_{0}=2\pi\chi/\epsilon\) is the screening length. \(\chi\approx 7\) nm is the 2D polarizability of the medium, and \(\epsilon\approx 4\) the dielectric constant for a MoS\({}_{2}\)/WSe\({}_{2}\) vdWHB embedded in hexagonal boron nitride.[35] The exciton occupies the energetically most favorable registry stacking. In any case, the dependence of \(E_{b}\) on the local registry due to varying interlayer distances at the different registries is expected to be weak, because the associated variation in \(d\) is a tiny fraction of the exciton Bohr radius (Fig. 15b).
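A minimal numerical sketch of this variational solution is given below. The interlayer distance \(d\approx 0.65\) nm and the sign convention, with the attraction entering eqn (3) as \(-V_{eh}\) while eqn (4) gives its magnitude, are assumptions made for illustration.

```python
import numpy as np
from scipy.special import struve, y0
from scipy.integrate import quad
from scipy.optimize import minimize_scalar

HBAR2_2M0 = 0.0380998              # hbar^2 / (2 m_0) in eV nm^2
COULOMB = 1.43996                  # e^2 / (4 pi eps_0) in eV nm

mu = 0.43 * 0.35 / (0.43 + 0.35)   # zero-field reduced mass (m_0)
eps, chi = 4.0, 7.0                # dielectric constant, 2D polarizability (nm)
r0 = 2 * np.pi * chi / eps         # screening length (nm)
d = 0.65                           # interlayer distance (nm); assumed value

def keldysh(r):
    """Magnitude of the interlayer Keldysh potential, eqn (4), in eV."""
    x = np.sqrt(d**2 + r**2) / r0
    return (COULOMB / eps) * (np.pi / (2 * r0)) * (struve(0, x) - y0(x))

def energy(a):
    """Variational energy for a 2D trial wave function psi ~ exp(-r/a)."""
    kinetic = (HBAR2_2M0 / mu) / a**2
    # <V> with |psi|^2 = (2 / (pi a^2)) exp(-2 r / a), attractive interaction
    integrand = lambda r: (4.0 * r / a**2) * np.exp(-2.0 * r / a) * keldysh(r)
    potential, _ = quad(integrand, 0.0, 50.0 * a)
    return kinetic - potential

res = minimize_scalar(energy, bounds=(0.1, 20.0), method="bounded")
print(f"a_B* ~ {res.x:.2f} nm, E_b ~ {-res.fun * 1e3:.0f} meV")
```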
## Author Contributions
Conceptualization: S.C., A.C., D.N., and M.V.M; Data curation: S.C.; Formal Analysis: S.C., A.C., and T.P.; Funding acquisition: S.C., A.C.; Investigation: S.C., A.C, L.C., D.N., and M.V.M.; Methodology: S.C., A.C., L.C., F.M.P., and M.V.M.; Project administration: M.V.M.; Resources: S.C., A.C., and T.P.; Software: S.C., A.C., and T. P.; Supervision: F.M.P., D.N., and M.V.M.; Validation: F.M.P., D.N., and M.V.M.; Visualization: S.C., and T.P.; Writing - original draft: S.C., A.C., and T.P.; Writing - review & editing: S.C., A.C., T.P., L.C., F.M.P., D.N., and M.V.M.
## Conflicts of interest
The authors declare no conflict of interest.
## Acknowledgements
Discussions with Andrea Perali are gratefully acknowledged. S.C. and T.P. are supported by postdoctoral fellowships of the Research Foundation - Flanders (FWO-VI). A.C. and F.P. are supported by the Brazilian Council for Research (CNPq), through the PRONEX/FUNCAP, Universal, and PQ programs. The computational resources and services used in this work were provided by the VSC (Flemish Supercomputer Center), funded by FWO and the Flemish Government department EWI.
|
2310.10311 | Dynamics-augmented cluster-based network model | In this study, we propose a novel data-driven reduced-order model for complex
dynamics, including nonlinear, multi-attractor, multi-frequency, and multiscale
behaviours. The starting point is a fully automatable cluster-based network
model (CNM) (Li et al. J. Fluid Mech. vol.906, 2021, A21) which kinematically
coarse-grains the state with clusters and dynamically predicts the transitions
in a network model. In the proposed dynamics-augmented CNM (dCNM), the
prediction error is reduced with trajectory-based clustering using the same
number of centroids. The dCNM is first exemplified for the Lorenz system and
then implemented for the three-dimensional sphere wake featuring periodic,
quasi-periodic and chaotic flow regimes. For both plants, the dCNM
significantly outperforms the CNM in resolving the multi-frequency and
multiscale dynamics. This increased prediction accuracy is obtained by
stratification of the state space aligned with the direction of the
trajectories. Thus, the dCNM has numerous potential applications to a large
spectrum of shear flows, even for complex dynamics. | Chang Hou, Nan Deng, Bernd R. Noack | 2023-10-16T11:43:32Z | http://arxiv.org/abs/2310.10311v2 | # Dynamics-augmented cluster-based network model
###### Abstract
In this study, we propose a novel data-driven reduced-order model for complex dynamics, including nonlinear, multi-attractor, multi-frequency, and multiscale behaviours. The starting point is a fully automatable cluster-based network model (CNM) (Li _et al._ J. Fluid Mech. vol. 906, 2021, A21) which kinematically coarse-grains the state with clusters and dynamically predicts the transitions in a network model. In the proposed dynamics-augmented CNM (dCNM), the prediction error is reduced with trajectory-based clustering using the same number of centroids. The dCNM is first exemplified for the Lorenz system and then implemented for the three-dimensional sphere wake featuring quasi-periodic and chaotic flow regimes. For both plants, the dCNM significantly outperforms the CNM in resolving the multi-frequency and multiscale dynamics. This increased prediction accuracy is obtained by stratification of the state space aligned with the direction of the trajectories. Thus, the dCNM has numerous potential applications to a large spectrum of shear flows, even for complex dynamics.
Key words: Wakes/Jets: Wakes, Nonlinear dynamic systems: Low-Dimensional Models
## 1 Introduction
Advancements in computational capabilities and flow measurement technologies are producing a rapidly increasing amount of high-fidelity flow data. The coherent spatio-temporal structures of the flow data enable data-driven reduced-order models (ROMs). In terms of kinematics, ROMs furnish simplified descriptions that enrich our understanding of fundamental flow processes (Holmes _et al._, 1996), facilitated by increasingly powerful
machine learning methods (Brunton et al., 2020). ROMs may also allow the prediction of future states with acceptable accuracy. In the context of flow control, ROMs are serving as efficient tools for designing and testing control strategies, replacing costly high-fidelity simulations with an acceptable trade-off in accuracy (Bergmann and Cordier, 2008).
First-principle-based ROMs have historically been the foundation of the ROM community, as only a limited number of large data sets were available at that time. The Galerkin framework is one of the most classic methods in this category. By projecting the Navier-Stokes equations onto a low-dimensional subspace, the Galerkin model elegantly describes the original dynamics, exhibiting self-amplified amplitude-limited dynamics (Stuart, 1971; Busse, 1991; Noack and Eckelmann, 1994). Landau (1944) and Stuart (1958) pioneered the mean-field model, a major advance in first-principle-based ROMs that provides insight into flow instabilities and bifurcation theory. For instance, in the case of a supercritical Hopf bifurcation, mean-field models have been applied to the vortex shedding behind a cylinder (Strykowski and Sreenivasan, 1990; Schumm et al., 1994; Noack et al., 2003) and high Reynolds number turbulent wake flow (Bourgeois et al., 2013). For more complex flows undergoing successive bifurcations, including both Pitchfork and Hopf bifurcations, weakly nonlinear mean-field analysis is applied to the wake of axisymmetric bodies (Fabre et al., 2008), the wake of a disk (Meliga et al., 2009) and the fluidic pinball (Deng et al., 2020). Furthermore, in the field of resolvent analysis, the mean-field theory also contributes by decomposing the system into time-resolved linear dynamics and a feedback term involving quadratic nonlinearity (McKeon et al., 2004; Gomez et al., 2016; Rigas et al., 2017).
In contrast to a first principle ROM, a data-driven version is based on a low-dimensional representation of flow snapshots. Proper orthogonal decomposition (POD) is a commonly used example. POD begins with the eigenvalue or singular value decomposition of the correlation matrix, yielding a low-dimensional subspace comprising leading orthogonal eigenvectors. This subspace provides an "optimal" Galerkin expansion with minimal average residual in the energy norm. Since Aubry et al. (1988) introduced the groundbreaking POD-Galerkin model for unforced turbulent boundary layers, numerous POD models have emerged for various configurations. Examples include POD models for channel flow Podvin and Lumley (1998); Podvin (2009), the wake of a two-dimensional square cylinder Bergmann et al. (2009), laminar and turbulent vortex shedding Iollo et al. (2000), and flow past a circular cylinder with dynamic subgrid-scale model and variational multiscale model Iollo et al. (2000). There are also various variations of the POD model, e.g. integrating the actuation terms into the projection system for control design (Bergmann and Cordier, 2008; Luchtenburg et al., 2009) and balanced POD (Rowley, 2005), which is derived from a POD approximation to the product of controllability and observability Gramians to obtain an approximately balanced truncation (Moore, 1981). Increasingly powerful machine learning methods can make data-driven ROMs more automated. Examples include the sparse identification of nonlinear dynamics (SINDy) aim at human interpretable models (Brunton et al., 2016), building ROMs with artificial neural networks (ANNs) (San and Maulik, 2018; San et al., 2019; Zhu et al., 2019; Kou and Zhang, 2021), turbulence modelling and flow estimation with multi-input multi-output by deep neural networks (DNNs) (Kutz, 2017; Li et al., 2022), and manifold learning methods (Farzannik et al., 2023).
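As a concrete reference for the POD construction described above, a minimal SVD-based decomposition of a snapshot ensemble is sketched below (a generic illustration: the snapshot matrix, mean subtraction and energy measure are handled in the simplest possible way).

```python
import numpy as np

def pod(snapshots, r):
    """POD of a snapshot ensemble via the singular value decomposition.

    snapshots : (M, n) array with one flattened flow field per row.
    Returns the mean field, r spatial modes, temporal coefficients
    and the relative modal energies.
    """
    mean = snapshots.mean(axis=0)
    X = (snapshots - mean).T               # columns are fluctuation fields
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    modes = U[:, :r]                       # orthonormal spatial modes
    coeffs = (s[:r, None] * Vt[:r]).T      # temporal coefficients a_i(t)
    energy = s[:r]**2 / np.sum(s**2)       # relative modal energy
    return mean, modes, coeffs, energy
```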
In this work, we focus on automated data-driven modelling. The starting point is cluster-based ROMs (CROMs), pioneered by Burkardt et al. (2006). Clustering is an unsupervised classification of patterns into groups commonly used in data science (Jain and Dubes, 1988; Jain et al., 1999; Jain, 2010); it is popular in data mining, document retrieval, image segmentation, and feature detection (Kim et al., 2022). The foundation of the CROM lies in the cluster-based Markov model (CMM) proposed by Kaiser et al. (2014), which combines a cluster analysis of an ensemble of snapshots and a Markov model for transitions between different flow
states reduced by clustering. The CMM has provided a valuable physical understanding of the mixing layer, Ahmed body wakes (Kaiser _et al._, 2014), combustion-related mixing (Cao _et al._, 2014), and supersonic mixing layer (Li & Tan, 2020). Nair _et al._ (2019) applied the cluster-based model to feedback control for drag reduction and first introduced a directed network for dynamical modelling. Building on this concept, Fernex _et al._ (2021) and Li _et al._ (2021) further proposed the cluster-based network model (CNM) with improved long-timescale resolution. Instead of the "stroboscopic" view of the CMM, the CNM focuses on non-trivial transitions. The dynamics are restricted to a simple network model between the cluster centroids, like a deterministic-stochastic flight schedule which allows only a few possible flights with corresponding probabilities and flight times consistent with the data set. Networks of complex dynamic systems have gained great interest, forming an increasingly important interdisciplinary field known as network science (Watts & Strogatz, 1998; Albert & Barabasi, 2002; Borner _et al._, 2007; Barabasi, 2013). Network-based approaches are often used in fluid flows (Nair & Taira, 2015; Hadjighasem _et al._, 2016; Taira _et al._, 2016; Yeh _et al._, 2021; Taira & Nair, 2022), in conjunction with clustering analysis (Bollt, 2001; Schlueter-Kuck & Dabiri, 2017; Murayama _et al._, 2018; Krueger _et al._, 2019). The critical structures that modify the dynamical system can be identified by the intra- and inter-cluster interactions using community detection (Gopalakrishnan Meena _et al._, 2018; Gopalakrishnan Meena & Taira, 2021).
CROMs are fully automated, robust, and physically interpretable, but the model accuracy is strongly tied to the clustering process. In the abovementioned CROMs, the state space is discretised purely geometrically, leading to a lack of dynamic coverage. For example, the CNM struggles to capture multiscale behaviours such as the oscillations near attractors and the amplitude variations between trajectories. To address this issue, an effective solution is to employ dynamics-augmented clustering to determine the centroid distribution. Inspired by hierarchical clustering (Deng _et al._, 2022) and network sparsification (Nair & Taira, 2015), we propose a dynamics-augmented cluster-based network model (dCNM) with an improved resolution of complex dynamics. Here, the time-resolved dynamics are reflected by the evolution of trajectory segments after the state space is clustered. These segments are automatically identified from cluster transitions and are represented by centroids obtained through segment averaging. A second-stage clustering further refines the centroids, eliminating network redundancy and also deepening the comprehension of the underlying physical mechanisms. The proposed dCNM can systematically identify the complex dynamics of multi-attractor, multi-frequency, and multiscale dynamical systems. Figure 1 provides a comparative illustration of the CNM and dCNM in terms of kinematics and dynamics, exemplified by an inward spiralling trajectory in a two-dimensional state space.
The dCNM is initially applied to the Lorenz system (Lorenz, 1963) as an illustrative example and subsequently demonstrated on the quasi-periodic and chaotic wake of a three-dimensional sphere. The Lorenz system, governed by only three ordinary differential equations, is notable for the "butterfly effect", showcasing chaotic dynamics on two attractors. The three-dimensional sphere wake is a well-investigated benchmark configuration, serving as a prototype flow of bluff body wakes commonly encountered in many modern applications, for instance, the design of drones, air taxis, or micro-robots. Despite the simple geometry, the sphere wake can experience a series of bifurcations with the increase of Reynolds number. Along the route to turbulence, the flow system exhibits steady, periodic, quasi-periodic, and chaotic flow regimes. The quasi-periodic and chaotic scenario, characterised by multi-frequency and multiscale behaviours, provides a challenging testing ground for reduced-order modelling.
This manuscript is organised as follows: In § 2, the clustering algorithm and the different
perspectives on the dCNM strategy are described. In § 3, the dCNM is exemplified on the Lorenz system, and in § 4 it is demonstrated on the sphere wake of quasi-periodic and chaotic flow regimes. In § 5, the main findings and improvements are summarised, and future directions are suggested.
## 2 Dynamics-augmented cluster-based network model
In this section, we detail the process of the dynamics-augmented cluster-based network model. In § 2.1, the \(k\)-means++ clustering algorithm and its demonstration on the state space are introduced. The second-stage clustering on the trajectory segments is further discussed in § 2.2. In § 2.3, the transition characteristics are described, and in § 2.4, different criteria are introduced to evaluate the performance of the proposed model. The variables used in this section are listed in table 1.
Figure 1: Principle sketches: The CNM versus the dCNM, exemplified by an inward spiralling trajectory in a two-dimensional state space with the same number of centroids. In the comparison of kinematics, the thick solid lines represent the cluster divisions, and the thin solid lines represent the sub-cluster divisions. The centroids are represented by coloured dots, with their colours representing their cluster affiliation. In the comparison of dynamics, the red solid lines with an arrowhead represent the transitions between centroids. The CNM centroids are derived from the average of snapshots in each cluster, with good geometric coverage, while the dCNM centroids are assigned with the incorporation of dynamics and therefore exhibit a weighted distribution. As a consequence, the CNM only reconstructs coarse large-scale dynamics, which occasionally leads to misestimations. Conversely, the dCNM can accurately reconstruct the cycle-to-cycle variations and meanwhile guarantee a precise sequence. With multi-stage clustering, the dCNM is less sensitive to the number of clusters; thus, a smaller number of clusters can be acceptable.
### Clustering the state space
The dynamics-augmented clustering procedure is divided into two steps. Initially, the state space is clustered, yielding coarse-grained state transition dynamics with trajectory segments composed of time-continuous snapshots within each cluster. Subsequently, we cluster these trajectory segments, utilising centroids derived from the average of each segment. This step optimises the centroid distribution and eliminates the redundancy of the trajectory segments.
| Variable | Description |
| :--- | :--- |
| \(\mathbf{u}^{m}\) | Time-resolved snapshots |
| \(M\) | Number of snapshots |
| **Clustering the state space** | |
| \(K\) | Number of clusters |
| \(C_{k}\), \(C_{i}\) | Clusters obtained by the state space clustering |
| \(\chi_{k}^{m}\) | Characteristic function of the state space clustering |
| \(M_{k}\) | Number of snapshots in cluster \(C_{k}\) |
| \(\chi_{ik}^{m}\) | Characteristic function of the transition from \(C_{k}\) to \(C_{i}\) |
| \(\mathbf{c}_{k}\) | Centroids of clusters |
| \(n_{ik}\) | Number of transitions from \(C_{k}\) to \(C_{i}\) |
| \(n_{k}\) | Total number of transitions from \(C_{k}\) |
| \(n_{\text{traj}}\) | Total number of transitions of the data set |
| \(Q_{ik}\) | Cluster transition probability from \(C_{k}\) to \(C_{i}\) |
| \(T_{ik}\) | Cluster transition time from \(C_{k}\) to \(C_{i}\) |
| \(\mathbf{Q}\) | Cluster transition probability matrix |
| \(\mathbf{T}\) | Cluster transition time matrix |
| \(\mathbf{R}^{u}\) | Cluster deviation on snapshots |
| **Clustering the trajectory segments** | |
| \(\mathcal{T}_{(kl)}\) | The \(l\)-th trajectory segment in \(C_{k}\) |
| \(\chi_{(kl)}^{m}\) | Characteristic function of the second-stage clustering |
| \(M_{(kl)}\) | Number of snapshots in trajectory segment \(\mathcal{T}_{(kl)}\) |
| \(\mathbf{c}_{(kl)}\), \(\mathbf{c}_{(ij)}\) | Centroids of trajectory segments |
| \(\mathbf{L}=[L_{1},\dots,L_{K}]^{\intercal}\) | Numbers of sub-clusters for the second-stage clustering |
| \(n_{(ij)(kl)}\) | Number of transitions from \(\mathbf{c}_{(kl)}\) only to \(\mathbf{c}_{(ij)}\) |
| \(Q_{(ij)(kl)}\) | Centroid transition probability from \(\mathbf{c}_{(kl)}\) to \(\mathbf{c}_{(ij)}\) |
| \(\mathbf{Q}_{ik}\) | Centroid transition probability matrix |
| \(\mathcal{Q}_{k}\) | Centroid transition probability tensor |
| \(\mathbf{R}^{\mathcal{T}}\) | Cluster deviation on trajectory segments |

Table 1: Table of variables. Subscripts \(k\) and \(i\) are related to the level of clusters from the state space clustering, and subscripts \(l\) and \(j\) are related to the level of trajectory segments.

The first-stage clustering discretises the high-dimensional state space by grouping the snapshots. We first define a Hilbert space \(\mathcal{L}^{2}(\Omega)\) of square-integrable vector fields, in which the inner product of two fields in the domain \(\Omega\) is given by:
\[(\boldsymbol{u},\boldsymbol{v})_{\Omega}=\int_{\Omega}\mathrm{d}\boldsymbol{x}\,\boldsymbol{u}\cdot\boldsymbol{v}, \tag{2.1}\]

where \(\boldsymbol{u}\) and \(\boldsymbol{v}\) represent snapshots of this vector field, also known as observations in the machine learning context. The corresponding norm is defined as:

\[\|\boldsymbol{u}\|_{\Omega}:=\sqrt{(\boldsymbol{u},\boldsymbol{u})_{\Omega}}. \tag{2.2}\]

The distance \(D\) between two snapshots can be calculated as follows:

\[D(\boldsymbol{u},\boldsymbol{v})=\|\boldsymbol{u}-\boldsymbol{v}\|_{\Omega}. \tag{2.3}\]
The unsupervised \(k\)-means++ algorithm (MacQueen, 1967; Lloyd, 1982; Arthur & Vassilvitskii, 2007) is used for clustering. It operates automatically, requiring no prior assumptions or data labels. Serving as the foundation of cluster analysis, this algorithm partitions a set of \(M\) time-resolved snapshots \(\boldsymbol{u}^{m}\), where \(m=1\ldots M\), into \(K\) clusters \(\mathcal{C}_{k}\), where \(k=1\ldots K\). Each cluster corresponds to a centroidal Voronoi cell, with the centroid defined as the average of the snapshots within the same cluster. The algorithm comprises the following steps:
1. Initialisation: \(K\) centroids \(\boldsymbol{c}_{k}\), where \(k=1\ldots K\), are randomly selected. In contrast to the standard \(k\)-means algorithm, \(k\)-means++ optimises the initial placement of these centroids to reduce the sensitivity to the initialisation.
2. Assignment: Each snapshot \(\boldsymbol{u}^{m}\) is allocated to the nearest centroid by \(\underset{k}{\arg\min}\,D(\boldsymbol{u}^{m},\boldsymbol{c}_{k})\).
The characteristic function is used to mark their affiliation, and it is defined as follows:
\[\chi^{m}_{k}:=\left\{\begin{array}{ll}1,&\mbox{ if }\boldsymbol{u}^{m}\in\mathcal{C}_{k}\\ 0,&\mbox{ otherwise}\end{array}\right. \tag{2.4}\]
3. Update: Each centroid is recalculated by averaging all the snapshots belonging to the corresponding cluster as follows:
\[\boldsymbol{c}_{k}=\frac{1}{M_{k}}\sum_{\boldsymbol{u}^{m}\in\mathcal{C}_{k}}\boldsymbol{u}^{m}=\frac{1}{M_{k}}\sum_{m=1}^{M}\chi^{m}_{k}\boldsymbol{u}^{m}, \tag{2.5}\]
where
\[M_{k}=\sum_{m=1}^{M}\chi^{m}_{k}. \tag{2.6}\]

4. Iteration: The Assignment and Update steps are repeated until convergence is reached. Convergence means that the centroids no longer move, or that their displacement falls below a certain threshold. The algorithm minimises the intra-cluster variance and maximises the inter-cluster variance. The intra-cluster variance is computed as follows:
\[J\left(\boldsymbol{c}_{1},\ldots,\boldsymbol{c}_{K}\right)=\sum_{k=1}^{K}\sum_{m=1}^{M}\chi^{m}_{k}\left\|\boldsymbol{u}^{m}-\boldsymbol{c}_{k}\right\|_{\Omega}^{2}. \tag{2.7}\]
Each iteration reduces the value of the criterion \(J\) until convergence is reached.
The cluster probability distribution \(\boldsymbol{P}=[P_{1},\ldots,P_{K}]\) is determined by \(P_{k}=M_{k}/M\) for each cluster \(\mathcal{C}_{k}\), and satisfies the normalisation condition \(\sum_{k=1}^{K}P_{k}=1\).
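For concreteness, the first-stage clustering can be sketched in a few lines of Python. The use of scikit-learn and all variable names are our own illustration, not part of the original implementation; the snapshots are assumed to be flattened into rows so that the Euclidean distance between rows matches the discrete \(\mathcal{L}^{2}(\Omega)\) norm on an equispaced grid.

```python
import numpy as np
from sklearn.cluster import KMeans

M, K = 10000, 10                     # number of snapshots and clusters (illustrative)
U = np.random.randn(M, 3)            # placeholder for the snapshots u^m, one per row

# k-means++ initialised k-means, as in steps 1-4 above
km = KMeans(n_clusters=K, init="k-means++", n_init=10).fit(U)
labels = km.labels_                  # cluster affiliation of each snapshot (chi_k^m)
centroids = km.cluster_centers_      # centroids c_k, eq. (2.5)
P = np.bincount(labels, minlength=K) / M   # cluster probability distribution P_k
```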
The geometric properties of the clusters are quantified for further analysis. The cluster standard deviation on the snapshots \(R^{u}_{k}\) characterises the attractor, following Kaiser _et al._ (2014), as:
\[R_{k}^{u}=\sqrt{\frac{1}{M_{k}}\sum_{m=1}^{M}\chi_{k}^{m}\left\|\mathbf{u}^{m}-\mathbf{c}_{ k}\right\|_{\Omega}^{2}}. \tag{2.8}\]
The time-resolved snapshots should be equidistantly sampled and cover a statistically representative time window of the coherent structure evolution. As a rule of thumb, at least ten periods of the dominant frequency are needed to obtain reasonably accurate statistical moments and at least \(K\) snapshots per characteristic period to capture an accurate temporal evolution.
### Clustering the trajectory segments
After the state space is discretised, the trajectory is also divided into segments. We use the cluster transition information to identify the trajectory segments that pass through each cluster.
Based on the temporal information from the given data set, the nonlinear dynamics between snapshots are modelled as linear transitions between clusters, known as the classic CNM (Fernex _et al._, 2021; Li _et al._, 2021). We infer the probability of cluster transition from the data as follows:
\[Q_{ik}=\frac{n_{ik}}{n_{k}},\quad i,k=1,\ldots,K, \tag{2.9}\]
where \(Q_{ik}\) is the direct cluster transition probability from cluster \(C_{k}\) to \(C_{i}\) and \(n_{ik}\) is the number of transitions from \(C_{k}\) only to \(C_{i}\):
\[n_{ik}=\sum_{m=1}^{M}\chi_{ik}^{m}, \tag{2.10}\]
where

\[\chi_{ik}^{m}=\left\{\begin{array}{ll}1,&\text{if }\mathbf{u}^{m}\in C_{k}\ \&\ \mathbf{u}^{m+1}\in C_{i},\ i\neq k\\ 0,&\text{otherwise,}\end{array}\right. \tag{2.11}\]

so that trivial transitions within the same cluster are not counted.
\(n_{k}\) is the total number of transitions from \(C_{k}\) regardless of the destination cluster:
\[n_{k}=\sum_{i=1}^{K}n_{ik},\quad k=1,\ldots,K. \tag{2.12}\]
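As a minimal sketch, the matrix \(\mathbf{Q}\) can be inferred directly from the time-ordered label sequence produced by the first-stage clustering; the function below is our own illustration and follows the convention above that trivial transitions within the same cluster are not counted.

```python
import numpy as np

def transition_matrix(labels, K):
    """Direct transition probabilities Q[i, k] = n_ik / n_k, eq. (2.9)."""
    n = np.zeros((K, K))
    for a, b in zip(labels[:-1], labels[1:]):
        if a != b:                   # count only genuine cluster changes
            n[b, a] += 1             # one transition from C_a (= C_k) to C_b (= C_i)
    n_k = n.sum(axis=0)              # total number of transitions leaving each cluster
    return np.divide(n, n_k, out=np.zeros_like(n), where=n_k > 0)
```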
If \(Q_{ik}\neq 0\), it can be inferred that cluster \(C_{k}\) contains at least one trajectory segment that is bound for cluster \(C_{i}\). We assign a distinct label to each trajectory segment according to its destination cluster, denoted \(\mathcal{T}_{(kl)}\) for the \(l\)-th segment in \(\mathcal{C}_{k}\). The snapshots are therefore marked according to their trajectory affiliations by a characteristic function:
\[\chi_{(kl)}^{m}=\left\{\begin{array}{ll}1,&\text{if }\mathbf{u}^{m}\in\mathcal{T}_{ (kl)}\\ 0,&\text{otherwise}\end{array}\right. \tag{2.13}\]
where \(k\) represents the cluster affiliation, and \(l\) represents the trajectory segment affiliation. The total number of trajectory segments in \(\mathcal{C}_{k}\) equals \(n_{k}\). Note that the final trajectory segment of the data set will not be considered as it will not lead to any destination cluster and is usually incomplete. The total number of trajectory segments in the data set can be obtained by the sum of \(n_{k}\) as follows:
\[n_{\text{traj}}=\sum_{k=1}^{K}n_{k}. \tag{2.14}\]
Analogous trajectory segments within the same cluster will be merged in the subsequent clustering stage. Operations on the trajectories can often be costly. Efficiency in clustering can be achieved by mapping the operations performed on trajectory segments to their corresponding averages, i.e., the trajectory segment centroids, given their topological relationship. Additionally, the propagation of our model relies on centroids, rendering the trajectory information essentially unnecessary. We define the centroids \(\boldsymbol{c}_{(kl)}\) as the average of snapshots belonging to the same trajectory segment:
\[\boldsymbol{c}_{(kl)}=\frac{1}{M_{(kl)}}\sum_{\boldsymbol{u}^{m}\in\mathcal{T}_{(kl)}}\boldsymbol{u}^{m}=\frac{1}{M_{(kl)}}\sum_{m=1}^{M}\chi^{m}_{(kl)}\boldsymbol{u}^{m}, \tag{2.15}\]

where

\[M_{(kl)}=\sum_{m=1}^{M}\chi^{m}_{(kl)}. \tag{2.16}\]
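A possible implementation of the segment identification and averaging is sketched below (illustrative names; the final, incomplete segment is dropped, as noted above):

```python
import numpy as np

def segment_centroids(U, labels):
    """Trajectory segments T_(kl) and their centroids c_(kl), eqs. (2.15)-(2.16)."""
    labels = np.asarray(labels)
    change = np.flatnonzero(np.diff(labels)) + 1       # indices where the cluster changes
    starts = np.concatenate(([0], change))
    ends = np.concatenate((change, [len(labels)]))
    segments = {}
    for s, e in zip(starts[:-1], ends[:-1]):           # drop the final segment
        k = int(labels[s])
        segments.setdefault(k, []).append(U[s:e].mean(axis=0))
    return {k: np.asarray(c) for k, c in segments.items()}
```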
The next question is how to determine the number of sub-clusters. The allocation of sub-clusters within each cluster can be learned automatically from the data. To achieve a better spatial resolution, more sub-clusters should be assigned to clusters with more significant trajectory dispersion. We first introduce the standard deviation vector of the trajectory segments \(\boldsymbol{R}^{\mathcal{T}}\). The dispersion of trajectories in \(\mathcal{C}_{k}\) is represented by the standard deviation of the \(n_{k}\) centroids \(\boldsymbol{c}_{(kl)}\) with respect to the cluster centroid \(\boldsymbol{c}_{k}\), as follows:
\[R_{k}^{\mathcal{T}}=\sqrt{\frac{1}{n_{k}}\sum_{l=1}^{n_{k}}\left\|\boldsymbol {c}_{(kl)}-\boldsymbol{c}_{k}\right\|_{\Omega}^{2}}. \tag{17}\]
Next, we denote the number of sub-clusters as \(L_{k}\) for clustering the centroids in cluster \(\mathcal{C}_{k}\). A \(K\)-dimensional vector \(\boldsymbol{L}=[L_{1},\ldots,L_{K}]^{\top}\) records the numbers of sub-clusters in each cluster, with \(L_{k}\) determined by:
\[L_{k}=\min(\lfloor\hat{R}_{k}^{\mathcal{T}}n_{\text{traj}}(1-\beta)\rfloor+1,n_{k}). \tag{2.18}\]

Here the trajectory dispersion vector \(\boldsymbol{R}^{\mathcal{T}}\) is normalised by the sum \(\sum_{k=1}^{K}R_{k}^{\mathcal{T}}\) to give \(\boldsymbol{\hat{R}}^{\mathcal{T}}\), which ensures a suitable distribution of sub-clusters for the ensemble of \(n_{\text{traj}}\) trajectories. To increase the flexibility of the model, we introduce a sparsification controller \(\beta\in[0,1]\) in this clustering process. For the extreme value of \(\beta=1\), all the centroids are merged into one centroid, and the dCNM is identical to a classic CNM, with the maximum sparsification. For the other extreme \(\beta=0\), the dCNM is minimally sparsified according to the trajectory dispersion. For periodic or quasi-periodic systems, the dCNM with a large \(\beta\) can capture most of the dynamics, while for complex systems such as chaotic systems, a small \(\beta\) may be needed. In addition, the minimum function prevents the number of sub-clusters from exceeding the number of available centroids \(n_{k}\) when \(\beta\) is small, which would otherwise make the second-stage clustering infeasible.
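As a sketch of this allocation rule (again with illustrative names), eq. (2.18) translates directly into:

```python
import numpy as np

def subcluster_counts(R_T, n_k, n_traj, beta):
    """Numbers of sub-clusters L_k, eq. (2.18); R_T holds R_k^T, n_k the segment counts."""
    R_hat = R_T / R_T.sum()                            # normalised trajectory dispersion
    L = np.floor(R_hat * n_traj * (1.0 - beta)).astype(int) + 1
    return np.minimum(L, n_k)                          # never exceed the n_k available centroids
```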
The refined centroids are obtained by averaging a series of centroids related to analogous trajectory segments. The redundancy of the \(n_{\text{traj}}\) centroids is mitigated, and the corresponding transition network becomes sparse. The \(k\)-means++ algorithm is also used in the second-stage clustering. It will iteratively update the centroids \(\boldsymbol{c}_{(kl)}\) and the characteristic function \(\chi^{m}_{(kl)}\) until convergence or the maximum number of iterations is reached. The overall clustering process of the dCNM is summarised in Algorithm 1.
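The second-stage clustering itself can then be sketched by running \(k\)-means++ within each cluster on the segment centroids; as before, the code is a minimal illustration rather than the original implementation.

```python
import numpy as np
from sklearn.cluster import KMeans

def refine_centroids(segments, L):
    """Refine the segment centroids of each cluster C_k into L_k representatives."""
    refined = {}
    for k, c_kl in segments.items():
        n_clusters = min(int(L[k]), len(c_kl))         # guard against very small clusters
        km = KMeans(n_clusters=n_clusters, init="k-means++", n_init=10).fit(c_kl)
        refined[k] = km.cluster_centers_               # refined centroids of cluster C_k
    return refined
```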
### Characterising the transition dynamics
We use the centroids obtained from SS 2.2 as the nodes of the network and the linear transitions between these centroids as the edges of the network. First, we introduce two transition properties: the centroid transition probability \(Q_{(ij)(kl)}\) and the transition time \(T_{ik}\).
Figure 2 illustrates the definition of the subscripts in the centroid transition probability \(Q_{(ij)(kl)}\), which can contain all possible transitions between the refined centroids of clusters \(\mathcal{C}_{k}\) and \(\mathcal{C}_{i}\). Considering the transitions between these centroids, we define \(Q_{(ij)(kl)}\) as:
\[Q_{(ij)(kl)}=\frac{n_{(ij)(kl)}}{n_{k}},\quad i,k=1,\ldots,K,\quad j=1,\ldots, L_{i},\quad l=1,\ldots,L_{k}, \tag{2.19}\]
where \(n_{(ij)(kl)}\) is the number of transitions from \(\boldsymbol{c}_{(kl)}\) only to \(\boldsymbol{c}_{(ij)}\). This definition differs from that of the CNM, which uses the cluster transition \(Q_{ik}\) in (2.9) to define the probability. In fact, we can compute \(Q_{ik}\) by summing up \(Q_{(ij)(kl)}\) as follows:
\[Q_{ik}=\sum_{j=1}^{L_{i}}\sum_{l=1}^{L_{k}}Q_{(ij)(kl)}. \tag{2.20}\]
The definition of the transition time \(T_{ik}\) is identical to that of the CNM, as shown in figure 3. This property is not further investigated in the present work, as the transition time between the same pair of clusters varies little in most dynamical systems.

Let \(t^{n}\) be the instant when the first snapshot enters, and \(t^{n+1}\) the instant when the last snapshot leaves, on one trajectory segment passing through cluster \(\mathcal{C}_{k}\). The residence time \(\tau_{k}^{n}\) is the duration of the stay in cluster \(\mathcal{C}_{k}\) on this segment, given by:
\[\tau_{k}^{n}=t^{n+1}-t^{n}. \tag{2.21}\]
For an individual transition from \(\mathcal{C}_{k}\) to \(\mathcal{C}_{i}\), the transition time is defined as \(\tau_{ik}^{n}\), which can be obtained by the average of the residence times from both clusters:
\[\tau_{ik}^{n}=(\tau_{k}^{n}+\tau_{i}^{n})/2. \tag{2.22}\]
By averaging \(\tau_{ik}^{n}\) for all the individual transitions from \(\mathcal{C}_{k}\) to \(\mathcal{C}_{i}\), the transition time can be expressed as follows:
\[T_{ik}=\frac{\sum_{n=1}^{n_{ik}}\tau_{ik}^{n}}{n_{ik}}. \tag{2.23}\]
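A sketch of this estimate from the label sequence (illustrative names, residence times in units of the sampling step \(\Delta t\)):

```python
import numpy as np

def transition_times(labels, K, dt):
    """Average transition times T[i, k], eqs. (2.21)-(2.23)."""
    labels = np.asarray(labels)
    change = np.flatnonzero(np.diff(labels)) + 1
    starts = np.concatenate(([0], change))
    ends = np.concatenate((change, [len(labels)]))
    tau = (ends - starts) * dt            # residence time of each cluster visit, eq. (2.21)
    seq = labels[starts]                  # sequence of visited clusters
    T_sum, n = np.zeros((K, K)), np.zeros((K, K))
    for m in range(len(seq) - 1):
        k, i = seq[m], seq[m + 1]
        T_sum[i, k] += 0.5 * (tau[m] + tau[m + 1])     # individual time, eq. (2.22)
        n[i, k] += 1
    return np.divide(T_sum, n, out=np.zeros_like(T_sum), where=n > 0)
```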
The essential dynamics can also be summarised into single entities as in the CNM, since the cluster-level information is still retained in the current model. For completeness, we introduce the cluster transition probability matrix \(\mathbf{Q}\) and the cluster transition time matrix \(\mathbf{T}\) as:
\[\begin{array}{l}\mathbf{Q}=Q_{ik}\in\mathbb{R}^{K\times K},\quad i,k=1, \ldots,K\\ \mathbf{T}=T_{ik}\in\mathbb{R}^{K\times K},\quad i,k=1,\ldots,K.\end{array} \tag{2.24}\]
Figure 3: Individual transition time \(\tau_{ik}^{n}\) for the transition from cluster \(\mathcal{C}_{k}\) to \(\mathcal{C}_{i}\).
Figure 2: Illustration of the subscripts in the refined centroid transitions. After the state space is clustered, only one subscript is needed to distinguish the different clusters, such as \(\mathcal{C}_{k}\) and \(\mathcal{C}_{i}\). After the trajectory segments are clustered, two subscripts are needed to represent the refined centroids, such as \(\boldsymbol{c}_{(kl)}\) in \(\mathcal{C}_{k}\) and \(\boldsymbol{c}_{(ij)}\) in \(\mathcal{C}_{i}\).
The cluster indices are reordered in both matrices to enhance readability. \(\mathcal{C}_{1}\) is the cluster with the highest distribution probability, \(\mathcal{C}_{2}\) is the cluster with the highest transition probability leaving from \(\mathcal{C}_{1}\), \(\mathcal{C}_{3}\) is the cluster with the highest transition probability leaving from \(\mathcal{C}_{2}\), and so forth. If the cluster with the highest probability is already assigned, we choose the cluster with the second highest probability. If all the clusters with nonzero transition probabilities are already assigned, we choose the next cluster with the highest distribution probability among the rest.
By analogy with \(\mathbf{Q}\), the centroid transition probability \(Q_{\,(ij)\,(kl)}\) for given affiliations of the departure cluster \(k\) and destination cluster \(i\) can form a centroid transition matrix \(\mathbf{Q}_{ik}\) that captures all possible centroid dynamics between the two clusters:
\[\mathbf{Q}_{ik}=Q_{(ij)(kl)}\in\mathbb{R}^{L_{i}\times L_{k}},\quad j=1,\ldots,L_{i},\quad l=1,\ldots,L_{k}. \tag{2.25}\]
Moreover, to summarise the centroid transition dynamics, the centroid transition probability \(Q_{\,(ij)\,(kl)}\) for a given affiliation \(k\) of only the departure cluster can form a centroid transition tensor \(\mathcal{Q}_{k}\) that captures all the possible centroid dynamics from this cluster, as:
\[\mathcal{Q}_{k}=Q_{(ij)(kl)}\in\mathbb{R}^{K\times L_{i}\times L_{k}},\quad i=1,\ldots,K,\quad j=1,\ldots,L_{i},\quad l=1,\ldots,L_{k}. \tag{2.26}\]
The dCNM propagates the state motion based on the centroids \(\boldsymbol{c}_{\,(kl)}\) for the reconstruction. To determine the transition dynamics, we first use \(\mathcal{Q}_{k}\) to find the centroid transitions from the initial centroid \(\boldsymbol{c}_{\,(kl)}\) to the destination \(\boldsymbol{c}_{\,(ij)}\). As the destination centroids are determined, the cluster-level dynamics are determined correspondingly. Then, \(\mathbf{T}\) is used to identify the related transition time.
We assume a linear state propagation between the two centroids \(\boldsymbol{c}_{\,(kl)}\) and \(\boldsymbol{c}_{\,(ij)}\) obtained from the tensors, as follows:
\[\boldsymbol{u}^{m}(t)=\alpha_{ik}(t)\boldsymbol{c}_{(ij)}+\left[1-\alpha_{ik}(t)\right]\boldsymbol{c}_{(kl)},\quad\alpha_{ik}=\frac{t-t_{k}}{T_{ik}}. \tag{2.27}\]
Here \(t_{k}\) is the time when the centroid \(\boldsymbol{c}_{\,(kl)}\) is left. Note that we can use splines (Fernex _et al._, 2021) or add the trajectory supporting points (Hou _et al._, 2022) to interpolate the motion between the centroids for smoother trajectories.
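For illustration, a single propagation step between two refined centroids reads as follows (our own minimal sketch of eq. (2.27); splines would replace the linear weight for smoother trajectories):

```python
import numpy as np

def propagate(c_kl, c_ij, T_ik, t_k, t):
    """Linearly interpolated state between centroids c_(kl) and c_(ij), eq. (2.27)."""
    alpha = (t - t_k) / T_ik             # fraction of the transition completed
    return alpha * np.asarray(c_ij) + (1.0 - alpha) * np.asarray(c_kl)
```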
Intriguingly, we observe that the trajectory-based clustering of the dCNM enhances the resolution of the cluster transitions. Now each centroid only has a limited number of destination centroids, often within the same cluster. This minimises the likelihood of selecting the wrong destination cluster based solely on the cluster transition probability matrix, as is the case in classic CNM. Consequently, it becomes feasible to accurately resolve long-term cluster transitions without the need for historical information. It can be argued that dCNM effectively constrains cluster transitions, leading to outcomes similar to those obtained with the higher-order CNM (Fernex _et al._, 2021). This improvement is attained by replacing higher-order indexing with higher-dimensionality dual indexing. Specifically, the dual indexing also results in a substantial reduction in the model complexity. While the complexity of the high-order CNM is defined as \(K^{\tilde{L}}\), where \(K\) is the number of clusters and \(\tilde{L}\) is the order, the model complexity of the dCNM is expressed as \(\sum_{k=1}^{K}L_{k}\), which is a significantly smaller value, particularly when \(\tilde{L}\) is relatively large.
### Validation
The auto-correlation function and the representation error are used for validation. We examine the prediction errors for cluster-based models considering both spatial and temporal perspectives. The spatial error arises from the inadequate representation by cluster centroids, as evidenced by the representation error and the auto-correlation function. The temporal error arises due to the imprecise reconstruction of intricate snapshot transition dynamics.
This can be observed directly through the temporal evolution of snapshot affiliations and, to some extent, through the auto-correlation function.
The auto-correlation function is a practical tool for evaluating ROMs, as it can statistically reflect the prediction errors. Additionally, the auto-correlation function circumvents the problem of directly comparing two trajectories with finite prediction horizons, which may suffer from phase mismatch (Fernex _et al._, 2021). This is particularly relevant for chaotic dynamics, whereby minor differences in initial conditions can lead to divergent trajectories, making the direct comparison of time series meaningless. The unbiased auto-correlation function of the state vector (Protas _et al._, 2015) is given by:

\[R(\tau)=\frac{1}{T-\tau}\int_{0}^{T-\tau}(\mathbf{u}(x,t),\mathbf{u}(x,t+\tau))_{\Omega}\,\mathrm{d}t,\quad\tau\in[0,T]. \tag{2.28}\]

In this study, \(R(\tau)\) will be normalised by \(R(0)\) (Deng _et al._, 2022). This function can also infer the spectral behaviour by computing the fluctuation energy at the vanishing delay.
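On equidistantly sampled data, the time integral in eq. (2.28) reduces to an average over samples; a minimal sketch (illustrative names, rows as snapshots) is:

```python
import numpy as np

def autocorrelation(U, n_lags):
    """Unbiased auto-correlation R(tau), eq. (2.28), for tau = 0, ..., n_lags-1 steps."""
    M = U.shape[0]
    R = np.empty(n_lags)
    for s in range(n_lags):
        # average of the inner products between snapshots separated by s steps
        R[s] = np.mean(np.einsum("mi,mi->m", U[: M - s], U[s:]))
    return R / R[0]                      # normalised by R(0), as in the text
```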
The representation error can be numerically computed as:
\[E_{r}=\frac{1}{M}\sum_{m=1}^{M}D_{\mathcal{T}}^{m}, \tag{2.29}\]
where \(D_{\mathcal{T}}^{m}\) is the minimum distance from the snapshot \(\mathbf{u}^{m}\) to the states on the reconstructed trajectory \(\mathcal{T}\):
\[D_{\mathcal{T}}^{m}=\min_{\mathbf{u}^{n}\in\mathcal{T}}\left\|\mathbf{u}^{m}-\mathbf{u}^{n}\right\|_{\Omega}. \tag{2.30}\]
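In practice, the minimum in eq. (2.30) amounts to a nearest-neighbour search over the states of the reconstructed trajectory; a sketch using SciPy's KD-tree (our own choice of tool) is:

```python
import numpy as np
from scipy.spatial import cKDTree

def representation_error(U_data, U_model):
    """Representation error E_r, eqs. (2.29)-(2.30)."""
    d_min, _ = cKDTree(U_model).query(U_data)   # minimum distance D_T^m per snapshot
    return d_min.mean()                          # average over all M snapshots
```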
## 3 Lorenz system as an illustrative example
In this section, we apply the dCNM to the Lorenz system (Lorenz, 1963) to illustrate its superior spatial resolution in handling multiscale dynamics. We also compare it with a CNM (Li _et al._, 2021; Fernex _et al._, 2021) of the same rank as a reference.
The Lorenz system is a three-dimensional autonomous system with non-periodic, deterministic, and dissipative dynamics that exhibit exponential divergence and convergence to strange fractal attractors. The system is governed by three coupled nonlinear differential equations:
\[\begin{split}\mathrm{d}x/\mathrm{d}t&=\sigma(y-x),\\ \mathrm{d}y/\mathrm{d}t&=x(\rho-z)-y,\\ \mathrm{d}z/\mathrm{d}t&=xy-\beta z.\end{split} \tag{3.1}\]
The system parameters are set as \(\sigma=10\), \(\rho=28\), and \(\beta=8/3\). These equations emulate Rayleigh-Bénard convection. The trajectory of the system revolves around two weakly unstable oscillatory fixed points, forming two attractor lobes that are loosely called "ears". These two ears have similar but not identical shapes, with the left ear being rounder and thicker in the toroidal region. The region where the ears overlap is called the branching region. The Lorenz system has two main types of dynamics. One is that the loop within each ear varies in amplitude while oscillating for several cycles. The other is that the trajectory may randomly switch from one ear to the other in the branching region and resume oscillatory motion.
We numerically integrate the system using the fourth-order explicit Runge-Kutta method. A time-resolved data set of 10000 snapshots with \(\mathbf{u}^{m}=[x,y,z]^{\intercal}\) is collected at a sampling time step of \(\Delta t=0.015\), starting from the initial condition \([-3,0,31]^{\intercal}\) (Fernex _et al._, 2021). This time step corresponds to approximately one-fiftieth of a typical cycle period. The first 5% of the snapshots are discarded to retain only the post-transient dynamics.
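A minimal sketch of this data generation, using a hand-written classical fourth-order Runge-Kutta step with the parameters quoted above:

```python
import numpy as np

def lorenz(u, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = u
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

dt, M = 0.015, 10000
U = np.empty((M, 3))
U[0] = [-3.0, 0.0, 31.0]                 # initial condition
for m in range(M - 1):                   # classical RK4 step
    k1 = lorenz(U[m])
    k2 = lorenz(U[m] + 0.5 * dt * k1)
    k3 = lorenz(U[m] + 0.5 * dt * k2)
    k4 = lorenz(U[m] + dt * k3)
    U[m + 1] = U[m] + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
U = U[M // 20:]                          # discard the first 5% (transient dynamics)
```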
Figure 4 shows the phase portrait of the clustered Lorenz system from the CNM and dCNM. We set \(K=10\) for the state space clustering of the dCNM, which is consistent with previous studies (Kaiser _et al._, 2014; Li _et al._, 2021). This number is large enough to allow a further subdivision of the transition dynamics, yet small enough to yield a simple, interpretable structure. The sparsification controller is set to a large value, \(\beta=0.90\), to allow a distinct visualisation of the centroids. In addition, since the trajectory in each "ear" is confined to a two-dimensional surface, a high value of \(\beta\) is deemed suitable.
The two models exhibit notable differences in centroid distribution. CNM clustering relies solely on the spatial topology in the phase space, evenly dividing the entire attractor and dispersing centroids uniformly throughout the phase portrait. It can be inferred that increasing the number of centroids under this uniform distribution does not lead to substantial changes, merely resulting in a denser centroid distribution. This uniform distribution possesses certain disadvantages regarding the dynamics. First, it unnecessarily complicates the transition rhythm as the deterministic large-scale transition may be fragmented into several stochastic transitions. Second, even with many centroids, it fails to capture the increasing oscillation amplitude between the loops in one ear, as the uniform distribution provides only a limited number of centroid orbits. The same result occurs for the branching region where these limited numbers of centroids usually oversimplify the switch between ears. In contrast, the distribution of the dCNM centroids resembles a weighted reallocation. For the Lorenz system, the state space is stratified along the trajectory direction, leading to a concentrated distribution of the dCNM centroids in the radial direction of the attractor and the branching region, which correspond to the system's primary dynamics. Additionally, varying quantities of the centroids can be observed in the radial direction in the toroidal region, depending on its thickness. In thinner toroidal regions with smaller variations between trajectory segments, the second-stage clustering assigns fewer sub-clusters and, consequently, builds fewer centroids.
The cluster transition matrices, which are a distinctive feature of cluster modelling, are preserved because the dCNM maintains the coarse-grained transitions at the cluster level. Figure 5 illustrates the cluster transition probability matrix \(\mathbf{Q}\) and the corresponding transition
Figure 4: Phase portrait of the clustered Lorenz system from the CNM and dCNM. The small dots represent the snapshots, and the large dots represent the centroids. Snapshots and centroids with the same colour belong to the same cluster. As a comparison, the CNM result in (a) is shown with the same number of centroids as the corresponding dCNM result. The dCNM result in (b) is shown with \(K=10\) and \(\beta=0.90\).
time matrix \(\mathbf{T}\) to illustrate the significant dynamics of the Lorenz system. It is worth noting that in the case of the CNM with an equivalent number of centroids, the matrices become considerably larger, which diminishes their readability and interpretability. The matrices reveal three distinct cluster groups. The first group comprises clusters \(\mathcal{C}_{1}\) and \(\mathcal{C}_{2}\), which resolve the branching region and exhibit similar transition probabilities to clusters \(\mathcal{C}_{3}\) and \(\mathcal{C}_{7}\). The branching region is further linked to different ears and is crucial to the attractor oscillation. Clusters \(\mathcal{C}_{1}\) and \(\mathcal{C}_{2}\) can be referred to as flipper clusters (Kaiser et al., 2014), representing a switch between the different groups. The equivalent transition probability from \(\mathcal{C}_{2}\) is consistent with the random jumping behaviour of the two ears. The other two groups demonstrate an inner-group circulation corresponding to the main components of the two ears, exemplified by the cluster chains \(\mathcal{C}_{3}\to\mathcal{C}_{4}\to\mathcal{C}_{5}\to\mathcal{C}_{6}\) and \(\mathcal{C}_{7}\to\mathcal{C}_{8}\to\mathcal{C}_{9}\to\mathcal{C}_{10}\). These chains exhibit deterministic transition probabilities that resolve the cyclic behaviour. In the second-stage clustering, these two groups are further categorised into numerous centroid orbits. Moreover, the transition time matrix resolves the variance in the transition times, with significantly shorter transition times observed in the cyclic groups compared to transitions involving the flipper clusters.
The original and reconstructed trajectories in the phase space are directly compared. We focus solely on the spatial resolution, disregarding phase mismatches during temporal evolution. Figure 6 shows the original Lorenz system and the reconstruction by the CNM and dCNM with the same parameters as in figure 4. To ensure clarity, we select a time window from \(t=0\) to \(t=30\) for the trajectories and employ spline interpolation for a smooth reconstruction. Inaccurate or non-physical centroid transitions, along with incomplete dynamic coverage, can lead to substantial deformations in the reconstructed trajectory. As expected, the dCNM provides a more accurate reconstruction than the CNM. The CNM uses a finite number of centroid orbits to represent oscillating attractors, converting slow and continuous amplitude growth into limited and abrupt amplitude jumps. Furthermore, the CNM may group one continuous snapshot loop into clusters belonging to different centroid orbits, often when these clusters are adjacent to each other. This can lead to unnecessary orbit-crossing centroid transitions and result in nonphysical radial jumps in the reconstructed trajectory. In contrast, the dCNM provides more comprehensive dynamic coverage, resolving more cyclic behaviour with additional centroid orbits. Dual indexing also guarantees accurate centroid transitions. The radial jumps are eliminated, as departing centroids can only transition to destination centroids within the same centroid orbits. Consequently, oscillations
Figure 5: Transition matrices of the Lorenz system. The colour bar indicates the values of the terms. (a) Transition probability matrix \(\mathbf{Q}\). (b) Transition time matrix \(\mathbf{T}\).
are effectively resolved by the centroid orbits, and transitions between them are constrained by densely distributed centroids in the branching region, ensuring a smoothly varied oscillation.
The auto-correlation function is computed to reflect the model accuracy, as shown in figure 7. In the original data set, the normalised auto-correlation function \(R(\tau)/R(0)\) vanishes smoothly as \(\tau\) increases, and the variance between the periodic behaviour can be clearly observed. However, the CNM reconstruction captures only the first four periods of oscillation dynamics. As \(\tau\) increases, there is a sudden amplitude decay accompanied by a phase mismatch. This can be attributed to amplitude jumps between the centroid loops and commonly occurring orbit-crossing transitions. In contrast, the dCNM reconstruction accurately captures both the amplitude and frequency of the oscillation dynamics, demonstrating robust and precise long-timescale behaviours.
Figure 6: Trajectory of the Lorenz system. The thin grey curve represents the original trajectory, the thick red curve represents the reconstructed trajectory, and the red dots represent the centroids. (a) The CNM reconstruction and (b) the dCNM reconstruction are performed with the same parameters as in figure 4.
Figure 7: Auto-correlation function for \(\tau\in[0,30)\) of the Lorenz system. The thin black curves represent the original data set, and the thick red curves represent the models: (a) CNM and (b) dCNM.
## 4 Dynamics-augmented modelling of the sphere wake
In this section, we evaluate the performance of the dCNM for the quasi-periodic and chaotic flow regimes of the sphere wake, which exhibit multi-frequency and multiscale dynamics. The numerical method for obtaining the flow field data set and the flow characteristics is presented in SS 4.1. The demonstration of the dCNM for the quasi-periodic and chaotic flow regimes is introduced in SS 4.2 and SS 4.3, respectively. The physical interpretation of the modelling strategy is discussed in SS 4.4.
### Numerical methods and flow features
Numerical simulation is performed to obtain the data set, as shown in figure 8. A sphere with a diameter \(D\) is placed in a uniform flow with a streamwise velocity \(U_{\infty}\). The computational domain takes the form of a cylindrical tube, with its origin at the centre of the sphere and its axial direction along the streamwise direction (\(x\)-axis). The dimensions of the domain in the \(x\), \(y\), and \(z\) directions are \(80D\), \(10D\), and \(10D\), respectively. The inlet is located \(20D\) upstream from the sphere. These specific domain parameters are chosen to minimise any potential distortion arising from the outer boundary conditions while also mitigating computational costs (Pan _et al._, 2018; Lorite-Díez & Jiménez-González, 2020). The fluid flow is governed by the incompressible Navier-Stokes equations:
\[\begin{split}&\partial\mathbf{u}/\partial t+\mathbf{u}\cdot\nabla\mathbf{u}+ \nabla p-\nabla^{2}\mathbf{u}/Re=0,\\ &\nabla\cdot\mathbf{u}=0,\end{split} \tag{4.1}\]
where \(\mathbf{u}\) denotes the velocity vector \((u_{x},u_{y},u_{z})\), \(p\) is the static pressure, and \(Re\) is the Reynolds number, which is defined by:
\[Re=U_{\infty}D/\nu, \tag{4.2}\]
where \(\nu\) is the kinematic viscosity.
The net forces on the sphere have three components \(F_{\alpha}\), \(\alpha=x\), \(y\), \(z\), and the corresponding force coefficients \(C_{\alpha}\) are defined as:
\[C_{\alpha}=\frac{2F_{\alpha}}{\rho U_{\infty}^{2}S}, \tag{4.3}\]
where \(S=\pi D^{2}/4\) is the projected surface area of the sphere in the streamwise direction. The
Figure 8: Numerical sketch of the sphere wake.
total drag force coefficient is \(C_{D}=C_{x}\). Since the lift coefficient can have any direction in the \(yz\) plane on the axisymmetric sphere, the total lift force coefficient \(C_{L}\) is given by:
\[C_{L}=\sqrt{C_{y}^{2}+C_{z}^{2}}. \tag{4.4}\]
The flow parameters are non-dimensionalised based on the characteristic length \(D\) and the free stream velocity \(U_{\infty}\). This implies that time scales with \(D/U_{\infty}\) and pressure with \(\rho U_{\infty}^{2}\), where \(\rho\) is the density. The Strouhal number \(St\) is correspondingly expressed as:

\[St=f, \tag{4.5}\]
where \(f\) is the characteristic frequency.
ANSYS Fluent 15.0 is used as the CFD solver for the governing equations with the cell-centred finite volume method (FVM). We impose a uniform streamwise velocity \(\mathbf{u}=[U_{\infty},0,0]\) at the inlet boundary and an outflow condition at the outlet boundary. The outflow condition is set as a Neumann condition for the velocity, \(\partial_{x}\mathbf{u}=[0,0,0]\), and a Dirichlet condition for the pressure, \(p_{\mathrm{out}}=0\). We apply a no-slip boundary condition on the sphere surface and a slip boundary condition on the cylindrical tube walls to prevent wake-wall interactions. The pressure-implicit with splitting of operators (PISO) algorithm is chosen for the pressure-velocity coupling. For the governing equations, a second-order scheme is used for the spatial discretisation, and a first-order implicit scheme is used for the temporal term. To satisfy the Courant-Friedrichs-Lewy (CFL) condition, a small integration time step of \(\Delta t=0.01\) non-dimensional time units is set, such that the Courant number is below 1 for all simulations. The simulations are conducted for \(t=500\) time units for the quasi-periodic flow and \(t=700\) time units for the chaotic flow, and snapshots are collected at a sampling time step of \(\Delta t_{s}=0.2\) time units. Moreover, we discard the first 200 time units to eliminate any transient phases. Comparable numerical approaches can be found in Johnson & Patel (1999) and Rajamuni _et al._ (2018). For the convergence and validation studies, see Appendix A.
The wake of a sphere exhibits different flow regimes as \(Re\) increases, ultimately transitioning to a chaotic state. At \(Re=20\sim 24\), flow separation occurs, forming a steady recirculating bubble, as observed in previous studies (Sheard _et al._, 2003; Eshbal _et al._, 2019). The length of this wake grows linearly with \(\ln(Re)\). When \(Re\) surpasses 130 (Taneda, 1956), the wake bubble starts oscillating in a wave-like manner, while the flow maintains axisymmetry. The first Hopf bifurcation takes place at \(Re\approx 212\) (Fabre _et al._, 2008), leading to a loss of axisymmetry and the emergence of a planar-symmetric double-thread wake with two stable and symmetric vortices. The orientation of the symmetry plane can vary (Johnson & Patel, 1999). At a subsequent Hopf bifurcation around \(Re=270\sim 272\) (Johnson & Patel, 1999; Fabre _et al._, 2008), the flow becomes time-dependent, initiating periodic vortex shedding with the same symmetry plane as before. In the range \(272<Re<420\) (Eshbal _et al._, 2019), periodicity and the symmetry plane diminish, with the vortex shedding becoming quasi-periodic and then fully three-dimensional. Beyond \(Re=420\), shedding becomes irregular and chaotic (Ormières & Provansal, 1999; Eshbal _et al._, 2019; Pan _et al._, 2018), due to the azimuthal rotation of the separation point and lateral oscillations of the shedding.
In this study, we examine two baseline flow regimes of a sphere wake: quasi-periodic flow at \(Re=330\) and chaotic flow at \(Re=450\). Figure 9 illustrates the flow characteristics of the two regimes. Figure 9 (a) shows the instantaneous vortex structures of the quasi-periodic flow, identified by the \(Q\)-criterion and colour-coded by the non-dimensional velocity \(U_{\infty}\). The vortex shedding forms hairpin vortices with slight variations between successive shedding events, signifying the absence of short-term periodicity while retaining long-term
Figure 9: Flow characteristics of the 3D sphere wake. The quasi-periodic flow regime at \(Re=330\) is displayed by the (a) vortex structures, where the vortices are identified by the \(Q\)-criterion and colour-coded by the non-dimensional velocity \(U_{\infty}\), (b) temporal evolution of the lift coefficient \(C_{L}\), (c) phase portrait of \(C_{L}\), and (d) power spectral density of \(C_{L}\) on time series of length \(T_{\rm traj}=100\) (red curve) and \(T_{\rm traj}=300\) (black curve). The chaotic flow regime at \(Re=450\) is displayed by the (e) vortex structures, (f) temporal evolution of \(C_{L}\), (g) phase portrait of \(C_{L}\), and (h) power spectral density of \(C_{L}\) on a time series of length \(T_{\rm traj}=500\).
periodic behaviour. Figure 9 (b) and (c) show the lift coefficient \(C_{L}\) and its phase portrait, respectively. The amplitude of \(C_{L}\) is strongly associated with the quasi-periodic dynamics, and the modulation also thickens the limit cycle of the oscillator on the phase portrait. The power spectral density in figure 9 (d) indicates two dominant frequencies: a higher frequency linked to natural shedding and a lower frequency associated with amplitude modulation resulting from variations between shedding events. For the chaotic flow, periodicity entirely vanishes, and the flow regime displays the typical features of a chaotic system. The hairpin vortices in figure 9 (e) shed irregularly, with varying separation angles and even double spirals. The temporal evolution of \(C_{L}\) in figure 9 (f) exhibits more complex dynamics, with the phase diagram in figure 9 (g) depicting many random loops that no longer exhibit circular patterns. Furthermore, the power spectral density of \(C_{L}\) in figure 9 (h) shows a broad peak, also indicating chaotic features.
We performed a lossless POD preprocessing on the snapshots to reduce the computational cost of clustering the three-dimensional flow field data set, as described in Appendix B. This preprocessing is optional and does not affect the distance measure in the clustering algorithm. For consistency, the term snapshot is retained in the following for the preprocessed data.
### The quasi-periodic flow regime at \(Re=330\)
The clustering results of the CNM and dCNM for the quasi-periodic flow regime are shown in figure 10. We employ classical multidimensional scaling (MDS) to project both the high-dimensional snapshots and the centroids onto a three-dimensional subspace \([a_{1},a_{2},a_{3}]^{\top}\) for visualisation purposes. We set \(K=10\) for the state space clustering, and \(\beta=0.80\) for the subsequent clustering. This particular value of \(\beta\) proves adequate for capturing the quasi-periodic dynamics and ensuring clarity in visualisation. Additional results obtained using various \(\beta\) values are presented in Appendix C for reference. In the three-dimensional subspace, the snapshots collectively form a hollow cylinder. The system's dynamics are
Figure 10: Three-dimensional visualisation of the clustered quasi-periodic flow regime of the sphere wake at \(Re=330\). Classical multidimensional scaling (MDS) is applied to the data set to represent the high-dimensional snapshots and centroids in the subspace. The small dots represent the snapshots, and the large dots represent the centroids. Snapshots and centroids with the same colour belong to the same cluster. For comparison, the CNM result in (a) is shown with the same number of centroids as the corresponding dCNM result. The dCNM result in (b) is shown with \(K=10\) and \(\beta=0.80\).
chiefly governed by two underlying physical phenomena: a cyclic behaviour that synchronises with natural vortex shedding and a quasi-stochastic component responsible for introducing variations between cycles, which is in turn synchronised with the oscillator amplitude.
The centroid distribution of the CNM reveals that the clustering algorithm fails to distinguish between the shedding dynamics and inter-cycle variations. It uniformly groups them based solely on spatial topology. Nonetheless, the CNM centroids effectively capture the cyclic behaviour, as there exist deterministic transitions between adjacent centroids within an orbit, forming a limit cycle structure akin to the "ear" of the Lorenz system. However, this centroid distribution inadequately models the quasi-stochastic component, as it overlooks the inter-cycle transitions. To comprehensively represent this dynamic, clear transitions between the limit cycles are essential. The clustering process obscures these transitions, causing the quasi-stochastic behaviour to resemble a random walk governed by a fully stochastic process. In essence, the clustering process cannot differentiate between the random jumps in the Lorenz system and the quasi-stochastic behaviour in this flow regime. This explains why the CNM often struggles with multifrequency problems. In contrast, the dCNM centroids automatically align along the axial direction of the cylinder with equidistant circumferential spacing, resulting in a greater number of centroid orbits compared to the CNM. This enhancement enables the accurate resolution of inter-cycle variations. For the quasi-stochastic behaviour, the denser and occasionally overlapping centroids in the axial direction ensure precise spatial representation of the transitions between the limit cycles. Additionally, this behaviour can be further constrained by the dual indexing approach for long-timescale periodicity, eliminating random jumps and ensuring accurate transitions between limit cycles.
The cluster transition matrices of the quasi-periodic flow regime are illustrated in figure 11. The quasi-periodic dynamics are evident from \(\mathbf{Q}\), which displays dominant transition probabilities corresponding to deterministic cyclic behaviour and minor wandering transitions signifying inter-cycle variations. Cluster \(\mathcal{C}_{4}\) serves as the transition cluster with two destination clusters, \(\mathcal{C}_{5}\) and \(\mathcal{C}_{10}\). The two destination clusters have similar transition probabilities since they are visited for comparable times during the quasi-periodic transitions. The transition cluster bridges the deterministic cluster chains \(\mathcal{C}_{1}\to\mathcal{C}_{2}\to\mathcal{C}_{3}\to\mathcal{C}_{4}\) and \(\mathcal{C}_{6}\to\mathcal{C}_{7}\to\mathcal{C}_{9}\to\mathcal{C}_{1}\) as two different limit cycles through two short cluster chains: \(\mathcal{C}_{4}\to\mathcal{C}_{5}\to\mathcal{C}_{6}\) and \(\mathcal{C}_{4}\to\mathcal{C}_{10}\to\mathcal{C}_{6}\). These two limit cycles alternate with a fixed order, ultimately forming an extended cluster chain that constitutes the fundamental elements of the long-term periodicity. However, this characteristic is not effectively portrayed in the transition
Figure 11: Same as figure 5, but for the quasi-periodic flow at \(Re=330\). (a) Transition probability matrix \(\mathbf{Q}\). (b) Transition time matrix \(\mathbf{T}\).
matrix. The purely probabilistic transitions from this matrix can result in arbitrary cluster transitions within the network model, introducing additional transition errors. Since the CNM relies on this cluster-level matrix, these transition errors present a notable challenge. The transition tensors \(\mathcal{Q}_{k}\), which resolve the refined centroid transitions, mitigate this issue; the tensors and the corresponding centroid transition matrices are further discussed in Appendix D. The time matrix \(\mathbf{T}\) reveals that the transitions within a cyclic behaviour possess a generally similar time scale, with residence times in adjacent clusters changing smoothly, indicating the presence of a gradually evolving limit cycle.
The original and reconstructed trajectories using the CNM and dCNM for the quasi-periodic flow regime are displayed in figure 12. The reconstruction is achieved with the same parameters as in figure 10. As anticipated, the trajectory reconstructed by the CNM undergoes substantial deformation, featuring discontinuous cyclic behaviours and a serrated trajectory. Conversely, the dCNM produces cleaner cyclic behaviours with more noticeable variations. The reconstructed trajectory accurately replicates the intersecting limit cycles and guides the inter-cycle transition with reduced spatial errors. These observations highlight the ability of the dCNM centroids to capture significant dynamics without assuming any prior knowledge of the data set.
In the following sections, we shift our focus to the temporal aspects. Initially, we explore the cluster and trajectory segment affiliation for each snapshot in both the original data set and the dCNM reconstruction to illustrate the accuracy of transition dynamics, as depicted in figure 13. We maintain the same parameters as those used in figure 10 for the reconstruction. The affiliation of the original data reveals that the dual clustering effectively represents the quasi-periodic dynamics. The transition dynamics exhibit significant regularity, with centroids being sequentially and periodically visited, confirming deterministic transitions. Each period of centroid visits corresponds to an extended cluster chain, encompassing multiple centroid orbits and capturing cycle-to-cycle variations. The periodic visits of these extended cluster chains are instrumental in determining the long-timescale periodicity. These transition characteristics are fully preserved by the dCNM due to the dual indexing constraint. In this case, each departure centroid corresponds to only one destination centroid, eliminating the stochastic transition in the model and mitigating the transition errors.
Figure 12: Trajectory of the quasi-periodic flow at \(Re=330\). The thin grey curve represents the original trajectory, the thick red curve represents the reconstructed trajectory and the red dots represent the centroids. (a) The CNM reconstruction and (b) the dCNM reconstruction are obtained with the same parameters as in figure 10.
Figure 13: Temporal evolution of the cluster and trajectory segment affiliations for the original data set and the dCNM reconstruction; the \(x\) axis denotes time, the \(y\) axis the trajectory segment affiliation \(l\), and the \(z\) axis the cluster affiliation \(k\). The reconstruction is achieved with the same parameters as in figure 10.
The power spectra of the original data exhibit a dominant low frequency at \(f=0.05\), representing long-timescale periodicity. However, the CNM spectrum shows significant noise and lacks a clear dominant frequency due to frequent transition errors. This observation supports the CNM's limitation in capturing multifrequency dynamics effectively. By incorporating historical information, the high-order CNM demonstrates superior performance, producing a cleaner spectrum closely aligned with the dominant frequency of the CFD data. Remarkably, the dCNM outperforms all other models by precisely reconstructing both the frequency and amplitude while minimising noise.
### The chaotic flow regime at \(Re=450\)
Similar to the quasi-periodic flow regime, the clustering results of the CNM and dCNM with \(K=10\) and \(\beta=0.5\) for the chaotic flow regime are illustrated in figure 15. The results with different \(\beta\) for this flow regime are also presented in Appendix C. As the dynamics become more complex, the snapshots form a chaotic cloud, which is driven by numerous cyclic behaviours of different scales and indicates irregular three-dimensional vortex shedding. The CNM continues to cluster the data set primarily based on spatial properties, essentially dividing the chaotic cloud into different segments in an evenly distributed manner. Figure 15 (a) illustrates this process, with the uniformly spread centroids capturing only part of one whole cyclic behaviour, limiting their ability to resolve the multiscale dynamics. The dCNM centroids concentrate in regions of rich dynamics, enabling a more comprehensive resolution of the cyclic behaviours. These centroids, in various combinations, form the basis of multifrequency and multiscale cyclic behaviour. Even after sparsification, the dCNM centroids can encompass a significant amount of scale diversity by merging only those that are spatially close to each other.
The cluster transition matrices of the chaotic flow are illustrated in figure 16. The probability matrix \(\mathbf{Q}\) in figure 16 (a) shows that most of the clusters have three or more destination clusters, indicating complex transition dynamics among them. Several dominant transition loops are identifiable, such as the large-size cluster chain: \(C_{1}\to C_{2}\to C_{3}\to C_{4}\to C_{5}\to C_{6}\to C_{1}\), the mid-size cluster chains: \(C_{1}\to C_{2}\to C_{3}\to C_{4}\to C_{5}\to C_{1}\) and \(C_{3}\to C_{4}\to C_{5}\to C_{6}\to C_{7}\to C_{3}\), and the small-size cluster chain: \(C_{6}\to C_{7}\to C_{8}\to C_{6}\). These cluster chains with different lengths represent cyclic loops at different scales. The small number of chains facilitates human understanding of the transition dynamics but is insufficient
Figure 15: Same as figure 10, but for the chaotic flow of the sphere wake at \(Re=450\). (a) The CNM result with the same number of centroids as the dCNM. (b) The dCNM result with \(K=10\) and \(\beta=0.50\).
for accurately capturing the dynamics. The dominant loops contain key transition clusters, through which the trajectory can randomly jump from one loop to another by choosing periodic or stochastic routes. This is where the transition error often occurs. The time matrix \(\mathbf{T}\) in figure 16 (b) shows the difference in the transition times between different types of transitions. For the main loops, the time scale changes smoothly within their transitions. However, for the jumps between the loops, the time scale fluctuates considerably, and some transition times can be very long, showing the diversity of the dynamics. Moreover, this observation implies that the distribution density of snapshots differs among clusters. In other words, the distribution of the trajectory segments in different clusters also exhibits significant variations. This explains the necessity of determining the number of sub-clusters in the second-stage clustering based on the deviation \(\boldsymbol{R}^{\mathcal{T}}\). The refined transition matrices between the centroids are shown in Appendix D.
The original trajectory and the reconstructed trajectory by the CNM and dCNM of this flow regime are shown in figure 17. The reconstruction is achieved with the same parameters as in figure 15. For a clear visualisation, only the trajectories from the first half of the
Figure 16: Same as figure 5, but for the chaotic flow at \(Re=450\). (a) Transition probability matrix \(\mathbf{Q}\). (b) Transition time matrix \(\mathbf{T}\).
Figure 17: Same as figure 6, but for the chaotic flow at \(Re=450\). (a) The CNM reconstruction and (b) the dCNM reconstruction are obtained from the same parameters as in figure 15. The non-smooth trajectories are mainly caused by the spline interpolation between the finite centroids in a limit cycle.
entire time window are plotted. This selection suffices to analyse the precision of the current trajectory, as it contains ample dynamics. We exclude trajectory discrepancies triggered by phase mismatch and focus exclusively on the accuracy of the present trajectory. In the case of the CNM, noticeable disparities exist between the original trajectory and the reconstructed trajectory. These differences include variations in the shape, spatial location, and inclination angle of the cyclic loops. These disparities can be attributed to the elimination of small-scale structures and the blending of certain large-scale structures due to the uniform distribution of centroids. Regarding the dCNM, the reconstructed trajectory nearly occupies the entire chaotic cloud, closely resembling the original trajectory. The external and internal geometries are accurately reproduced, capturing both large-scale and small-scale structures. However, despite the improved accuracy, some deformations persist. These deformations arise from the interpolations between the limited centroids during one single cyclic loop. Notably, due to its complexity, achieving a superior reconstruction of a chaotic system often requires more refined centroids compared to a quasi-periodic system.
Figure 18 shows the auto-correlation function of the CNM, high-order CNM, and dCNM. We still normalise this function by \(R(0)\), and the time window is chosen from \(t=0\) to \(t=400\), which is sufficient for comparison. For the chaotic flow regime, \(R(\tau)/R(0)\) denotes the kinetic energy level of the time window. Nonetheless, a notable discrepancy arises in the CNM, where the amplitude experiences a distinct decay after the initial few periods. It eventually stabilises with minimal variation, primarily due to the distorted reconstructed trajectory and transition errors. This limited variance is indicative of inaccuracies in capturing short-term dynamics, consistent with the absence of historical information. The high-order CNM, which incorporates this historical information, outperforms the CNM in this regard. Its amplitude decays gradually and exhibits variance akin to that of the data set. Additionally, it reveals some peaks with similar time delays, due to the potential introduction of unnecessary long-timescale periodicity into the reconstruction via the high-order cluster chain. The dCNM also surpasses the CNM with regard to accuracy. Both the amplitude and phase are faithfully retained, with a gradual amplitude decay and more pronounced variation. Eventually, the amplitude diminishes, similar to the original data set. As \(\tau\) increases, all three models exhibit some degree of phase delay or lead. This is a consequence of averaged transition times introducing some errors into the model (Li et al., 2021).
Figure 18: Auto-correlation function of the chaotic flow; here \(R\) is normalised by \(R(0)\). The thin black curve represents the CFD data, and the thick red curve represents the reconstruction from different models. (a) The CNM reconstruction with the same number of centroids as the dCNM. (b) The high-order CNM reconstruction with \(L=10\) and the same number of centroids as the dCNM. (c) The dCNM reconstruction achieved with the same parameters as in figure 15.
### Physical interpretation
One of the major advantages of the cluster-based model is its strong physical interpretability. The dCNM maintains and even enhances this property while improving the model accuracy. In this section, we discuss the physical interpretation of the CROM exemplified for the sphere wake, with particular emphasis on the dCNM.
The cluster-based model spatially coarse-grains the snapshots into groups and represents them by centroids to reduce dimensionality. In contrast to projection-based methodologies, such as the POD-Galerkin model, the cluster-based model uses cluster centroids, which are linear combinations of several snapshots and thus reflect representative flow patterns. This feature contributes to its high interpretability. The snapshot dynamics are mapped onto the pattern dynamics, followed by the construction of a probabilistic mechanism to reduce the temporal dimensionality. The network model, with centroids as nodes and centroid transitions as edges, converts the complex dynamics into pure data analysis. The centroids act as a bridge between the data-driven model and its underlying physical background. Furthermore, it is conceivable that the same model can be readily transferred to analogous pattern dynamics, even with distinct physical backgrounds, by reinterpreting the physical content of the centroids.
The sphere wake offers a concise physical interpretation based on the centroids. The coherent structure evolution governs the flow field and manifests as vortex shedding events with diverse dynamics. These shedding events can be captured well by a limit cycle, with a set of centroids representing flow patterns at different shedding time points as foundational elements. The cyclic transitions between these specific flow patterns collectively characterise the entire shedding process. The deterministic-stochastic transitions between different shedding events contribute to the overall periodic-chaotic dynamics.
To explain the physical mechanisms of the flow regimes, we propose a chord transition diagram along with centroid visualisation, which provides a comprehensive view of the flow regime. We start with the quasi-periodic flow, as shown in figure 19. Here the CNM results of \(K=10\) are shown for comparison. Moreover, we did not choose more clusters as this would decrease the interpretability due to the increased complexity of the transition relationships. The cluster probability distribution \(P_{k}\) and the cluster deviation \(\mathbf{R^{u}}\) used for visualising the blocks are shown in figure 20 (a) and (b). We set \(\beta=0.95\) for the dCNM results since lower values produce too many sub-clusters that are unfavourable to visualise. The blocks are split based on the second-stage clustering results, the cluster deviation on the trajectory segments is shown in figure 20 (c) and the sub-cluster deviation is shown in figure 20 (d). The chord diagram is capable of clearly distinguishing the dynamic behaviour categories. The circumferential arrows along the boundary represent the cyclic behaviours, with centroids transitioning to adjacent destination centroids. The radial arrows crossing the graph signify cycle-to-cycle transitions, with the centroids transitioning to non-adjacent destinations. These arrows usually originate from the transition clusters. The number of radial arrows indicates the dynamic characteristics, with more arrows indicating more chaotic features. For the CNM in figure 19 (a), the cyclic clusters \(\mathcal{C}_{1}\), \(\mathcal{C}_{2}\), \(\mathcal{C}_{3}\), \(\mathcal{C}_{6}\), \(\mathcal{C}_{7}\), \(\mathcal{C}_{8}\) and \(\mathcal{C}_{9}\) exhibit similar vortex structures, despite occurring at different time points within a shedding period. They also have a relatively higher probability distribution, suggesting dominant flow patterns in the flow field. The transition cluster \(\mathcal{C}_{4}\) and its destination cluster \(\mathcal{C}_{5}\), \(\mathcal{C}_{10}\) manifest more visible differences, reflecting cycle-to-cycle variations. From these observations, it is evident that the CNM captures the variation of shedding events mainly based on the transition clusters and that all the shedding events will share the same vortex structures at specific time points of a shedding period. Regarding the dCNM in figure 19 (b), more arrows can be found while preserving the transition rhythm from the CNM. This provides dCNM with a firm basis for inheriting the high physical interpretability of CNM. The quasi-periodic dynamics are
Figure 19: Transition diagram of the quasi-periodic flow at \(Re=330\). The cluster centroids are depicted by the vortex distribution. The vortices are identified by the iso-surfaces of \(z\)-Vorticity, with \(-1\) for the negative vortices coloured in blue and \(1\) for the positive vortices coloured in red. The transition dynamics are depicted by the directed arrows, the size of the arrows represents the transition probability and the colour is consistent with the departure block. (a) CNM result. Different blocks represent different clusters, the colour of the block represents the corresponding cluster probability distribution \(P_{k}\), and the size of the block represents the cluster deviation on the snapshots \(\mathbf{R^{u}}\). (b) dCNM result with \(\beta=0.95\). Here the blocks are split based on the second-stage clustering results. Blocks with the same colour belong to the same cluster, the colour still represents the cluster probability distribution \(P_{k}\), and the size of the block represents the sub-cluster deviation on the snapshots \(\mathbf{R^{u}_{\text{sub}}}\). The transition dynamics of dCNM are more detailed while still exhibiting the same rhythm as that of the CNM, which is intuitive for understanding.
more accurately resolved with increased circumferential (representing periodic behaviour) and radial (representing cycle-to-cycle variations) transitions. Moreover, the variations here are captured within each cluster. The centroids belonging to the same cluster are roughly at the same time point of a shedding period but exhibit different structures. Consequently, shedding pattern variations can be observed at any time during a shedding period in the model.
The chaotic flow regime exhibits a more complex transition graph, as shown in figure 21. The relative information is illustrated in figure 22. The cyclic behaviour still dominates the flow field, represented by the circumferential transition. However, there is an abundance of radial arrows with varying transition probabilities, indicating chaotic features, in contrast to the quasi-periodic flow regime. Regarding the CNM, each centroid represents a distinct flow pattern regardless of the shedding time point, even within the same cyclic cluster loop. This discrepancy indicates that the current flow patterns are inadequate for accurately capturing an entire shedding process. Regarding the dCNM, the arrows maintain the transition rhythm from the CNM but offer more specificity. The centroids within the same cluster exhibit different scales of vortex structures at the same shedding time. Direct centroid transitions further connect these centroids with vortex structures of similar scales. The diversity of the centroid transitions ensures a range of shedding scales, while simultaneously maintaining a consistent scale within the same cyclic cluster loop. Therefore, the dCNM provides a smoother cyclic behaviour and increases the number of cyclic loops, significantly enhancing the representation capacity of the model.
When comparing the CNM and dCNM, it is evident that, through the trajectory segments, the state-space clustering of the dCNM automatically introduces prior knowledge into the model. This prior knowledge includes the inner-state kinematic information within each state, as resolved by the trajectory segments, and the inter-state dynamic information, as resolved by the transitions between the trajectory segments. This incorporation aids in the automatic assignment of refined centroids within each cluster and constrains the probabilistic
Figure 20: Cluster and centroid properties of the quasi-periodic flow at _Re_ = 330. (a) Cluster probability distribution. (b) Normalised cluster deviation on the snapshots. (c) Normalised cluster deviation on the trajectory segments. (d) Normalised sub-cluster deviation on the snapshots, where the elements from the same cluster sum to unity.
transition dynamics. In essence, the dCNM can be regarded as having a built-in unsupervised physics-informing process, which results in superior model accuracy.
## 5 Conclusions
We propose an automatable data-driven reduced-order model for nonlinear dynamics. This model can resolve the quasi-periodic and chaotic dynamics of a three-dimensional sphere wake featuring multi-frequency and multiscale behaviours. The starting point is the cluster
Figure 21: Same as figure 19, but for the chaotic flow at \(Re=450\). (a) CNM. (b) dCNM with \(\beta=0.95\).
-based network model (CNM) (Fernex _et al._, 2021; Li _et al._, 2021), which is an automated framework employing clustering and network science. The dynamics within the CNM are described using a deterministic-stochastic approach on a network, where centroids act as nodes, and transitions serve as edges. However, the clustering process in the CNM relies on a uniform geometric coverage of the snapshot data, agnostic of the temporal dynamic relevance. For multi-frequency dynamics, this can result in large prediction errors. One example is the long transient to a limit cycle. Here, the slow increase in the radius requires a finer resolution than the robust angular dynamics. Hence, the CNM can be expected to be more accurate if the centroids are much denser in the radial direction than in the angular motion. This idea is incorporated in the proposed dynamics-augmented CNM (dCNM). The model can automatically stratify the state space along the trajectory direction.
The dCNM was applied to the Lorenz system (in § 3) and the three-dimensional sphere wake (in § 4), with \(K=10\) clusters for the coarse-graining of the state space. The Lorenz system features oscillatory dynamics, presented as two "ears" consisting of many unstable orbits, and stochastic dynamics, presented as random switching between the "ears". For the future state, the phase can be accurately predicted, but the amplitude requires a higher resolution. The CNM is only capable of reconstructing limited loops of the cyclic behaviours and their related transitions in the branching area. Non-physical radial jumps also occur due to transition errors. On the other hand, the dCNM coarsely resolves the deterministic phases but accurately resolves the slowly varying amplitude. The attractor oscillations are distinctly defined, and the transitions in the branching region are subsequently constrained. Regarding the quasi-periodic sphere wake, the dCNM successfully captures both the periodic behaviour and cycle-to-cycle variations. Notably, it discerns intrinsic deterministic transition behaviours, which are often misinterpreted as stochastic transitions by the CNM. For the chaotic flow dominated by unstable periodic orbits with varying scales, the dCNM accurately distinguishes between these orbits and captures their transitions. Even after sparsification, chaotic features remain preserved, with transition dynamics demonstrating
Figure 22: Same as figure 20, but for the chaotic flow at \(Re=450\). (a) Cluster probability distribution. (b) Normalised cluster deviation on the snapshots. (c) Normalised cluster deviation on the trajectory segments. (d) Normalised sub-cluster deviation on the snapshots, where the elements from the same cluster sum to unity.
stochastic characteristics. Overall, these findings underscore the notable improvement of the dCNM in capturing and accurately representing multi-frequency and multiscale dynamics.
The dCNM offers several advantages over other reduced-order modelling strategies. It not only upholds the outstanding recognition capabilities of cluster-based approaches but also showcases several noteworthy features.
(i) The prediction error is minimised. The slow evolution of amplitude oscillations, the deterministic quasi-periodic dynamics, and the stochastic chaotic dynamics can be automatically resolved without any prior knowledge.
(ii) The model complexity is significantly reduced, as the number of non-trivial transitions is reduced by design. The CNM often requires more clusters and a higher order to achieve similar accuracy, with more complex cluster transition relationships.
(iii) The physical interpretability of the model is enhanced.
We highlight the non-intrusive aspects of our approach. The dynamics are integrated into the cluster-based model simply by following the time-resolved snapshots, leading to a dynamically optimised kinematic description of the state space. Although the concept of integrating dynamics into the kinematic description is not novel, previous attempts are typically intrusive, often requiring the derivation of a propagator, such as shift modes (Noack et al., 2003) and balanced truncation (Moore, 1981; Rowley, 2005) in POD modelling, and nonlinear terms in Galerkin modelling (Luchtenburg et al., 2009). We expect that the dCNM will typically outperform the CNM and can be applied to a large spectrum of shear flows. Furthermore, the dCNM may be generalised to control-oriented ROMs. The authors are actively exploring these avenues.
**Acknowledgements**. The authors appreciate the valuable discussions with Steven Brunton, Antonio Colanera, Guy Yoslan Cornejo Maceda, Stefano Discetti, Andrea Ianiro, Francois Lusseyran, Luc R. Pastur and Xin Wang.
**Funding**. This work is supported by the National Natural Science Foundation of China under grants 12172109, 12172111, and 12202121, by the China Postdoctoral Science Foundation under grants 2023M730866 and 2023T160166, by the Guangdong Basic and Applied Basic Research Foundation under grant 2022A1515011492, and by the Shenzhen Science and Technology Program under grant JCYJ20220531095605012.
**Declaration of Interests**. The authors report no conflict of interest.
**Author ORCIDs**. C. Hou, [https://orcid.org/0000-0001-7477-4242](https://orcid.org/0000-0001-7477-4242);
N. Deng, [https://orcid.org/0000-0001-6847-2352](https://orcid.org/0000-0001-6847-2352);
B. R. Noack, [https://orcid.org/0000-0001-5935-1962](https://orcid.org/0000-0001-5935-1962)
**Author contributions**. C. Hou: Methodology, Data Curation, Validation, Writing-Original draft preparation.
N. Deng: Supervision, Methodology, Validation, Writing-Reviewing and Editing, Funding acquisition.
B. R. Noack: Conceptualisation, Supervision, Funding acquisition, Writing-Reviewing and Editing.
## Appendix A Convergence and validation studies on the simulation of the sphere wake
To determine an optimal grid size for the numerical analysis, grid convergence studies were conducted at \(Re=300\). For a set of grids with different numbers of grid cells, the values of the typical flow characteristics are compared to obtain grid-independent results, including the time-averaged drag coefficient \(\overline{C_{D}}\) and its standard deviation \(C_{D}^{\prime}\), the time-averaged lift coefficient \(\overline{C_{L}}\) and its standard deviation \(C_{L}^{\prime}\), and the Strouhal number \(St\).
The grid refinement is specifically applied to the surface of the sphere and the wake region. Across all grid configurations, the boundary layer thickness is adjusted to ensure that the \(y^{+}\) value on the sphere's surface remains below 1. This adjustment implies that the first layer of the near-wall grid has a thickness of \(0.01D\) (Pan et al., 2018) with a spacing ratio of 1.1.
The related flow characteristics of the simulations using different grids are listed in Table 2. Here, \(n_{s}\) refers to the number of nodes along the circumference of the sphere within one of the 'O'-blocks. This parameter is interconnected with the grid elements along the streamwise direction and the circumference of the sphere. On the other hand, \(n_{r}\) signifies the number of elements along the radial direction originating from the surface of the sphere. Consequently, \(n_{s}\) governs the resolution of the wake region, whereas \(n_{r}\) dictates the resolution of the sphere surface region. Comparing grids (\(a\)), (\(b\)), and (\(e\)) reveals a relatively smaller difference between grids (\(b\)) and (\(e\)), especially in terms of standard deviations. As a result, we select \(n_{s}=49\) for further analysis concerning the sphere surface region. Examining grids (\(b\)), (\(c\)), and (\(d\)) leads to similar conclusions, given that there is a more significant increase in the number of grid cells from (\(c\)) to (\(d\)) than from (\(b\)) to (\(c\)), despite limited variations in the flow characteristics. Consequently, based on these comparisons, it can be concluded that grid (\(c\)) is suitable for conducting efficient simulations with sufficient accuracy in this study.
To validate the numerical method, we compare our results with available data from related studies. Table 3 presents a comparison between the time-averaged drag coefficient \(\overline{C_{D}}\), the time-averaged lift coefficient \(\overline{C_{L}}\), and the Strouhal number \(St\) obtained in this study and those reported in other work for \(Re=300\). The results obtained from the various studies exhibit a high degree of similarity. This consistency indicates that our simulations reproduce these flow characteristics well, which is notable given that the values are small and sensitive to the numerical setup.
The convergence and validation studies presented here instil confidence that our computational grid and selected numerical schemes are adequate for the wake simulations and for testing the reduced-order modelling method.
| Cases | \(n_{s}\) | \(n_{r}\) | Grid cells | \(\overline{C_{D}}\) | \(C_{D}^{\prime}\) | \(\overline{C_{L}}\) | \(C_{L}^{\prime}\) | \(St\) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Grid (a) | 32 | 61 | 1.93 million | 0.6637 | 0.00183 | 0.0674 | 0.00960 | 0.1363 |
| Grid (b) | 49 | 61 | 4.61 million | 0.6615 | 0.00194 | 0.0666 | 0.01062 | 0.1363 |
| Grid (c) | 49 | 70 | 5.07 million | 0.6624 | 0.00185 | 0.0674 | 0.01031 | 0.1363 |
| Grid (d) | 49 | 100 | 6.58 million | 0.6623 | 0.00175 | 0.0664 | 0.01051 | 0.1363 |
| Grid (e) | 64 | 61 | 8.12 million | 0.6607 | 0.00193 | 0.0661 | 0.01084 | 0.1363 |

Table 2: Grid independence test at \(Re=300\).
| | \(\overline{C_{D}}\) | \(\overline{C_{L}}\) | \(St\) |
| --- | --- | --- | --- |
| Present study | 0.662 | 0.067 | 0.136 |
| Johnson & Patel (1999) | 0.656 | 0.069 | 0.137 |
| Kim et al. (2001) | 0.657 | 0.067 | 0.137 |
| Giacobello et al. (2009) | 0.658 | 0.067 | 0.134 |
| Rajamuni et al. (2018) | 0.665 | 0.070 | 0.137 |

Table 3: Validation of the numerical method at \(Re=300\), compared to the listed literature.
## Appendix B Optional POD before clustering
The computational burden of clustering algorithms becomes a concern when dealing with high-dimensional flow field data. Utilising a lossless proper orthogonal decomposition (POD) can effectively compress the dataset. Implementing the clustering algorithm on the compressed data rather than the high-dimensional velocity fields can significantly reduce the computational time.
Here we introduce the snapshot POD methodology for the completeness of our work. The \(M\) snapshots of the flow field can be decomposed into spatial POD modes with temporal amplitudes, where the \(m\)-th snapshot can be expressed as:
\[\mathbf{u}^{m}(\mathbf{x})\approx\mathbf{u}_{0}(\mathbf{x})+\sum_{i=1}^{M-1}a_{i}^{m}\mathbf{u}_{i }(\mathbf{x}),\] (B1)
where \(\mathbf{u}_{0}\) is the mean flow, \(a_{i}\) is the mode amplitude and \(\mathbf{u}_{i}\) is the related mode. For the three-dimensional sphere flow in this work, we retain the leading 500 POD modes for a lossless POD, which resolves more than 99.9% of the fluctuation energy in all the flow regimes.
The distance between the snapshots translates into the distance between the corresponding mode amplitudes as follows:
\[D(\mathbf{u}^{m},\mathbf{u}^{n})=D(\mathbf{a}^{m},\mathbf{a}^{n}).\] (B2)
With this transformation, the computational time can be reduced by one to two orders of magnitude; the statistical description is formulated in Fernex et al. (2021) and Li et al. (2021).
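As a concrete illustration, a lossless snapshot POD compression can be sketched with a thin SVD; running the first-stage clustering on the mode amplitudes then exploits (B2). This is a minimal sketch under the assumptions above (rows of the data matrix are snapshots; scikit-learn's KMeans stands in for the clustering step), and the names are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans

def pod_compress(A, r=500):
    """Compress M snapshots (rows of A) into r POD mode amplitudes, as in (B1)."""
    u0 = A.mean(axis=0)                        # mean flow u_0
    F = A - u0                                 # fluctuation snapshots
    U, s, Vt = np.linalg.svd(F, full_matrices=False)
    a = U[:, :r] * s[:r]                       # (M, r) amplitude vectors a^m
    return a, Vt[:r], u0                       # amplitudes, modes u_i, mean flow

# Because the modes are orthonormal, snapshot distances equal amplitude
# distances, as in (B2), so clustering can run on the compressed data:
# a, modes, u0 = pod_compress(snapshots)       # snapshots: (M, N) array
# labels = KMeans(n_clusters=10, n_init=10).fit_predict(a)
```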
## Appendix C Modelling with different \(\beta\)
We demonstrate the impact of \(\beta\) on the modelling of the sphere wake. Figure 23 displays the clustering outcomes for the quasi-periodic flow, where \(\beta\) takes values of 1, 0.95, 0.80, and 0. Figure 24 illustrates the results for the chaotic flow, with \(\beta\) values of 1, 0.95, 0.75, and 0. \(\beta=1\) means fully sparse, and the centroids are equivalent to the cluster averages, yielding results identical to those of the CNM. Conversely, when \(\beta=0\), the model is minimally sparse, resulting in the highest model accuracy. As \(\beta\) decreases, the centroids try to cover more cyclic behaviours, gradually outlining the entire structure. This expansion involves more trajectory segments and, consequently, increases the model resolution. For the quasi-periodic flow regime, there are limitations to this enhancement. Due to the finite axial length of the cylinder, the trajectory segments often overlap. Consequently, increasing the number of sub-clusters to a certain extent results in extensive centroid overlap, offering minimal contributions to the resolution improvement. This situation is evident when comparing figure 23 (c) and (d), where the centroid distributions are very similar, and where centroid overlap is prevalent. In contrast, centroid overlap is rare in chaotic flows, allowing for noticeable accuracy improvements with smaller \(\beta\) values. However, using a small \(\beta\) will result in lengthy and complex centroid transition information. Therefore, for chaotic dynamics, it is advisable to strike a balance between the model accuracy and complexity by adjusting \(\beta\) based on specific purposes.
From a temporal perspective, the assignments of each snapshot to the clusters and centroids of the chaotic flow are illustrated in figure 25. When \(\beta=1\), the centroid affiliation is disregarded and only the cluster-level transitions can be observed; this is identical to the CNM. The temporal evolution of centroids relies solely on the stochastic cluster transition probabilities, with each centroid visited multiple times, as shown in figure 25 (a). Conversely, for \(\beta=0\), most centroids are visited only once, leading to the minimum transition error,
as seen in figure 25 (d). From figure 25 (b) and (c), we can conclude that even with sparsification, varied cyclic behaviours can still be effectively captured by the dCNM. This is because different centroid combinations in the dCNM reconstruction constitute the extended cluster chains mentioned in § 4.3, and the occurrence of extended cluster chains affirms the capability of the dCNM to effectively resolve the multiscale dynamics. The generally similar visiting sequences in the extended cluster chains from the dCNM reconstruction and the data set ensure the model accuracy, and the differences highlight that the stochastic transition characteristics of chaotic dynamics are also preserved.
From a spatial perspective, we evaluated the representation error for different \(\beta\) in the two flow regimes, which is also relevant for determining an appropriate \(\beta\), as shown in figure 26. The representation error exhibits different trends for the two flow regimes as \(\beta\) increases. In the quasi-periodic flow, the representation error remains relatively constant over a wide range of \(\beta\) values and then sharply increases near \(\beta=1\). This abrupt rise suggests that sparsification eliminates the cycle-to-cycle variations. An acceptable reconstruction of this flow regime can be achieved with \(\beta<0.80\). For the chaotic flow, the representation error changes smoothly from \(\beta=0\) to \(\beta=1\), indicating a gradual loss of diversity among the main loops. For an accurate reconstruction of this flow regime, a relatively smaller value of \(\beta\) is needed.
Figure 23: Clustering results with different \(\beta\) on the quasi-periodic flow. (a) \(\beta=1\). (b) \(\beta=0.95\). (c) \(\beta=0.80\). (d) \(\beta=0\)
## Appendix D The centroid transition matrix
For the nonzero terms in the cluster transition probability matrix, we can embed a corresponding centroid transition matrix and then record all the dual-indexed centroid transitions in the transition tensor \(\mathcal{Q}\).
The centroid transition matrices of the quasi-periodic flow, as discussed in § 4.2, departing from \(\mathcal{C}_{4}\) are shown in figure 27. The matrices of the chaotic flow, as discussed in § 4.3, departing from \(\mathcal{C}_{1}\) are shown in figure 28. For the quasi-periodic flow regime, the matrices are sparse and clear, with centroids having only one destination, indicating deterministic transitions. Moreover, these transitions impose specific constraints on the quasi-stochastic dynamics. Once the departing centroid is determined, all destination centroids belong to the same destination cluster, and the nonzero terms in this column appear only in one matrix. The stochastic cluster transition can therefore become deterministic. Compared to the quasi-periodic flow, the transition probabilities in the matrices of the chaotic flow exhibit stochastic centroid transitions. Some departing centroids have destination centroids within the same cluster, while others do not. Consequently, some centroids participate solely in deterministic cluster loops, while others also engage in random jumps between cluster loops. This distinction separates the cluster transitions from periodic and stochastic routes and
Figure 24: Clustering results with different \(\beta\) on the chaotic flow. (a) \(\beta=1\). (b) \(\beta=0.95\). (c) \(\beta=0.50\). (d) \(\beta=0\)
serves as a constraint that distinguishes multiscale loops and their associated cycle-to-cycle transitions.
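A minimal sketch of how such a dual-indexed tensor \(\mathcal{Q}\) could be assembled from a time-ordered sequence of cluster and centroid labels is given below; the dictionary-of-matrices layout and the uniform number of centroids per cluster are assumptions for illustration, not the exact implementation.

```python
import numpy as np
from collections import defaultdict

def centroid_transition_tensor(clusters, centroids, n_sub):
    """Count and normalise centroid transitions Q[(j, k)][c_to, c_from].

    clusters, centroids: time-ordered label sequences of equal length.
    n_sub: number of centroids per cluster (assumed uniform here).
    """
    counts = defaultdict(lambda: np.zeros((n_sub, n_sub)))
    for t in range(len(clusters) - 1):
        k, j = clusters[t], clusters[t + 1]
        if (k, centroids[t]) == (j, centroids[t + 1]):
            continue                        # skip self-residence, keep true moves
        counts[(j, k)][centroids[t + 1], centroids[t]] += 1
    # normalise per departing centroid over all possible destinations
    totals = defaultdict(float)
    for (j, k), m in counts.items():
        for i in range(n_sub):
            totals[(k, i)] += m[:, i].sum()
    for (j, k), m in counts.items():
        for i in range(n_sub):
            if totals[(k, i)] > 0:
                m[:, i] /= totals[(k, i)]
    return dict(counts)
```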
|
2309.02676 | Efficient Training for Visual Tracking with Deformable Transformer | Recent Transformer-based visual tracking models have showcased superior
performance. Nevertheless, prior works have been resource-intensive, requiring
prolonged GPU training hours and incurring high GFLOPs during inference due to
inefficient training methods and convolution-based target heads. This intensive
resource use renders them unsuitable for real-world applications. In this
paper, we present DETRack, a streamlined end-to-end visual object tracking
framework. Our framework utilizes an efficient encoder-decoder structure where
the deformable transformer decoder acting as a target head, achieves higher
sparsity than traditional convolution heads, resulting in decreased GFLOPs. For
training, we introduce a novel one-to-many label assignment and an auxiliary
denoising technique, significantly accelerating model's convergence.
Comprehensive experiments affirm the effectiveness and efficiency of our
proposed method. For instance, DETRack achieves 72.9% AO on challenging GOT-10k
benchmarks using only 20% of the training epochs required by the baseline, and
runs with lower GFLOPs than all the transformer-based trackers. | Qingmao Wei, Guotian Zeng, Bi Zeng | 2023-09-06T03:07:43Z | http://arxiv.org/abs/2309.02676v1 | # Efficient Training for Visual Tracking with Deformable Transformer
###### Abstract
Recent Transformer-based visual tracking models have showcased superior performance. Nevertheless, prior works have been resource-intensive, requiring prolonged GPU training hours and incurring high GFLOPs during inference due to inefficient training methods and convolution-based target heads. This intensive resource use renders them unsuitable for real-world applications. In this paper, we present DETRack, a streamlined end-to-end visual object tracking framework. Our framework utilizes an efficient encoder-decoder structure where the deformable transformer decoder, acting as a target head, achieves higher sparsity than traditional convolution heads, resulting in decreased GFLOPs. For training, we introduce a novel one-to-many label assignment and an auxiliary denoising technique, significantly accelerating the model's convergence. Comprehensive experiments affirm the effectiveness and efficiency of our proposed method. For instance, DETRack achieves 72.9% AO on the challenging GOT-10k benchmark using only 20% of the training epochs required by the baseline, and runs with lower GFLOPs than all the transformer-based trackers.
## 1 Introduction
Visual object tracking remains a critical challenge in computer vision, finding applications in diverse areas from surveillance to robotics and autonomous driving. The recent adoption of the Transformer [35] within visual trackers [6, 8, 9, 14, 34, 37, 43] has introduced both innovation and complexity to this domain. Although deep learning advancements have elevated performance, training state-of-the-art (SOTA) trackers leveraging Transformers is demanding, both in terms of time and computational resources. The significant GPU hours necessary for training a competitive tracker pose challenges, particularly for researchers with limited computational resources. Furthermore, the substantial parameters and GFLOPs associated with SOTA trackers hinder their applicability in downstream tasks.
The present mainstream approaches for visual object tracking typically follow three primary stages: (i) deep neural network based feature extraction from the search and template images; (ii) an integration module using either convolution or attention mechanisms for feature matching/fusion; and (iii) a head for bounding-box localization through customized heads for corner, center or scale estimation, and target classification. In some cases, the first two stages can be combined in a unified architecture, the Transformer encoder, thereby enjoying powerful mask-image-modeling pretraining [7, 16, 39]. To accelerate the running speed, some works bring sparsity into this unified process. Specifically, some image features, also called tokens in the Transformer, can be dropped if they are considered irrelevant to the target being tracked. The sparsified feature map
Figure 1: Comparison of our DETRack with other trackers on the GOT-10k benchmark in terms of trained epochs following the official one-shot protocol [18]. The bubble size represents the relative GFLOPs. Our DETRack achieves the best trade-off among accuracy, training epochs and running GFLOPs.
must be padded (usually with zeros) before being fed to a convolutional head. As a result, convolutional computations are performed redundantly over the padded dropped tokens. On the other hand, the classification objective in prevailing head designs, such as center- and corner-based heads, requires the model to predict a unique and sharp class map, which is difficult to optimize.
To tackle the redundant computation problem, we introduce the transformer decoder as the target head, making our model an encoder-decoder framework dubbed DETRack. Being able to handle sequences of features, the decoder only deals with the tokens reserved after the sparsification process in the encoder, avoiding redundant computation on the dropped tokens. This enables our model to remain fully sparsified, as shown in Fig. 2.
Instead of applying one-to-one Hungarian matching during training like other encoder-decoder designs in object detection [5, 23], we found that a loose one-to-many label assignment significantly accelerates the training convergence. We allow all the feature pixels within the GT bounding box to predict a positive classification score, which is expected to reflect the localization quality, _e.g_., the IoU between the predicted bounding box and the GT. For localization, we pick the multiple predicted bounding boxes with the highest classification scores, thus introducing more supervision signals during training. Furthermore, we design a novel denoising branch as an auxiliary training strategy. The noised GT bounding boxes, used as region proposals, enrich the diversity of training samples, which further accelerates training.
Our extensive experiments validate the effectiveness and efficiency of our DETRack. Specifically, compared to the most training-efficient transformer-based tracker, OSTrack, our method further reduces the training epochs to 20%, while achieving higher performance with lower GFLOPs, as shown in Fig. 1 and Tab. 1. Our main contributions are summarized as follows:
* We propose a DETR-like encoder-decoder framework for visual object tracking without a convolution head, thus maintaining the computational efficiency of a sparsified backbone.
* We design a novel one-to-many label assignment for training, which significantly accelerates the training convergence.
* To avoid low-quality predictions and further accelerate training convergence, we introduce a denoising training strategy that brings in rich supervision signals.
## 2 Related Work
### Visual Tracking Paradigms
Over the past few years, Siamese trackers [10, 22, 1] have gained much popularity. Typically, they adopt a two-stream pipeline to separately extract the features of the template and search region. Cross-relations between the two streams are modeled by additional correlation modules. To exploit the power of highly discriminative features, most Siamese trackers [3, 20, 47, 48] use a pre-trained deep neural network as the backbone, _e.g_. ResNet-50 [17], and leave it frozen during training, so they only need very few epochs to train the tracker. Recently, the Transformer [36] architecture has achieved promising results in visual tracking and has become the de-facto choice for many high-performance trackers [41, 40, 25, 29, 37, 14, 8].
To further enhance the feature interaction, several attempts [44, 15, 38] have investigated cross-relation modeling inside the backbone. Recently, another thread of progress [41, 25] concatenates the template and search tokens to conduct cross-relation modeling and self-relation modeling jointly. Inspired by these explorations, more recent trackers [43, 9, 6] adopt a one-stream pipeline to jointly extract the features and model the relations of both the template and search region by the self-attention mechanism. Based on this pipeline, they can utilize advanced pretrained models, _e.g_. MAE [16], instead of randomly initialized correlation modules for cross-relation modeling, thereby yielding a remarkable performance gain.
### DEtection with TRansformer (DETR)
Carion _et al_. [5] proposed a Transformer-based end-to-end object detector named DETR (DEtection TRansformer) without using hand-designed components like anchor design and NMS. Many follow-up papers have attempted to address the slow training convergence issue of DETR introduced by decoder cross-attention. Deformable-DETR [49]
Figure 2: (a) Trackers with a convolutional head have to pad the dropped features; (b) Our DETRack with the decoder maintains full sparsity without redundant computation on the padded features.
proposed a deformable attention module to focus on important regions from multiple feature levels. DN-DETR [23] and DINO [45] develop a denoising training strategy that helps the model avoid duplicate outputs for the same target. Borrowing inspiration from DETR, STARK [41] casts target tracking as a bounding box prediction problem and solves it with an encoder-decoder transformer, in which the encoder models the global spatiotemporal feature dependencies between targets and search regions. However, it still adopts a convolutional head after the transformer for the final prediction. We also adopt the encoder-decoder framework for our tracker, inspired by DETR-like models, but entirely dispense with dense operations such as convolution. Besides, we apply a novel label assignment and denoising training strategy, significantly reducing the training GPU hours.
## 3 Methods
This section presents the DETRack method in detail. First, we briefly overview the model architecture of our DETRack framework, including the encoder and the decoder. Then, we introduce the training strategy, including label assignment and denoising training.
### Preliminary
DETR [5] is an end-to-end Transformer-based framework for object detection. In DETR, each query in the transformer is expected to be associated with one object in the image. As studied in some variants of DETR [27, 45], it becomes clear that queries in DETR are formed by two parts: a positional part and a content part, which are referred to as positional queries and content queries in this paper. DAB-DETR [27] explicitly formulates each positional query in DETR as a 4D anchor box \((x,y,w,h)\), where \(x\) and \(y\) are the center coordinates of the box and \(w\) and \(h\) correspond to its width and height. Such an explicit anchor box formulation makes it easy to dynamically refine anchor boxes layer by layer in the decoder.
DN-DETR [23] and DINO [45] introduce a denoising (DN) training method to accelerate the training convergence of DETR-like models. They show that the slow convergence problem in DETR is caused by the instability of bipartite matching. To mitigate this problem, DN-DETR proposes to additionally feed noised ground-truth (GT) labels and boxes into the Transformer decoder and train the model to reconstruct the ground-truth ones.
### Model Architecture
We use an encoder-decoder structure for learning and inference. Such a network architecture is widely used in modern visual recognition, especially in DETR-like models [23, 49]. Unlike DETR [5], which usually uses a convolutional backbone for feature extraction in object detection, we simply use a single transformer encoder as the backbone.
**Encoder.** The encoder is identical to the one used in the Vision Transformer (ViT). We start by dividing a given image pair, comprising a template and a search region, into smaller image patches. Specifically, we denote the template image patch as \(z\in\mathbb{R}^{3\times H_{z}\times W_{z}}\) and the search region patch as \(x\in\mathbb{R}^{3\times H_{x}\times W_{x}}\). These patches are then transformed into tokens, \(\mathbf{H}_{z}\in\mathbb{R}^{D}\) and \(\mathbf{H}_{x}\in\mathbb{R}^{D}\), using a linear projection layer. Then we add positional embeddings to the template and search tokens, concatenate them as \([\mathbf{H}_{z};\mathbf{H}_{x}]\) and feed them into a plain ViT encoder to encode visual features. For computational efficiency, some tokens can be dropped in the encoder if they are considered irrelevant to the target. Here we simply adopt candidate elimination from OSTrack [43] for the sparsification process.
**Query Selection.** Tokens from the encoder directly serve as object queries in the decoder. Without a process for dropping some tokens in the encoder, this would bring unacceptable computational and memory costs for the self-attention modules in the decoder. To avoid this problem, we only select \(K\) (we set \(K=64\) in our implementation) tokens for the decoder, no matter how many tokens are reserved after the encoder. Specifically, each token output by the encoder directly predicts a bounding box and a foreground score through a linear layer. The tokens with the top-\(K\) scores are selected as content queries, and their bounding boxes are picked as region proposals.
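A minimal PyTorch sketch of this selection step is given below; the layer names, dimensions, and the sigmoid-normalized \((c_x,c_y,w,h)\) box parameterization are illustrative assumptions rather than the exact implementation.

```python
import torch
import torch.nn as nn

class QuerySelector(nn.Module):
    """Pick the top-K encoder tokens as content queries and box proposals."""
    def __init__(self, enc_dim=768, dec_dim=256, k=64):
        super().__init__()
        self.k = k
        self.proj = nn.Linear(enc_dim, dec_dim)   # encoder dim -> decoder dim
        self.score_head = nn.Linear(enc_dim, 1)   # foreground score per token
        self.box_head = nn.Linear(enc_dim, 4)     # (cx, cy, w, h) in [0, 1]

    def forward(self, tokens):                    # tokens: (B, N_reserved, enc_dim)
        scores = self.score_head(tokens).squeeze(-1)            # (B, N)
        boxes = self.box_head(tokens).sigmoid()                 # (B, N, 4)
        idx = scores.topk(self.k, dim=1).indices.unsqueeze(-1)  # (B, K, 1)
        content = torch.gather(self.proj(tokens), 1,
                               idx.expand(-1, -1, self.proj.out_features))
        proposals = torch.gather(boxes, 1, idx.expand(-1, -1, 4))
        # content queries (B, K, dec_dim) and region proposals (B, K, 4)
        return content, proposals
```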
**Decoder.** Each decoder layer mainly contains self attention and cross attention. In both types of attention modules, the query elements are the object queries. The self attention is a standard multi-head self attention, where object queries interact with each other. In the cross attention modules, object queries interact with the features from the encoder, where the key elements are the output tokens from the encoder. The object queries output by the decoder are fed into an MLP for the bounding box prediction and a linear layer for the foreground score prediction. Unlike the encoder, we adopt deformable attention [49] in the decoder, which is described in detail in Sec. 3.3.
### Deformable Transformer Decoder
As shown in Fig. 4, we formulate each object query in the deformable transformer decoder as two parts: a position part and a content part. The position part is initialized from the region proposals (see Sec. 3.2) by a cosine embedding in the first layer. The content part is initialized directly with the tokens from the encoder. We simply element-wise add the two parts to obtain the object queries. After a standard multi-head attention, the core module in the decoder is deformable attention, in which the queries only attend to a small set of key sampling points around a reference point, regardless of the spatial size of the feature
maps.
**Deformable Attention.** Given an input feature map \(\mathbf{x}\in\mathbb{R}^{C\times H\times W}\), let \(q\) index a query element with content feature \(\mathbf{z}_{q}\) and a 4-d reference box \((x,y,w,h)\), denoted as \(\mathbf{p}_{q}\). The deformable attention feature is calculated by
\[\begin{split}&\text{DeformAttn}(\mathbf{z}_{q},\mathbf{p}_{q},\mathbf{x})=\\ &\sum_{m=1}^{M}\mathbf{W}_{m}\big{[}\sum_{k=1}^{K}A_{mqk}\cdot\mathbf{W}_ {m}^{\prime}\mathbf{x}(\mathbf{p}_{q}+\Delta\mathbf{p}_{mqk})\big{]},\end{split} \tag{1}\]
where \(m\) indexes the attention head, \(k\) indexes the sampled keys, and \(K\) is the total sampled key number (\(K\ll HW\)). \(\Delta\mathbf{p}_{mqk}\) and \(A_{mqk}\) denote the sampling offset and attention weight of the \(k^{\text{th}}\) sampling point in the \(m^{\text{th}}\) attention head, respectively. The scalar attention weight \(A_{mqk}\) lies in the range \([0,1]\), normalized by \(\sum_{k=1}^{K}A_{mqk}=1\). \(\Delta\mathbf{p}_{mqk}\in\mathbb{R}^{4}\) is a 4-d real vector with unconstrained range. As \(\mathbf{p}_{q}+\Delta\mathbf{p}_{mqk}\) is fractional, bilinear interpolation is applied in computing \(\mathbf{x}(\mathbf{p}_{q}+\Delta\mathbf{p}_{mqk})\). Both \(\Delta\mathbf{p}_{mqk}\) and \(A_{mqk}\) are obtained via linear projection over the query feature \(\mathbf{z}_{q}\).
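The following is a minimal single-head, single-scale PyTorch sketch of Eq. (1), using `grid_sample` for the bilinear interpolation. For simplicity it predicts 2-d offsets scaled by the reference box size rather than the 4-d offsets above, so it illustrates the mechanism rather than reproducing the exact implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DeformAttnSketch(nn.Module):
    """Simplified single-head deformable attention, cf. Eq. (1)."""
    def __init__(self, d_model=256, n_points=4):
        super().__init__()
        self.n_points = n_points
        self.offsets = nn.Linear(d_model, n_points * 2)   # (dx, dy) per point
        self.weights = nn.Linear(d_model, n_points)       # A_qk before softmax
        self.value_proj = nn.Linear(d_model, d_model)     # W'
        self.out_proj = nn.Linear(d_model, d_model)       # W

    def forward(self, z_q, ref_boxes, feat):
        # z_q: (B, Q, C) query content; ref_boxes: (B, Q, 4) cxcywh in [0, 1]
        # feat: (B, C, H, W) encoder feature map
        B, Q, C = z_q.shape
        v = self.value_proj(feat.flatten(2).transpose(1, 2))     # (B, HW, C)
        v = v.transpose(1, 2).reshape(B, C, feat.shape[2], feat.shape[3])
        off = self.offsets(z_q).view(B, Q, self.n_points, 2)
        ctr, wh = ref_boxes[..., :2], ref_boxes[..., 2:]
        pts = ctr.unsqueeze(2) + off * 0.5 * wh.unsqueeze(2)     # sampling points
        grid = 2.0 * pts - 1.0                                   # grid_sample coords
        sampled = F.grid_sample(v, grid, align_corners=False)    # (B, C, Q, P)
        A = self.weights(z_q).softmax(-1)                        # attention weights
        out = (sampled * A.unsqueeze(1)).sum(-1)                 # weighted sum (B, C, Q)
        return self.out_proj(out.transpose(1, 2))                # (B, Q, C)
```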
After each decoder layer, the object queries are fed into an MLP to predict a 4-D offset \((\Delta x,\Delta y,\Delta w,\Delta h)\) as an adjustment to the target bounding box output by the previous layer. As shown in Fig. 4, the refined box output by the current layer serves as the 4-d reference box for the next layer of the decoder.
### Label Assignment
In object detection, DETR fundamentally avoids encouraging redundant predictions during training and eliminates the need for additional modules to remove such duplicates during inference. DETR's one-to-one Hungarian matching promotes independent predictions during training. However, in single-object tracking, a given input image contains at most one target to be tracked. Thus, it is sufficient to simply select the prediction with the highest confidence from the final output.
Certain studies in object detection have indicated that encouraging repetitive predictions can significantly accelerate the convergence of training. We treat all tokens within the Ground Truth (GT) bounding box as potential positive samples. However, simply assigning a hard label to the positive samples makes it difficult to select a high-quality prediction. To differentiate the positive samples, we introduce the localization quality (i.e., the IoU score) into the classification score, inspired by QFL [24], whose supervision softens the standard one-hot category label and leads to a possibly float target \(y\in[0,1]\). Specifically, \(y=0\) denotes the negative samples with \(0\) quality score, and \(0<y\leq 1\) stands for the positive samples with target IoU score \(y\). Therefore, the
Figure 4: Illustration of one layer in the Deformable Transformer Decoder. In each layer, the query tokens are element-wise added to the position embeddings generated from the box proposals to form the object queries for the attention module. At the end of the layer, the predicted offset plus the box proposal, as the refined box, is fed into the next layer.
Figure 3: (a) Illustration of the architecture of our proposed DETRack. The denoising part in the decoder is only added during training. (b) The label assignment for the denoising training. The feature at the very center of the GT bounding box is picked as the positive object query in one denoising group, while the corner ones are selected as negative queries.
loss function for the classification is:
\[\mathcal{L}_{\mathrm{cls}}(\sigma)=-|y-\sigma|^{\beta}((1-y)\log(1-\sigma)+y\log (\sigma)), \tag{2}\]
where \(\sigma=y\) is the global minimum solution, standing for an accurate quality estimation. The parameter \(\beta\) controls the down-weighting rate \(|y-\sigma|^{\beta}\) smoothly. In experiments, we set \(\beta=2\) following [24].
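A minimal PyTorch sketch of Eq. (2) with the one-to-many soft targets is shown below; the construction of the targets \(y\) (IoU for tokens inside the GT box, 0 otherwise) is written as an illustrative comment, and the helper names are assumptions.

```python
import torch

def quality_focal_loss(sigma, y, beta=2.0):
    """Eq. (2): quality focal loss with soft targets y in [0, 1]."""
    sigma = sigma.clamp(1e-6, 1 - 1e-6)        # numerical safety for the logs
    bce = -((1 - y) * torch.log(1 - sigma) + y * torch.log(sigma))
    return (torch.abs(y - sigma) ** beta * bce).sum()

# One-to-many soft targets (illustrative): tokens whose location falls inside
# the GT box take y = IoU(predicted box, GT box); all other tokens take y = 0.
# y = torch.where(inside_gt, iou(pred_boxes, gt_box), torch.zeros_like(scores))
```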
For the box regression, to prevent supervision of low-quality tokens (e.g., tokens at the very edges or corners within the GT box), we retain only the top-k tokens based on the predicted classification scores. We then compute the localization loss exclusively on the boxes output by these selected tokens. We combine the \(l_{1}\) loss and the generalized IoU loss [33] as the training objective for localization. The loss function can be formulated as:
\[\mathcal{L}_{\mathrm{loc}}(b)=\lambda_{\mathrm{G}}\mathcal{L}_{\mathrm{GIoU}} (b_{i},\hat{b}_{i})+\lambda_{l_{1}}\mathcal{L}_{l_{1}}(b_{i},\hat{b}_{i}), \tag{3}\]
where \(b_{i}\) represents the ground truth, and \(\hat{b}_{i}\) represents the predicted box. In experiments, we set the weights \(\lambda_{\mathrm{G}}=2\) and \(\lambda_{l_{1}}=5\).
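A self-contained sketch of Eq. (3) follows, with GIoU implemented directly for axis-aligned \((x_1,y_1,x_2,y_2)\) boxes; the weights match the values stated above, while the reduction (mean over the selected tokens) is an assumption.

```python
import torch

def giou(b1, b2):
    """Generalized IoU for (x1, y1, x2, y2) boxes of matching shape."""
    lt = torch.max(b1[..., :2], b2[..., :2])
    rb = torch.min(b1[..., 2:], b2[..., 2:])
    inter = (rb - lt).clamp(min=0).prod(-1)
    area1 = (b1[..., 2:] - b1[..., :2]).prod(-1)
    area2 = (b2[..., 2:] - b2[..., :2]).prod(-1)
    union = area1 + area2 - inter
    iou = inter / union.clamp(min=1e-6)
    elt = torch.min(b1[..., :2], b2[..., :2])      # enclosing box corners
    erb = torch.max(b1[..., 2:], b2[..., 2:])
    enc = (erb - elt).prod(-1)
    return iou - (enc - union) / enc.clamp(min=1e-6)

def loc_loss(pred, gt, lam_g=2.0, lam_l1=5.0):     # Eq. (3)
    return (lam_g * (1 - giou(pred, gt)) + lam_l1 * (pred - gt).abs().sum(-1)).mean()
```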
### DeNoising Training
For each search image, we generate extra queries after the query selection by adding noise to the GT. The noised queries enrich the supervision signal for the decoder during the training process. In implementation, the noised queries are also formulated as two parts: position (box) and content (class).
**Box Denoising.** We consider adding noise to boxes in two ways, as in [23]: center shifting and box scaling. We define \(\lambda_{1}\) and \(\lambda_{2}\) as the noise scales of these two perturbations. 1) **center shifting**: we add a random noise \((\Delta x,\Delta y)\) to the box center and make sure that \(|\Delta x|<\frac{\lambda_{1}w}{2}\), \(|\Delta y|<\frac{\lambda_{1}h}{2}\), where \(\lambda_{1}\in(0,1)\), so that the center of the noised box still lies inside the original bounding box. 2) **box scaling**: we set a hyper-parameter \(\lambda_{2}\in(0,1)\). The width and height of the box are randomly rescaled within a range controlled by \(\lambda_{2}\). The noise scales for negative queries are set larger than those for the positive ones.
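A minimal sketch of both perturbations is shown below; the uniform sampling distributions and the example \(\lambda\) values are assumptions consistent with the constraints above.

```python
import torch

def noise_box(gt, lam1, lam2):
    """One noised copy of GT boxes (cx, cy, w, h) in normalized coordinates."""
    cxcy, wh = gt[..., :2], gt[..., 2:]
    # center shifting: |dx| < lam1*w/2, |dy| < lam1*h/2 keeps the new
    # center inside the original box
    shift = (torch.rand_like(cxcy) * 2 - 1) * 0.5 * lam1 * wh
    # box scaling: width/height rescaled within a lam2-controlled range
    scale = 1 + (torch.rand_like(wh) * 2 - 1) * lam2
    return torch.cat([cxcy + shift, wh * scale], dim=-1).clamp(0, 1)

# larger noise scales for negative queries than for positives (example values)
# pos = noise_box(gt, lam1=0.1, lam2=0.2); neg = noise_box(gt, lam1=0.4, lam2=0.8)
```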
For class denoising, the token positioned at the very center of the GT bounding box is chosen as the content part of the positive query, as illustrated in Fig. 3(b). To sidestep low-quality predictions, we select the corner tokens for the negative queries. We adopt a soft label assignment to stay aligned with the regular branch (see Tab. 6). Specifically, the labels for positive queries are determined by the IoU between the predicted refined bounding box and the GT bounding box. In contrast, labels for negative queries are set to zero.
**Attention Mask.** Without an attention mask preventing the GT information from leaking from the denoising queries to the un-denoised part, denoising training is proven to compromise performance instead of improving it [23]. We divide the denoising queries into multiple groups, each of which contains a positive and a negative query. Queries in different groups and different parts cannot interact with each other. The division is performed by adding a mask to the self-attention module in the decoder, as in [23].
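The mask can be sketched as a boolean matrix following the DN-DETR convention (True marks a blocked position, matching PyTorch's `attn_mask`); the query layout is an assumption for illustration. Here each group holds one positive and one negative query, so `group_size = 2` matches the grouping described above.

```python
import torch

def dn_attn_mask(n_groups, group_size, n_match):
    """Self-attention mask for [group_0 | ... | group_{G-1} | matching queries]."""
    n_dn = n_groups * group_size
    n = n_dn + n_match
    mask = torch.zeros(n, n, dtype=torch.bool)
    for g in range(n_groups):
        s, e = g * group_size, (g + 1) * group_size
        mask[s:e, :s] = True        # a group cannot see earlier denoising groups
        mask[s:e, e:n_dn] = True    # ... nor later denoising groups
    mask[n_dn:, :n_dn] = True       # matching part never sees the noised GT
    return mask
```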
### Training Objective
The training objectives for the denoising branch and the regular branch are similar. Specifically, we apply QFL in both the regular branch and the denoising branch for classification. For localization in the denoising branch, we compute the loss only on the positive queries. As in DETR [5], we add auxiliary losses after each decoder layer and the query selection module. Considering the loss of the denoising branch after each decoder layer, our final loss can be written in a generalized format:
\[\mathcal{L}=\sum_{l=0}^{L_{\mathrm{dec}}}[\lambda_{\mathrm{cls}}(\mathcal{L}_{\mathrm{cls}}+\mathcal{L}_{\mathrm{cls}}^{\mathrm{DN}})+\lambda_{\mathrm{loc}}(\mathcal{L}_{\mathrm{loc}}+\mathcal{L}_{\mathrm{loc}}^{\mathrm{DN}})], \tag{4}\]
where \(l=0\) denotes the output at the query selection stage (before the decoder) and \(l=L_{\mathrm{dec}}\) represents the final output of the decoder. We simply set the parameters \(\lambda_{\mathrm{cls}}\) and \(\lambda_{\mathrm{loc}}\) to 1 following [45].
## 4 Experiments
### Implementation Details
Our trackers are implemented using Python 3.8 and PyTorch 1.12. The models are trained on 2 RTX 3090 GPUs. We test the model on a single RTX 2080Ti GPU.
**Model.** We adopt the vanilla ViT-Base [12] model and initialize it with CAE [7] weights pre-trained on ImageNet [11] as the encoder of our DETRack. The weights of the decoder are randomly initialized. The sizes of the template and search images are 128\(\times\)128 pixels and 256\(\times\)256 pixels, respectively. All the input images are split into 16\(\times\)16 patches. The hidden dimensions in the encoder and the decoder are 768 and 256, respectively. The dimension of the tokens output by the encoder is transformed by a linear layer. The bounding box and its offset in each decoder layer are obtained by three-layer perceptrons with shared parameters. The classification score is predicted by a simple linear layer.
**Training.** For the GOT-10k [18] benchmark, we only use the training split of GOT-10k following the one-shot protocol and train the model for 20 epochs. For the other benchmarks, the training splits of GOT-10k, COCO [26], LaSOT [13] and TrackingNet [32] are used for training over 60 epochs. For video datasets, we sample an image pair from a random video sequence. For the image dataset COCO, we randomly select an image and apply data augmentations to generate an image pair. Common data augmentations such as scaling, translation, and jittering are applied to the image pair. The search region and the template are obtained by expanding the target box by a factor of 4 and 2, respectively. We use the AdamW optimizer [28],
with a weight decay of 1e-4. The initial learning rates of the encoder and the decoder are 4e-5 and 4e-4, respectively. We reduce the learning rate to 10% for the last 20% of the epochs. Each GPU holds 64 image pairs, resulting in a total batch size of 128. Training takes \(<\)**2** hours for GOT-10k and \(<\)**6** hours for the other benchmarks on two RTX 3090 GPUs. Note that the training time of the previously most efficient transformer-based tracker, OSTrack, is about 8 hours, which is 4\(\times\) that of our method.
**Testing.** During testing, we adopt a Hanning window penalty to utilize positional priors such as scale change and motion smoothness in tracking, following common practice [2, 25, 43]. The scores output by the decoder for each object query are reflected back onto the 2D map and simply element-wise multiplied by a Hanning window of the same size, and we choose the box with the highest multiplied score as the target box.
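The penalty step can be sketched as follows; the score-map size and the outer-product construction of the 2-D Hanning window are assumptions consistent with common practice rather than the exact implementation.

```python
import torch

def apply_hanning_penalty(scores, coords, map_size=16):
    """Multiply each query's score by a Hanning window value at its 2-D location.

    scores: (K,) classification scores from the decoder
    coords: (K, 2) predicted box centers in [0, 1], ordered (x, y)
    """
    w1d = torch.hann_window(map_size, periodic=False)
    win = torch.outer(w1d, w1d)                       # 2-D Hanning window (H, W)
    ij = (coords.clamp(0, 1) * (map_size - 1)).round().long()
    penalized = scores * win[ij[:, 1], ij[:, 0]]      # index as (row=y, col=x)
    return penalized.argmax()                         # index of the chosen box
```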
### Comparison with State-of-the-art Trackers
We compare our DETRack with state-of-the-art (SOTA) trackers on 3 different large-scale benchmarks and 2 small benchmarks, including GOT-10k, TrackingNet, LaSOT, UAV123 and NFS. For a fair comparison, the numbers of training epochs of the different trackers in Tab. 1 and Tab. 2 are counted under the same settings as in [43], _e.g_., \(6\times 10^{4}\) image pairs sampled in each epoch.
**GOT-10k.** GOT-10k [18] is a large-scale dataset containing more than 10000 video segments of real-world moving objects. The object classes between the train and test sets have zero overlap. We strictly follow the one-shot protocol, only training our model on the GOT-10k training split and evaluating the results through the evaluation server. As presented in Tab. 1, DETRack improves all metrics by a large margin, _e.g_., 1.9% in Average Overlap (AO) compared with OSTrack, which indicates its capability for accurate discrimination and localization of unseen objects. Notice that our tracker is trained with only 20 epochs, which is 20% of that of the most training-efficient transformer-based tracker, OSTrack.
| Tracker | Source | GOT-10k* AO | GOT-10k* SR\(_{0.5}\) | GOT-10k* SR\(_{0.75}\) | #Epochs** | TrackingNet AUC | TrackingNet P\(_{\mathrm{norm}}\) | TrackingNet P | LaSOT AUC | LaSOT P\(_{\mathrm{norm}}\) | LaSOT P | #Epochs** | GFLOPs |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| DETRack | Ours | 72.9 | 82.1 | 69.9 | **20** | 83.2 | 88.3 | 83.1 | 69.0 | 78.9 | 75.1 | **60** | **15.6** |
| OSTrack [43] | ECCV’22 | 71.0 | 80.4 | 68.2 | 100 | 83.1 | 87.8 | 82.0 | 69.1 | 78.7 | 75.2 | 300 | 21.5 |
| OSTrack\(\dagger\) [43] | ECCV’22 | 71.5 | 80.7 | 68.9 | 100 | 83.0 | 87.7 | 82.0 | 68.5 | 78.1 | 74.9 | 300 | 16.4 |
| AiATrack [14] | ECCV’22 | 69.6 | 80.0 | 63.2 | 100 | 82.7 | 87.8 | 80.4 | 69.0 | 79.4 | 73.8 | 300 | 18.5 |
| SimTrack [6] | ECCV’22 | 68.6 | 78.9 | 62.4 | 500 | 82.3 | 86.5 | - | 69.3 | 78.5 | 74.0 | 500 | 20.1 |
| Unicorn [40] | ECCV’22 | - | - | - | - | 83.0 | 86.4 | 82.2 | 68.5 | 76.6 | 74.1 | - | 33.5 |
| MixFormer [9] | CVPR’22 | 70.7 | 80.0 | 67.8 | 180 | 83.1 | 88.1 | 81.6 | 69.2 | 78.7 | 74.7 | 550 | 23.0 |
| ToMP [29] | CVPR’22 | - | - | - | - | 81.2 | 86.2 | 78.6 | 67.6 | 78.0 | 72.2 | 200 | - |
| CSWinTT [34] | CVPR’22 | 69.4 | 78.9 | 65.4 | 600 | 81.9 | 86.7 | 79.5 | 66.2 | 75.2 | 70.9 | 500 | 19.3 |
| STARK [41] | ICCV’21 | 68.0 | 77.7 | 62.3 | 500 | 81.3 | 86.1 | 78.1 | 66.4 | 76.3 | 71.2 | 500 | 18.5 |
| KeepTrack [30] | ICCV’21 | - | - | - | - | - | - | - | 67.1 | 77.2 | 70.2 | - | - |
| AutoMatch [46] | ICCV’21 | 65.2 | 76.6 | 54.3 | - | 76.0 | - | 72.6 | 58.3 | - | 59.9 | - | - |
| TransT [8] | CVPR’21 | 67.1 | 76.8 | 60.9 | - | 81.4 | 86.7 | 80.3 | 64.9 | 73.8 | 69.0 | - | - |
| Alpha-Refine [42] | CVPR’21 | - | - | - | - | 80.5 | 85.6 | 78.3 | 65.3 | 73.2 | 68.0 | - | - |
| TMT [37] | CVPR’21 | 67.1 | 77.7 | 58.3 | - | 78.4 | 83.3 | 73.1 | 63.9 | - | 61.4 | - | - |
| Ocean [48] | ECCV’20 | 61.1 | 72.1 | 47.3 | 50 | - | - | - | 56.0 | 65.1 | 56.6 | 50 | - |
| DiMP [4] | ICCV’19 | 61.1 | 71.7 | 49.2 | - | 74.0 | 80.1 | 68.7 | 56.9 | 65.0 | 56.7 | - | - |
| ATOM [10] | CVPR’19 | - | - | - | - | 70.3 | 77.1 | 64.8 | 51.5 | 57.6 | 50.5 | - | - |
| SiamRPN++ [21] | CVPR’19 | 51.7 | 61.6 | 32.5 | - | 73.3 | 80.0 | 69.4 | 49.6 | 56.9 | 49.1 | - | - |

Table 1: State-of-the-art comparison on GOT-10k, TrackingNet and LaSOT. The best three results are shown in red, blue and green fonts, respectively. We use * to denote that the results on GOT-10k are obtained following the official one-shot protocol. ** denotes that we calculate the effective number of epochs for \(6\times 10^{4}\) image-pairs sampled per epoch if the training details are provided. \(\dagger\) indicates the tracker for which we replace the backbone with the same CAE [7] pretrained ViT-Base model as our DETRack.
| Sparsity | Head | Params (M) | FLOPs (G) | AO (%) |
| --- | --- | --- | --- | --- |
| - | conv | 70.9 | 22.3 | 71.5 |
| - | decoder | **68.9** (\(\downarrow\) 2.0) | **21.4** (\(\downarrow\) 0.9) | 72.9 |
| CE | conv | 70.9 | 16.4 | 71.7 |
| CE | decoder | **68.9** (\(\downarrow\) 2.0) | **15.6** (\(\downarrow\) 0.8) | 72.9 |

Table 3: Comparison of params, FLOPs, and performance (Average Overlap) on GOT-10k. The sparsification method CE refers to Candidate Elimination [43].
**TrackingNet.** TrackingNet [32] is a large-scale short-term tracking benchmark that provides more than 30000 video sequences with over 14 million boxes. The test split of TrackingNet contains 511 sequences without publicly available ground truth and covers diverse target classes and scenes. We submit the tracking results to the official evaluation server and make comparisons with previous SOTA trackers in Tab. 1. The results show that our DETRack achieves an 83.2% success score (AUC) and an 83.1% precision score, overtaking all previously published trackers with the same backbone. Notably, training our DETRack is very cheap: only 60 epochs, which is only 10% to 20% of that of the previous SOTA trackers.
**LaSOT.** LaSOT [13] is a densely annotated large-scale dataset that contains 280 long-term video sequences for public evaluation. We evaluate our DETRack on the test set to compare with previous SOTA trackers. From Tab. 1, we find that our method achieves comparable results, surpassing OSTrack with the same backbone in all three metrics. Specifically, DETRack achieves a 69.0% AUC score with merely 60 training epochs, consuming far fewer GPU hours than the other SOTA methods and demonstrating the efficiency of our approach.
**NFS and UAV123.** NFS [19] comprises 100 video sequences featuring fast-moving objects, and is often used to test the robustness of tracking algorithms. UAV123 [31], with its 123 video sequences captured from a low-altitude unmanned aerial vehicle, poses challenges for long-term tracking due to its average sequence length of 915 frames. As reported in Tab. 2, our method consistently outperforms the baseline on both datasets. Moreover, our approach requires significantly fewer training epochs, leading to reduced GPU training time.
**Params, GFLOPs and Speed.** We provide the GFLOPs of state-of-the-art trackers in Tab. 1. For more details about params, GFLOPs and speed, we compare our DETRack with the baseline OSTrack [43] in Tab. 3 on a Nvidia RTX 2080Ti GPU. We re-implemented OSTrack by replacing the MAE [16] backbone with CAE [7] for a fair comparison. Whether with or without a sparsification technique like candidate elimination (CE) [43], our DETRack has a lower number of parameters and FLOPs than the tracker using the same backbone but with a convolutional head. Still, engineering implementation details make our tracker run a bit slower.
## 5 Ablation Study and Analysis
We analyze the main properties of the DETRack framework. For the following experimental studies, we follow the GOT-10k test protocol unless otherwise noted.
### Analysis on the Number of Decoder Layers.
We investigate the influence of varying the number of decoder layers. As shown in Tab. 4, a decoder with 3 layers strikes the best trade-off between efficiency and accuracy. Decreasing the number of decoder layers hurts performance significantly, _e.g_., a two-layer decoder loses 1.5% AO relative to a three-layer decoder. Adding layers, _e.g_., a four-layer decoder, brings no performance gain but introduces redundant computation in terms of GFLOPs.
Benefiting from applying an auxiliary loss on each decoder layer and sharing the prediction head among them, we can flexibly configure how many decoder layers are active in training versus testing. For instance, we can take the intermediate prediction from an early layer of the decoder as the final prediction, leaving the later layers unexecuted. Using only the first two layers of a three-layer decoder still achieves an acceptable 72.1% AO, as shown in Tab. 4. Furthermore, we can use only the first layer at test time regardless of how many layers were trained. All one-layer decoder configurations shown in Tab. 4 surpass the convolutional baseline OSTrack (shown in Tab. 1). The results demonstrate that our tracker can trade off between efficiency and accuracy flexibly, even after training is complete.
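A minimal PyTorch sketch of this train-deep, test-shallow configuration is given below; the `DecoderLayer` interface, the `layer(queries, memory)` call signature, and all module names are our illustrative assumptions, not the released implementation.

```python
import copy

import torch.nn as nn


class EarlyExitDecoder(nn.Module):
    """Decoder stack with a prediction head shared across layers.

    Supervising every layer with an auxiliary loss during training
    allows inference to stop after the first `n_exit` layers.
    """

    def __init__(self, layer: nn.Module, num_layers: int, head: nn.Module):
        super().__init__()
        self.layers = nn.ModuleList(
            [copy.deepcopy(layer) for _ in range(num_layers)])
        self.head = head  # shared by all layers

    def forward(self, queries, memory, n_exit=None):
        n_exit = n_exit or len(self.layers)
        preds = []
        for layer in self.layers[:n_exit]:
            queries = layer(queries, memory)
            preds.append(self.head(queries))  # per-layer auxiliary output
        return preds  # training: loss on all entries; testing: use preds[-1]
```

During training the loss is summed over all entries of `preds`; at test time, a smaller `n_exit` simply skips the later layers, matching the flexible configurations reported in Tab. 4.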
### Denoising Strategy
We also investigate the strategy for denoising training. The original denoising design for object detection (OD) adopts an embedding layer to encode the positive and negative labels [23]. The encoded classification label is static with respect to the input image because the classes are usually fixed in OD. However, in single object tracking (SOT), the class of the target is generic and dynamic. As shown in Tab. 5, a fixed embedding for the classification label does little to improve performance over the baseline trained without denoising, _e.g_., 71.4% AO vs. 71.2% AO
\begin{table}
\begin{tabular}{c|c c c|c} \hline \hline & \multicolumn{3}{c|}{AO(\%)} & \\ \hline \(\text{L}_{\text{test}}\backslash\text{L}_{\text{train}}\) & 2 & 3 & 4 & GFLOPs \\ \hline
4 & - & - & 72.5 & 15.9 \\
3 & - & **72.9** & 72.3 & 15.6 \\
2 & 71.4 & 72.1 & 71.7 & 15.3 \\
1 & 71.2 & 71.4 & 71.1 & 15.1 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Results of the ablation study on the number of decoder layers. \(\text{L}_{\text{train}}\) denotes the number of decoder layers used in training. \(\text{L}_{\text{test}}\) denotes the number of leading layers used during inference. Each AO(%) entry is the accuracy obtained by using only the first \(\text{L}_{\text{test}}\) layers of the decoder. The experiments are reported on the GOT-10k test split following the one-shot protocol with 20 training epochs.
in setting 1 and the _baseline_ of Tab. 5. The slight performance increase comes from the localization part, _i.e_., the bounding box denoising. As shown in setting 2 of Tab. 5, our proposed denoising strategy, in which the center feature is picked as the positive query and the corner features are selected as negatives, improves the accuracy to 72.9% AO, leading the baseline by 1.7 points. The results also demonstrate that dynamically selected features are more suitable as denoising queries for SOT than fixed, static label embeddings. Compared with setting 3, which randomly picks feature pixels outside the GT bounding box as negative queries, the corner features act as hard negative samples closer to the GT box and are more helpful, _e.g_., 72.9% vs. 72.4% AO.
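A short sketch of this query selection follows, assuming a flattened search-region feature map and a ground-truth box already mapped to integer feature-grid coordinates; this reflects our reading of the strategy rather than the exact released code.

```python
import torch


def select_denoising_queries(feat, box, W):
    """Pick the center feature as the positive denoising query and the
    four corner features of the GT box as hard negative queries.

    feat: (H*W, C) flattened search-region features.
    box:  (x0, y0, x1, y1) GT box in feature-grid coordinates.
    W:    width of the feature grid.
    """
    x0, y0, x1, y1 = box
    cx, cy = (x0 + x1) // 2, (y0 + y1) // 2
    positive = feat[cy * W + cx]  # dynamically selected positive query
    corners = [(x0, y0), (x1, y0), (x0, y1), (x1, y1)]
    negatives = torch.stack([feat[y * W + x] for x, y in corners])
    return positive, negatives
```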
### Label Assignment
Incorporating the decoder lets us choose the classification training objective in the same way as convolution-based heads, such as centerness. From Tab. 6, it is clear that one-to-one assignments, represented by centerness (setting 1) and bipartite matching (setting 2), require a large number of training epochs to be effective, _e.g_., going from 55.8% AO at 20 epochs to 71.2% AO at 100 epochs in setting 1 on GOT-10k. In contrast, one-to-many assignments (settings 3 and 4) notably decrease the epochs required to reach peak performance. Using setting 3, which allocates a rigid label to each positive sample, results in sub-optimal performance. However, emphasizing localization quality, represented by IoU (setting 4), considerably improves both convergence rate and performance: the accuracy obtained early in training, 72.9% AO at 20 epochs on GOT-10k and 69.0% AUC at 60 epochs on LaSOT, is higher than that obtained with more epochs.
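The quality-aware one-to-many assignment of setting 4 can be sketched in a few lines, assuming one predicted box per feature location; the function below is illustrative, not the authors' exact code.

```python
import torch
from torchvision.ops import box_iou


def soft_cls_targets(pred_boxes, gt_box, inside_mask):
    """One-to-many assignment with localization-quality labels.

    Every location inside the GT box is a positive, and its
    classification target is its IoU with the GT box rather than a
    rigid 0/1 label.

    pred_boxes:  (N, 4) predicted box per feature location.
    gt_box:      (4,) ground-truth box.
    inside_mask: (N,) bool, True for locations inside the GT box.
    """
    ious = box_iou(pred_boxes, gt_box.unsqueeze(0)).squeeze(1)  # (N,)
    targets = torch.zeros_like(ious)
    targets[inside_mask] = ious[inside_mask]  # soft, IoU-weighted labels
    return targets
```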
### Convergence
We provide the training IoU vs. epoch curve in Fig. 5. All results are reported on 2 RTX 3090 GPUs with ViT-B initialized from CAE [7]. As shown, our proposed DETRack reaches in 20 epochs a training IoU similar to what OSTrack reaches in 100 epochs. The combined results from Tab. 6 and Fig. 5 also demonstrate an interesting fact: more accurate localization on template-search image pairs during training does not necessarily translate to higher overall tracking performance over a video sequence at test time.
## 6 Conclusion
In this research, we presented DETRack, an encoder-decoder framework that leverages a deformable transformer decoder to supersede the conventional convolutional head, enabling more pronounced sparsity and a consequent reduction in GFLOPs. Through our one-to-many label assignment and denoising technique during training, DETRack improves tracking accuracy with significantly fewer training epochs. The marked reduction in GPU hours for training is especially beneficial for researchers with limited computational resources, potentially democratizing access to high-quality visual object tracking.
**Limitation.** Although our work achieves comparable per-frame classification and localization accuracy with low GPU resource consumption, the overall performance on some long-term sequence tracking benchmarks is limited. Besides, even though our tracker runs with lower FLOPs and fewer parameters than convolution-based trackers of the same or weaker performance, its actual running speed, _e.g_., FPS, is slightly lower due to engineering implementation details.
\begin{table}
\begin{tabular}{l|c c c|c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{3}{c|}{GOT-10k} & \multicolumn{3}{c}{LaSOT} \\ \cline{2-7} & AO\({}_{20}\) & AO\({}_{50}\) & AO\({}_{100}\) & AUC\({}_{60}\) & AUC\({}_{100}\) & AUC\({}_{300}\) \\ \hline \hline (1) centerness & 55.8 & 65.4 & 71.2 & 43.0 & 52.3 & 68.8 \\ (2) bipartite matching & 51.0 & 59.1 & 65.2 & 35.6 & 47.9 & 66.4 \\ \hline (3) rigid label & 65.7 & 65.4 & 66.3 & 62.1 & 63.4 & 62.9 \\ (4) loc. quality & 72.9 & 72.9 & 72.4 & 69.0 & 68.6 & 68.9 \\ \hline \hline \end{tabular}
\end{table}
Table 6: Comparison of different label assignments (for the non-denoising part). AO\({}_{k}\) and AUC\({}_{k}\) denote the performance obtained after training for \(k\) epochs on the corresponding benchmark.
\begin{table}
\begin{tabular}{c|c|c|c c c} \hline \hline \multirow{2}{*}{\#} & \multirow{2}{*}{Positive} & \multirow{2}{*}{Negative} & \multicolumn{3}{c}{GOT-10k} \\ \cline{4-6} & & & AO & SR\({}_{0.5}\) & SR\({}_{0.75}\) \\ \hline _baseline_ & - & - & 71.2 & 80.1 & 68.3 \\ 1 & embedding & embedding & 71.3 & 80.3 & 68.2 \\ 2 & center & corner & **72.9** & **82.1** & **69.9** \\ 3 & center & outside & 72.4 & 81.9 & **69.9** \\ \hline \hline \end{tabular}
\end{table}
Table 5: Comparison of different label assignments for the denoising part during training. _Baseline_ indicates the tracker trained without any auxiliary denoising technique. The **Positive** and **Negative** columns indicate the source features for the positive and negative queries when denoising training is applied.
Figure 4: The label assignment for denoising training. The most central feature within the GT bounding box is picked as the positive object query in one denoising group, while the corner features are selected as negative queries.
2304.13787 | Surrogate Assisted Generation of Human-Robot Interaction Scenarios | As human-robot interaction (HRI) systems advance, so does the difficulty of
evaluating and understanding the strengths and limitations of these systems in
different environments and with different users. To this end, previous methods
have algorithmically generated diverse scenarios that reveal system failures in
a shared control teleoperation task. However, these methods require directly
evaluating generated scenarios by simulating robot policies and human actions.
The computational cost of these evaluations limits their applicability in more
complex domains. Thus, we propose augmenting scenario generation systems with
surrogate models that predict both human and robot behaviors. In the shared
control teleoperation domain and a more complex shared workspace collaboration
task, we show that surrogate assisted scenario generation efficiently
synthesizes diverse datasets of challenging scenarios. We demonstrate that
these failures are reproducible in real-world interactions. | Varun Bhatt, Heramb Nemlekar, Matthew C. Fontaine, Bryon Tjanaka, Hejia Zhang, Ya-Chuan Hsu, Stefanos Nikolaidis | 2023-04-26T19:11:56Z | http://arxiv.org/abs/2304.13787v4 | # Surrogate Assisted Generation of Human-Robot Interaction Scenarios
###### Abstract
As human-robot interaction (HRI) systems advance, so does the difficulty of evaluating and understanding the strengths and limitations of these systems in different environments and with different users. To this end, previous methods have algorithmically generated diverse scenarios that reveal system failures in a shared control teleoperation task. However, these methods require directly evaluating generated scenarios by simulating robot policies and human actions. The computational cost of these evaluations limits their applicability in more complex domains. Thus, we propose augmenting scenario generation systems with surrogate models that predict both human and robot behaviors. In the shared control teleoperation domain and a more complex shared workspace collaboration task, we show that surrogate assisted scenario generation efficiently synthesizes diverse datasets of challenging scenarios. We demonstrate that these failures are reproducible in real-world interactions.
## I Introduction
As the complexity of robotic systems that interact with people increases, it becomes impossible for designers and end-users to anticipate how a robot will act in different environments and with different users. For instance, consider a robotic arm collaborating with a user on a package labeling task, where the arm infers the user's intended goal object and moves simultaneously towards a different object to avoid collision (Fig. 1). The robot's motion depends on which object the user selects to label, how the user moves towards that object, and how all objects are arranged in the environment. Thus, evaluating the system requires testing it with a diverse range of user behaviors and object arrangements.
While user studies are essential for understanding how users will interact with a robot, they are limited in the number of environments and user behaviors that they can cover. This makes algorithmic scenario generation a compelling method that can complement user studies by finding failures and elucidating a holistic view of the strengths and limitations of a robotic system's behavior.
Our goal is not to find a maximally adversarial scenario, but to generate a diverse dataset of challenging scenarios. Previous work [18, 20] has formulated scenario generation as a quality diversity (QD) problem and demonstrated the effectiveness of several QD algorithms at generating diverse collections of scenarios in a shared control teleoperation domain. In that domain, a user teleoperates a robotic arm with a joystick interface, while the robot observes the joystick inputs to infer the user's goal and assist the user in reaching their goal.
While prior work could generate diverse datasets of failure scenarios in one-shot shared teleoperation interactions, these interactions last only a few seconds, in contrast to collaborative, sequential tasks that last longer. For instance, many scenarios in the package labeling task (Fig. 1), where a user attaches a label while the robot presses a stamp, lasted a couple of minutes. The long duration of human-robot interaction scenarios makes their evaluation expensive in many domains. This limits the applicability of QD algorithms, which require millions of evaluations, in such expensive-to-evaluate domains.
Fig. 1: Example scenario in a collaborative package labeling task found by our proposed surrogate assisted scenario generation framework. The presence of the two objects behind the robot causes its expected-cost-minimizing policy to move towards the object in the front, resulting in a conflict with the user, who is reaching for the same object at the same time.
Our key insight is that _we can train deep neural networks as surrogate models to predict human-robot interaction outcomes and integrate them into the scenario generation process_. In addition to making scenario evaluations inexpensive, deep neural networks are differentiable, which allows us to integrate state-of-the-art differentiable quality diversity (DQD) algorithms [17] for further gains in search efficiency.
We make the following contributions: (1) We propose using deep neural networks as surrogate models to predict human-robot interaction outcomes, such as time to task completion, maximum robot path length, or total waiting time; (2) We integrate, for the first time, surrogate models with differentiable quality diversity (DQD) algorithms that leverage gradient information backpropagated through the surrogate model; (3) We show, in the shared control teleoperation domain of previous work [18] and in a shared workspace collaboration domain, that surrogate assisted scenario generation results in significant benefits in terms of sample efficiency. It also achieves a significant reduction in computational time in the collaboration domain, where scenario evaluations are particularly expensive.
## II Problem Statement
We focus on the problem of algorithmically generating a diverse and challenging dataset of human-robot interaction scenarios. We formulate the problem as a quality diversity (QD) problem and adopt the definition from prior work [17].
We assume a scenario parameterized by \(\mathbf{\theta}\in\mathbb{R}^{n}\). The scenario parameters could be object positions and types in the environment, human actions, or latent inputs to a generative model of environments, which is converted to a scenario via a function \(G(\mathbf{\theta})\). The QD algorithm evaluates the scenario by simulating human and robot actions in a virtual environment.
The objective function \(f:\mathbb{R}^{n}\rightarrow\mathbb{R}\) assesses the quality of a scenario \(\mathbf{\theta}\), after the scenario is evaluated in the simulator. Because our objective is to find challenging scenarios, the quality of a scenario is high when the performance of the robotic system is low, e.g., when it takes a long time for the human-robot team to finish the task.
We further assume a set of user-defined measure functions, \(m_{i}:\mathbb{R}^{n}\rightarrow\mathbb{R}\), or as a vector function \(\mathbf{m}:\mathbb{R}^{n}\rightarrow\mathbb{R}^{k}\), that quantify aspects of the scenario that we wish to diversify, e.g., distance between objects or noise in user inputs. The range of \(\mathbf{m}\) forms a measure space \(S=\mathbf{m}(\mathbb{R}^{n})\), which we assume is tessellated into \(M\) cells. Each scenario \(\mathbf{\theta}\) is mapped to a cell based on the measures \(\mathbf{m}(\mathbf{\theta})\). The scenarios that occupy cells in the measure space form an _archive_.
The QD objective is to fill in all cells of the archive with scenarios of maximum quality:
\[\max_{\mathbf{\theta}_{i}}\sum_{i=1}^{M}f(\mathbf{\theta}_{i}) \tag{1}\]
Here \(\mathbf{\theta}_{i}\) refers to the scenario with the highest quality in cell \(i\). If there are no scenarios in a cell, \(f(\mathbf{\theta}_{i})\) is assumed to be zero. Eq. 1 computes the QD-score [52] of a generated archive containing scenarios \(\mathbf{\theta}_{i}\).
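In code, computing the QD-score of Eq. 1 reduces to a sum over occupied archive cells; a minimal sketch, using np.nan as an assumed marker for empty cells:

```python
import numpy as np


def qd_score(cell_objectives):
    """QD-score (Eq. 1): sum of f(theta_i) over occupied cells.

    cell_objectives: (M,) array with one entry per cell; empty cells
    are marked with np.nan and contribute zero to the score.
    """
    return float(np.nansum(cell_objectives))
```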
The differentiable quality diversity (DQD) problem formulation is a special case of QD where the objective function \(f\) and measure functions \(\mathbf{m}\) are first-order differentiable.
## III Background
### _Quality Diversity Optimization_
Quality Diversity (QD) optimization aims to find high-quality solutions that are diverse with respect to a set of pre-specified measures - also called behavior characteristics. QD algorithms have been used to generate diverse locomotion strategies [46, 61], video game levels [51], nanomaterials [40], and building layouts [27].
A popular QD algorithm, MAP-Elites [46], attempts to generate an archive of high-performing solutions by retaining the best performing solution in each cell of the measure space (see Sec. II). MAP-Elites uniformly selects solutions from the archive and perturbs them with isotropic Gaussian noise to generate new solutions. Each new solution is mapped to a cell in measure space, and it is added to the archive if that cell is either empty or the incumbent solution is of worse quality. The intuition is that existing high-performing solutions, called the "elites", can act as stepping stones to generate elites in other parts of the archive. Prior work [26, 30, 66] has combined MAP-Elites with surrogate models that use Gaussian processes or deep neural networks to guide the search.
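For concreteness, a self-contained sketch of the MAP-Elites loop just described; it is illustrative rather than the evaluation code used in the experiments, and `evaluate` returning a cell index directly stands in for computing the measures and binning them.

```python
import numpy as np


def map_elites(evaluate, n_dims, lower, upper, n_iters, sigma=0.1, n_cells=100):
    """evaluate(theta) -> (objective, cell_index in [0, n_cells))."""
    rng = np.random.default_rng(0)
    objs = np.full(n_cells, -np.inf)       # best objective per cell
    elites = np.zeros((n_cells, n_dims))   # elite solution per cell
    for _ in range(n_iters):
        occupied = np.flatnonzero(np.isfinite(objs))
        if occupied.size == 0:             # bootstrap with a random solution
            theta = rng.uniform(lower, upper, n_dims)
        else:                              # perturb a uniformly chosen elite
            theta = elites[rng.choice(occupied)]
            theta = np.clip(theta + sigma * rng.standard_normal(n_dims),
                            lower, upper)
        f, cell = evaluate(theta)
        if f > objs[cell]:                 # keep the best solution per cell
            objs[cell], elites[cell] = f, theta
    return objs, elites
```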
### _Covariance Matrix Adaptation MAP-Annealing_
Recently, a new class of adaptive quality diversity algorithms [21, 17, 19] has emerged that combine the archiving properties of MAP-Elites with the adaptation mechanisms of CMA-ES [31]. The state-of-the-art algorithm of that class, Covariance-Matrix Adaptation MAP-Annealing (CMA-MAE) maintains a covariance matrix \(\mathbf{\Sigma}\) and a search point \(\mathbf{\theta}\in\mathbb{R}^{n}\). CMA-MAE samples a population of \(\lambda\) solutions from the Gaussian \(\mathbf{\theta}_{i}\sim\mathcal{N}(\mathbf{\theta},\mathbf{\Sigma})\). For each sampled solution, CMA-MAE computes the objective \(f(\mathbf{\theta}_{i})\) and measures \(\mathbf{m}(\mathbf{\theta}_{i})\), which map the solution to a cell \(e\) in the archive. The sampled solutions are then ranked based on archive improvement: \(f(\mathbf{\theta}_{i})-f_{A}(\mathbf{\theta}_{i})\). \(f_{A}(\mathbf{\theta}_{i})=t_{e}\) is computed based on an acceptance threshold \(t_{e}\), where \(t_{e}\) is updated every time a new solution \(\mathbf{\theta}^{\prime}\) is added to the cell \(e\) of the archive: \(t_{e}\leftarrow(1-\alpha)t_{e}+\alpha f(\mathbf{\theta}^{\prime})\). The learning rate \(\alpha\) regulates how quickly discount function \(f_{A}\) changes. CMA-MAE uses the ranked solutions \(\mathbf{\theta}_{i}\) to update \(\mathcal{N}(\mathbf{\theta},\mathbf{\Sigma})\) towards the natural gradient of the QD objective (Eq. 1) with respect to \(\mathbf{\theta}\).
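The annealing bookkeeping amounts to a few lines per candidate; a sketch, with `t` an assumed array of per-cell acceptance thresholds:

```python
def cma_mae_improvement(f, cell, t, alpha):
    """Improvement value and threshold update for one candidate.

    f:     objective value of the candidate solution.
    cell:  index of the archive cell the candidate maps to.
    t:     per-cell acceptance thresholds t_e (updated in place).
    alpha: archive learning rate.
    """
    improvement = f - t[cell]              # f(theta) - f_A(theta)
    if improvement > 0:                    # candidate enters the archive
        t[cell] = (1 - alpha) * t[cell] + alpha * f
    return improvement                     # candidates are ranked by this value
```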
### _Differentiable Quality Diversity (DQD) Optimization_
When the objective \(f\) and measure functions \(\mathbf{m}\) are differentiable, DQD algorithms [17] have shown significant performance benefits over their derivative-free counterparts, with the state-of-the-art algorithm being CMA-MAEGA [19]. Like CMA-MAE, CMA-MAEGA maintains a solution point \(\mathbf{\theta}\) and a MAP-Annealing archive. However, the Gaussian distribution \(\mathcal{N}(\mathbf{\mu},\mathbf{\Sigma})\) models a distribution of gradient coefficients in objective-measure space \(\mathbb{R}^{k+1}\), rather than a search direction distribution in the search space \(\mathbb{R}^{n}\). The coefficients combined
with objective and measure gradients produce a branching perturbation to the current solution point \(\mathbf{\theta}\) through the update \(\mathbf{\theta}_{i}=\mathbf{\theta}+c_{0}\mathbf{\nabla}f(\mathbf{\theta})+\sum_{j=1}^{k}c_{j} \mathbf{\nabla}\mathbf{m}_{j}(\mathbf{\theta})=\mathbf{\theta}+\mathbf{c}\mathbf{\nabla}g(\mathbf{\theta})\). Like CMA-MAE, the new solutions \(\mathbf{\theta}_{i}\) are ranked by improvement. The same ranking steps the search point \(\mathbf{\theta}\) and updates the coefficient distribution \(N(\mathbf{\mu},\mathbf{\Sigma})\) towards the natural gradient of the QD objective (Eq. 1).
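The branching step, given the objective and measure gradients at the current search point, is compact; a numpy sketch using the notation above, with the sampler interface assumed:

```python
import numpy as np


def branch(theta, grad_f, grad_m, mu, cov, batch_size, rng):
    """Candidates theta_i = theta + c0*grad_f + sum_j c_j*grad_m_j.

    grad_f: (n,) objective gradient; grad_m: (k, n) measure gradients.
    mu, cov: mean and covariance of the coefficient distribution over
    the (k+1)-dimensional objective-measure space.
    """
    grad_g = np.vstack([grad_f, grad_m])                        # (k+1, n)
    coeffs = rng.multivariate_normal(mu, cov, size=batch_size)  # (B, k+1)
    return theta + coeffs @ grad_g                              # (B, n)
```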
### _Algorithmic Scenario Generation_
Research in the algorithmic generation of scenarios has seen a range of applications, which include designing video game levels, training and testing robots and reinforcement learning agents, and finding failures in human-robot interaction.
For example, the procedural content generation (PCG) community [32, 58] focuses on generating game levels that provide entertainment value to a player. PCG combined with QD has resulted in generators for game levels with diverse game mechanics and agent behaviors [29, 23, 13, 41, 59, 57, 56].
In reinforcement learning, research has proposed procedural content generation for benchmarking agents [8, 6, 53]. There have also been an increasing number of approaches [62, 63, 25, 4, 10, 11] that co-evolve agents and environments for agent generalizability, with recent works generating environments that maximize regret between agent pairs [9, 39, 38, 48]. Most relevant to our work is the Deep Surrogate Assisted Generation of Environments (DSAGE) [3] algorithm, which exploits a surrogate model with quality diversity algorithms to generate environments. However, DSAGE has only been applied with _derivative-free_ quality diversity algorithms in _single-agent grid-world_ game domains. In contrast, we propose integrating _differentiable_ quality diversity algorithms with surrogate models in _complex human-robot interaction scenarios_.
In robotics, research has focused on generating scenarios to evaluate autonomous vehicles [2, 47, 1, 54, 28, 55, 24] and elicit targeted behaviors in motion planning algorithms [67]. Prior work has also adaptively generated environments as
Fig. 2: An overview of our proposed differentiable surrogate assisted scenario generation algorithm (DSAS) for HRI tasks. The algorithm runs in two stages: an inner loop to exploit a surrogate model of the human and the robot behavior (**red arrows**) and an outer loop to evaluate candidate scenarios and add them to a dataset (blue arrows). Our key insight is that leveraging predictions of the surrogate model is much faster than simulating the scenario, allowing many more steps of quality diversity optimization. The algorithm produces a diverse dataset, labeled by evaluation, that in turn improves the predictions of the surrogate model in subsequent iterations. Algorithm 1 contains the complete pseudocode of the algorithm.
curricula for learning [45, 43, 15, 16]. Most relevant to ours is prior work [18, 20] in human-robot interaction that applied the MAP-Elites and CMA-ME QD algorithms to find robot failures in a shared control teleoperation domain. In our work, we integrate surrogate models that predict human and robot interaction outcomes with state-of-the-art QD algorithms and show significant improvements in both sample and wall-clock efficiency. We test our approach both in the shared control teleoperation domain of previous work [18] and in a shared workspace human-robot collaboration domain, originally proposed in previous work on human-robot teaming [50].
## IV Surrogate Assisted Scenario Generation
Our method for algorithmically generating diverse collections of HRI scenarios builds upon recent advances in generating grid-world environments with surrogate models [3]. We describe the modifications necessary to scale these techniques from single-agent grid-world game domains to more complex HRI domains. Furthermore, we leverage the development of differentiable QD (DQD) algorithms [17] to efficiently exploit the differentiable surrogate model.
### _Surrogate Models for Human-Robot Interaction_
The DSAGE algorithm [3] proposed a method for efficiently generating collections of grid-world environments that minimize the performance of a single agent while being diverse in agent behavior and environment characteristics. The algorithm trains a deep surrogate model online on data produced by a QD algorithm to predict the objective and measure values for a given input environment. The key insight is that the diverse data generated by the QD algorithm helps train the surrogate model, while the surrogate model can efficiently guide the QD algorithm towards diverse and high-quality data.
The authors of DSAGE highlight that direct prediction of the objective and measures from the environment's initial conditions is difficult for a deep surrogate model. The DSAGE algorithm resolves this by predicting an occupancy grid of the agent, where each cell contains the probability that the agent occupied that cell at some point during the rollout. The surrogate model is trained to first predict agent occupancy before predicting the objectives and measures, improving the model accuracy.
DSAGE was designed for generating static, grid-world game environments that evaluate _single_ agents. We scale DSAGE to generate HRI scenarios that include both robot and human behavior and environment parameters. First, we allow parameters of both the environment and the human behavior as inputs to the surrogate model. Second, we discretize the shared workspace and predict two occupancy grids, one for the human and one for the robot. We then stack both predictions as inputs to a convolutional neural network, which predicts the objective and measure functions. Appendix A provides precise details of our surrogate model.
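The two-stage structure can be sketched as a PyTorch module; the layer sizes below are placeholders, and Appendix A gives the exact architecture.

```python
import torch
import torch.nn as nn


class HRISurrogate(nn.Module):
    """Sketch of the two-stage surrogate: scenario parameters ->
    robot and human occupancy grids -> objective and measures."""

    def __init__(self, n_params, grid=32, k_measures=2):
        super().__init__()

        def occupancy_head():
            return nn.Sequential(nn.Linear(n_params, 256), nn.ReLU(),
                                 nn.Linear(256, grid * grid), nn.Sigmoid())

        self.occ_robot, self.occ_human = occupancy_head(), occupancy_head()
        # CNN over the two stacked occupancy grids.
        self.cnn = nn.Sequential(
            nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(16 * 4 * 4, 1 + k_measures))  # objective + k measures
        self.grid = grid

    def forward(self, params):
        y_r = self.occ_robot(params).view(-1, 1, self.grid, self.grid)
        y_h = self.occ_human(params).view(-1, 1, self.grid, self.grid)
        out = self.cnn(torch.cat([y_r, y_h], dim=1))
        return out[:, 0], out[:, 1:]  # predicted objective f and measures m
```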
### _Scenario Repair via Mixed Integer Programming_
While the shared control teleoperation task of previous work included all objects in the same connected region, the shared workspace collaboration task in this work presents a new challenge for QD scenario generation. Each scenario is composed of disjoint workspaces for the objects, with each workspace imposing constraints on object arrangement, such as boundary and collision avoidance constraints.
We address this challenge by adopting a generate-then-repair strategy for object arrangement. The scenario is represented as unconstrained object locations. Following insights from prior work [65, 22], we then pass this scenario through a mixed integer program (MIP) that solves for the minimum cost edit of object locations - the sum of object displacement distances - that satisfies the constraints for a valid scenario. We provide the complete MIP formulation for scenario repair in Appendix B.
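The complete formulation appears in Appendix B; a simplified sketch for axis-aligned rectangular workspaces \(W_{w}\), with \(\mathbf{x}_{i}\) the repaired and \(\mathbf{x}_{i}^{0}\) the original location of object \(i\), binary variables \(z_{iw}\) assigning object \(i\) to workspace \(w\), and \(d\) a minimum separation distance, is:

\[\min_{\mathbf{x},\mathbf{z}}\sum_{i}\lVert\mathbf{x}_{i}-\mathbf{x}_{i}^{0}\rVert_{1}\quad\text{s.t.}\quad\sum_{w}z_{iw}=1\;\;\forall i,\qquad z_{iw}=1\Rightarrow\mathbf{x}_{i}\in W_{w},\qquad\lVert\mathbf{x}_{i}-\mathbf{x}_{j}\rVert_{\infty}\geq d\;\;\forall i\neq j\]

The implication and the disjunctive collision constraints are linearized with standard big-M constraints so that the repair remains a mixed integer program.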
### _Objective Regularization_
The generate-then-repair strategy guarantees that scenarios generated by our proposed system are valid scenarios. However, because each scenario is composed of disjoint workspaces that each object can occupy, a QD algorithm generating new scenarios must make large changes to these object locations to move an object to a new workspace. Yet, a QD algorithm, as a type of local search strategy, performs incremental steps to existing solutions to generate new scenarios, making the discovery of new workspace regions difficult. Moreover, an object outside a valid region is repaired with a minimum cost edit. This means repaired objects will always occupy the boundary of a valid region until the search discovers solutions inside the valid region.
We address these above issues with objective regularization. Scenarios that contain objects not satisfying the workspace constraints have a positive minimum cost edit according to the MIP repair. We discount the objective function \(f\) of the QD formulation by the cost of the MIP repair. The discounted objective pushes the QD search to favor scenarios satisfying all constraints over scenarios that must be repaired.
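Concretely, if \(c_{\mathrm{repair}}(\mathbf{\theta})\) denotes the minimum edit cost returned by the MIP, one natural instantiation of the regularized objective (the exact weighting is our illustrative assumption) is:

\[f_{\mathrm{reg}}(\mathbf{\theta})=f(\mathbf{\theta})-\lambda\,c_{\mathrm{repair}}(\mathbf{\theta}),\qquad\lambda>0,\]

so that scenarios already satisfying all constraints, with \(c_{\mathrm{repair}}(\mathbf{\theta})=0\), recover the unregularized objective.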
While objective regularization benefits general QD search, we note that surrogate assisted methods like DSAGE inherit additional benefits. As the surrogate model makes predictions for all possible scenarios, and not only scenarios satisfying the workspace constraints, the QD search that exploits the surrogate model can move towards high-magnitude inputs in invalid regions of the scenario space when these inputs result in high objective values. The objective regularization helps prevent QD algorithms from exploiting errors in the surrogate model at extreme regions of the scenario parameter space.
### _DQD with Surrogate Models_
The DSAGE algorithm exploits the surrogate model with _derivative-free_ QD algorithms. A key observation is that the surrogate model is an end-to-end differentiable neural network. We propose taking advantage of this by exploiting the surrogate model with differentiable quality diversity (DQD) algorithms [17], which leverage the gradients of the objective and measure functions to accelerate QD optimization.
### _Differentiable Surrogate Assisted Scenario Generation_
The proposed improvements result in two versions of our algorithm. The derivative-free version, Surrogate Assisted Scenario Generation (SAS), employs a derivative-free QD algorithm in the inner loop, such as CMA-MAE. The differentiable version, Differentiable Surrogate Assisted Scenario Generation (DSAS), employs a differentiable QD (DQD) algorithm in the inner loop. Here we present DSAS with the state-of-the-art DQD algorithm CMA-MAEGA in the inner loop.
On each iteration of the outer loop, we initialize a new surrogate archive to store solutions that the surrogate model predicts are high performing and diverse (line 3). Then, we begin the inner loop (line 5). On line 6, we evaluate the current solution point \(\mathbf{\theta}\) with the surrogate model to obtain the predicted objective \(\hat{f}\), measures \(\hat{\mathbf{m}}\), and the branching gradients \(\mathbf{\nabla}_{\hat{f}}\) and \(\mathbf{\nabla}_{\hat{\mathbf{m}}}\). We then add the solution \(\mathbf{\theta}\) to the surrogate archive (line 8) based on the predicted evaluations, after applying the regularization penalty (line 7). Next, we generate a batch of solutions based on the branching gradients (line 9). For each solution, we sample gradient coefficients, which, combined with the gradients, produce a new candidate solution (lines 10-12). We evaluate each new candidate solution \(\mathbf{\theta}_{i}^{\prime}\) with the surrogate model (line 13), apply the regularization penalty (line 14), and add the solution to the surrogate archive (line 15). After processing a batch, we update the search parameters of CMA-MAEGA to move the search towards maximizing the QD objective (line 17).
After completing an inner loop, we select a subset of solutions from the surrogate archive to label (line 19). For each set of scenario parameters \(\mathbf{\theta}\), we generate-and-repair a scenario (line 21), evaluate the robotic system on the scenario (line 22), update our dataset by adding the scenario labeled with the true objective \(f\), measures \(\mathbf{m}\), robot occupancy grid \(\mathbf{y}_{r}\), and human occupancy grid \(\mathbf{y}_{h}\) (line 23), and finally add the scenario to our ground-truth archive (line 24). After updating the training data with newly labeled scenarios, we train the occupancy predictor for both the robot (line 27) and human (line 28), then train the surrogate model to predict the objectives and measures (line 29). The inner loop in future iterations exploits the more accurate surrogate model to produce better scenarios.
```
Input:\(N\): Maximum number of evaluations, \(N_{exploit}\): Number of iterations in the model exploitation phase, \(\mathbf{\theta}_{0}\): Initial solution for CMA-MAEGA, \(B\): Batch size for CMA-MAEGA Output:Final version of the ground-truth archive \(\mathcal{A}_{gt}\)
1 Initialize the ground-truth archive \(\mathcal{A}_{gt}\), the dataset \(\mathcal{D}\), robot occupancy predictor \(sm_{r}\), human occupancy predictor \(sm_{h}\), objective and measure predictor \(sm\)
2while\(evals<N\)do
3 Initialize CMA-MAEGA with the surrogate archive \(\mathcal{A}_{surr}\) and initialize solution \(\mathbf{\theta}\) to \(\mathbf{\theta}_{0}\)
4 Initialize CMA-ES parameters \(\mathbf{\mu}\), \(\mathbf{\Sigma}\)
5for\(itr\in\{1,2,\ldots,N_{exploit}\}\)do
6\(\hat{f},\mathbf{\nabla}_{\hat{f}},\hat{\mathbf{m}},\mathbf{\nabla}_{\hat{\mathbf{m}}}\gets sm(\mathbf{\theta},sm_{r}(\mathbf{\theta}),sm_{h}(\mathbf{ \theta}))\)
7\(\hat{f}\leftarrow\hat{f}-reg(\mathbf{\theta})\)
8\(\mathcal{A}_{surr}\leftarrow\textit{add\_solution}(\mathcal{A}_{surr},(\mathbf{ \theta},\hat{f},\hat{\mathbf{m}}))\)
9for\(i\in\{1,2,\ldots,B\}\)do
10\(\mathbf{c}\sim\mathcal{N}(\mathbf{\mu},\mathbf{\Sigma})\)
11\(\mathbf{\nabla}_{i}\gets c_{0}\mathbf{\nabla}_{\hat{f}}+\Sigma_{j=1}^{k}\left(c_{j} \mathbf{\nabla}_{\hat{\mathbf{m}}_{j}}\right)\)
12\(\mathbf{\theta}_{i}^{\prime}\leftarrow\mathbf{\theta}+\mathbf{\nabla}_{i}\)
13\(\hat{f}^{\prime},*,\hat{\mathbf{m}}^{\prime},*\gets sm(\mathbf{\theta}_{i}^{\prime}, sm_{r}(\mathbf{\theta}_{i}^{\prime}),sm_{h}(\mathbf{\theta}_{i}^{\prime}))\)
14\(\hat{f}^{\prime}\leftarrow\hat{f}^{\prime}-reg(\mathbf{\theta}_{i}^{\prime})\)
15\(\mathcal{A}_{surr}\leftarrow\textit{add\_solution}(\mathcal{A}_{surr},(\mathbf{ \theta}_{i}^{\prime},\hat{f}^{\prime},\hat{\mathbf{m}}^{\prime}))\)
16 end for
17 Update \(\mathbf{\theta}\), \(\mathbf{\mu}\), \(\mathbf{\Sigma}\) via CMA-MAEGA update rules
18 end for
19\(\mathbf{\Theta}\leftarrow\textit{select\_solutions}(\mathcal{A}_{surr})\)
20for\(\mathbf{\theta}\in\mathbf{\Theta}\)do
21\(scenario\gets G(\mathbf{\theta})\)
22\(f,\mathbf{m},\mathbf{y}_{r},y_{h}\leftarrow\textit{evaluate}(scenario)\)
23\(\mathcal{D}\leftarrow\mathcal{D}\cup(\mathbf{\theta},f,\mathbf{m},\mathbf{y}_{r},\mathbf{y}_{h})\)
24\(\mathcal{A}_{gt}\leftarrow\textit{add\_solution}(\mathcal{A}_{gt},(\mathbf{ \theta},f,\mathbf{m}))\)
25\(evals\gets evals+1\)
26 end for
27\(sm_{r}\)_train_(\(\mathcal{D}\))
28\(sm_{h}\)_train_(\(\mathcal{D}\))
29\(sm\)_train_(\(\mathcal{D}\), \(sm_{r}\), \(sm_{h}\))
30 end while
```
**Algorithm 1** Differentiable Surrogate Assisted Scenario Generation (DSAS).
## V Domains
We consider two HRI domains from prior work: shared control teleoperation [35] and shared workspace collaboration [50] with a 6-DoF Gen2 Kinova JACO arm. The following subsections provide a brief description of the search space, objective, and measure functions in these domains. Additional details about the robot policy, the human policy, and the implementation are provided in Appendix C-A, Appendix C-B, and Appendix D respectively.
### _Shared Control Teleoperation_
A teleoperation task involves a user providing joystick inputs to a robot arm with the intention of reaching a goal in the environment. It is generally hard for users to teleoperate a 6-DoF robot arm to the correct configuration [35]. Thus, in shared control teleoperation, the robot attempts to infer the human goal from a set of candidate goals by observing the low-dimensional joystick inputs provided by the user.
Following the shared control teleoperation framework from previous work [35], the robot solves a POMDP with the user's goal as a latent variable, while it updates its belief about the goal based on the human input trajectory assuming a noisily-optimal user. To enable real-time decision-making, the robot performs hindsight optimization to approximate the POMDP and assumes a first-order approximation of the value function. This results in the robot's actions being a weighted average
of the optimal path towards each goal, where the weights are proportional to the respective goal probabilities.
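A compact sketch of this inference-and-blending step (the notation is ours; the full system uses hindsight optimization and the value-function approximation described above):

```python
import numpy as np


def shared_control_step(belief, input_likelihood, goal_actions):
    """One step of goal inference and action blending.

    belief:           (G,) prior probability of each candidate goal.
    input_likelihood: (G,) likelihood of the observed joystick input
                      under a noisily-optimal user heading to each goal.
    goal_actions:     (G, d) optimal robot action towards each goal.
    """
    belief = belief * input_likelihood     # Bayesian belief update
    belief /= belief.sum()
    action = belief @ goal_actions         # probability-weighted blend
    return action, belief
```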
To formalize the scenario generation problem in the shared teleoperation domain, we follow the QD formulation of prior work [18, 20]. The environment parameters are the positions of the two goal objects in a bounded workspace, constrained to be reachable by the robot arm. The simulated human provides a trajectory of joystick inputs towards their goal object, parameterized by a set of waypoints. The human model parameters are disturbances to these waypoints. The scenario parameters \(\mathbf{\theta}\) include the environment and human model parameters. The objective function \(f\) in the QD search is the time taken to reach the correct goal, with a maximum time limit of 10 seconds if the robot fails to reach the goal. The search aims to find scenarios that are diverse with respect to the noise in human inputs and the scene clutter, thus the measures \(\mathbf{m}\) are the human variation from the optimal path and the distance between goals.
### _Shared Workspace Collaboration_
In shared workspace collaboration, the human and the robot simultaneously execute a sequential task in a shared workspace with disjoint workspace regions. We consider a package labeling task, which instantiates the human-robot shared workspace collaboration domain of previous work [50, 36]. The human and the robot have different actions, i.e., the human labels a package while the robot presses a stamp, and they share a set of goals, i.e., boxes to perform the task. The human and the robot cannot work simultaneously on the same object and the task finishes when all boxes are labeled and stamped.
We assume that the human picks a label for an object from a starting point and moves towards that object. Different boxes require different labels, thus we model the human as attempting to reach the box corresponding to the label they picked up, regardless of the robot's actions. On the other hand, the robot can switch its goal while moving, since stamping can be performed on any goal object with the same tool. This domain is more complex than the shared control teleoperation task because it includes manipulating a sequence of objects, rather than reaching a single object, and the objects are in disjoint workspace regions.
As in the shared control teleoperation task, the robot reasons over the human goal, which is a latent variable in a POMDP, and uses hindsight optimization and first-order value function approximations to act in real time. Unlike in shared control teleoperation, the human does not provide direct joystick inputs to the robot but acts independently. The robot tracks the position of the human hand as its observation. Furthermore, the robot attempts to avoid the goal intended by the human. Instead, it retains a feasible set of its goals that are different from a candidate human goal and selects the nearest goal from that set, i.e., it maps a human candidate goal to a different goal-to-go. The robot then takes an action that is a weighted average of the optimal path towards each goal-to-go, with weights proportional to the probability of the corresponding human goal.
We formalize the scenario generation problem to be similar to the formalization in the shared teleoperation domain. The scenario parameters consist of the locations of three goal objects in a larger, disconnected workspace. We set the workspace boundaries to the quadrants of the L-shaped table in Fig. 1 that are reachable by both the human and the robot arm. We model the human as moving to their goal while avoiding obstacles by optimally solving an MDP. The objective \(f\) is again the time to task completion since we wish to find challenging scenarios.
We choose two sets of measures \(\mathbf{m}\) described below:
#### V-B1 Minimum distance between goal objects and maximum wrong goal probability
We adopt the minimum distance measure from the shared control teleoperation domain in previous work [18]. Furthermore, one of the failure scenarios found in that work was caused by incorrect inference of the human goal by the robot. Thus, we set as our second measure the maximum probability that is assigned to the wrong goal by the robot during the task, to search for potential failures in which the robot actually infers the human goal correctly.
#### V-B2 Robot path length and total wait time
In the shared workspace collaboration task that we consider, there are two main sources of delay: the robot needing to move across the two workspaces to reach different goals, and the wait time caused due to both the human and the robot wanting to work on the same goal. Hence, we choose the path length of the robot and the total wait time as the two measures to see how the team performance changes as these are varied.
## VI Experiments
### _Experiment Design_
#### Vi-A1 Independent variables
Our two independent variables are the domain and the algorithm.
Our three domains are: (a) shared control teleoperation with distance between the goals and human variation as measures; (b) shared workspace collaboration with minimum distance between the goals and maximum wrong goal probability as measures (shared workspace collaboration I); (c) shared workspace collaboration with robot path length and total wait time as measures (shared workspace collaboration II).
In each domain, we compare five different algorithms:
* **Random search**, where we uniformly sample solutions from the valid regions.
* **MAP-Elites**[46], as adapted for scenario generation in previous work [18], with the objective regularization described in Sec. IV-C.
* **CMA-MAE**[19] with objective regularization.
* **SAS**: The proposed non-differentiable version of our surrogate assisted scenario generation algorithm. We apply CMA-MAE as the derivative-free QD algorithm in the inner loop of Algorithm 1.
* **DSAS**: The proposed differentiable surrogate assisted scenario generation with CMA-MAEGA in the inner loop, as shown in Algorithm 1.
#### Vi-A2 Dependent variable
We set QD-score [52] (Eq. 1 in Sec. II) as the dependent variable that summarizes the quality and diversity of solutions. We compute the QD-score at the end of 10,000 evaluations, averaged over 10 trials of random search, MAP-Elites, CMA-MAE, and - because of GPU usage constraints - 5 trials of SAS and DSAS.
#### Vi-A3 Hypotheses:
**H1.**_We hypothesize that the surrogate assisted QD algorithms SAS and DSAS will outperform CMA-MAE, MAP-Elites and random search._ We base this hypothesis on previous work, which has shown the benefit of integrating quality diversity with surrogate model predictions [66, 3].
**H2.**_We hypothesize that DSAS will outperform SAS._ We base this hypothesis on previous work, which has shown that DQD algorithms perform significantly better than their derivative-free counterparts [17] when the objective and measure gradients are available.
### _Analysis_
A two-way ANOVA test showed a significant interaction effect (\(F(8.0,105.0)=305.79,p<0.001\)). Simple main effects analysis on each domain showed a significant effect of the algorithm on the QD score (\(p<0.001\)).
Pairwise t-tests with Bonferroni corrections showed that SAS and DSAS performed significantly better than CMA-MAE, MAP-Elites, and random search in the shared control teleoperation and shared workspace collaboration I domains (\(p<0.001\)). However, in the shared workspace collaboration II domain, they outperformed CMA-MAE (\(p<0.001\)) and random search (\(p<0.001\)), while there was no significant difference with MAP-Elites. We attribute this to the large variance of the MAP-Elites runs and to the fact that MAP-Elites can easily populate the archive for the given measure of the robot's path length by making small isotropic perturbations in the object positions. Fig. 3 shows the QD-score as a function of the number of evaluations. We see that both SAS and DSAS achieve a high QD-score early in the search, indicating high sample efficiency.
The comparison between SAS and DSAS showed mixed results, with SAS performing significantly better in the shared workspace collaboration I domain (\(p<0.001\)), DSAS performing significantly better in the shared control teleoperation domain (\(p<0.001\)), and no significance in the shared workspace collaboration II domain (\(p=0.07\)). We observe that the shared control teleoperation domain has a 9-dimensional search space, while the shared workspace collaboration task has a 6-dimensional search space. Previous work [17, 19] has shown DQD algorithms improving efficiency in very high-dimensional search spaces by reducing the search from a high-dimensional search space to a low-dimensional objective-measure space. We conjecture that this explains the significant improvement in the shared control teleoperation domain and we will investigate higher-dimensional domains in future work.
We further compare all algorithms with respect to the wall-clock time. We ran all our experiments on a high-performance server, allocating 16 Intel Xeon-2640v3 CPU cores. We additionally allocated 1 NVIDIA V100 GPU for SAS and DSAS for online training of the surrogate model.
We observe that in the shared control teleoperation tasks, where all scenario evaluations last up to 10 seconds, CMA-MAE performed better in terms of wall-clock time than the surrogate assisted algorithms, even though the latter had better sample efficiency. On the other hand, in the shared
Fig. 3: QD-score attained in the three domains as a function of number of evaluations (top) and the wall-clock time (bottom). Plots show the mean and standard error of the mean.
workspace collaboration domains, where scenario evaluations last a couple of minutes because of the larger task complexity, surrogate assistance showed wall-clock time efficiency, unlike prior work [3] in surrogate assisted generation of environments that only showed sample efficiency improvements. Finally, we note that in the inner loop, SAS only requires a forward pass through the surrogate model while DSAS requires both a forward pass for prediction and backpropagation for the objective and measure gradients. As a result, SAS can perform more evaluations than DSAS in a given amount of time.
Fig. 4 shows example heatmaps of the final archives in the shared control teleoperation domain. The heatmaps for MAP-Elites and random search match the results from prior work [18]. SAS and DSAS nearly fill in the archive, including scenarios in the lower right corner of the archive. This is the hardest region of the archive to find failures because it includes scenarios with a nearly optimal human and a large distance between the goal objects, which would typically enable correct inference of the human goal.
### _Ablation: Effect of Objective Regularization_
To test the effect of objective regularization on performance, we choose the shared workspace collaboration I domain and run 10 trials of DSAS, SAS, CMA-MAE, and MAP-Elites without objective regularization. As mentioned in Sec. IV-C, regularization is crucial for surrogate assisted QD methods, SAS and DSAS, since otherwise, the QD algorithm exploits surrogate model prediction errors, pushing the search towards infinite parameter values and breaking the QD search. Hence, due to numerical errors, none of the SAS or DSAS runs without objective regularization could be completed.
We compare the results of MAP-Elites and CMA-MAE runs with their corresponding runs from the previous section that included objective regularization. Statistical analysis showed that MAP-Elites performed similarly with and without regularization, while CMA-MAE performed significantly worse without objective regularization (\(t=-7.08,p<0.001\)). We attribute this to the fact that perturbations of existing solutions in MAP-Elites are not guided by the objective values. On the other hand, CMA-MAE guides the search based on the objective improvements of the sampled solutions, hence objective regularization has a significant effect on performance.
### _Real World Demo_
We wish to demonstrate that the generated scenarios are reproducible in the real world. Thus, we recreate four example scenarios from the generated archives and test them with a 6-DoF Gen2 Kinova JACO arm. We track the human hand position with a Kinect v1 sensor and the OpenNI package [44] for skeleton tracking. We discuss the scenarios below. We include videos of all four scenarios in the supplementary material.
#### Vi-D1 Incorrect robot motion because of delayed human goal inference (Fig. 5a)
We selected this scenario from the archive generated by DSAS in the shared workspace collaboration II domain. The scenario time was 81s and the robot path length was 4.9m.
In this scenario, after the human finishes working on goal G1 and the robot on G3, the robot is closer to G2 than its other remaining goal, G1. Based on the feasible goal set formulation (Sec. V-B), G2 becomes the goal-to-go for human candidate goals G1 and G3. Given that the combined probability of the human going to either G1 or G3 is higher than the probability of the human going to G2, the robot moves towards G2. However, once the robot realizes that the human is actually moving to G2 as well, the robot has to move all the way back to goal G1, resulting in a significant delay.
#### Vi-D2 Incorrect human goal inference with limited effect on robot motion (Fig. 5b)
We select a scenario from the archive generated by SAS with a scenario time of 77s and a very high maximum wrong goal probability of 0.9.
The human finishes working on G1 and the robot on G2. As the human moves towards G2, the robot incorrectly thinks that the human is moving to G3, which is near the optimal path to G2, causing the robot to slow down in anticipation of the human motion. After the human reaches G2, the robot continues moving to G3.
#### Vi-D3 Long robot motion with correct human goal inference (Fig. 5c)
We additionally wish to find scenarios that result in poor team performance that is not due to incorrect inference. We select a scenario from a SAS archive in the shared workspace collaboration I domain that had a completion time of 74s and a low maximum wrong goal probability of 0.3.
The poor performance is caused by the interaction between the robot's policy and the object placement. As the robot
Fig. 4: Comparison of the final archive heatmaps in the shared control teleoperation domain.
moves between the two workspaces following a straight line path, the robot reaches a configuration close to self-collision or to joint limits, which prompts the system to perform a re-planning motion back to its start configuration. This happened twice during task execution, resulting in a significant delay.
#### Vi-D4 Long wait time due to both teammates needing to work on the same goal (Fig. 5d)
Finally, we select a scenario from a DSAS archive in the shared workspace collaboration II domain that has a high human and robot wait time.
This scenario was simple albeit unanticipated. The human goes to G1 followed by G2, while the robot goes to G2 followed by G1. The team coordinates smoothly until both agents need to work on G3 to finish the task, causing a delay.
## VII Discussion
### _Limitations_
Our approach scales surrogate assisted scenario generation from single-agent grid-world domains to complex human-robot interaction domains with continuous actions, environment dynamics, and object locations. However, our evaluation domains consist of objects of the same type and simple human models. We note that SAS and DSAS are general algorithms and we are excited about integrating them with more complex models of environments [22] and human actions [37, 5], as well as leveraging high-fidelity human model simulators [64, 14] to improve realism. Furthermore, our system does not explain the reason behind the observed robot behavior in the generated scenarios, and future work will explore integrating scenario generation with methods for failure explanation [7]. Finally, while we focus on a single human interacting with a single robot, we believe that our workspace occupancy-based approach for surrogate model predictions can be extended to multi-human-robot team settings.
### _Implications_
We presented the SAS and DSAS scenario generation algorithms that accelerate QD scenario generation via surrogate models. Results in a shared control teleoperation domain of previous work [18] and in a shared workspace collaboration domain show significant improvements in search efficiency when generating diverse datasets of challenging scenarios.
For the first time in surrogate assisted scenario generation methods, we see improvements not only in sample efficiency but also in wall-clock time in the shared workspace collaboration domain, where evaluations last a couple of minutes. On the other hand, the additional computation in the inner loop of the surrogate assisted algorithms resulted in more time required to match and exceed the performance of the baselines in the shared control teleoperation domain, where scenario evaluations last only a few seconds. Thus, for running-time performance, we recommend surrogate assisted methods in domains with expensive evaluations, in which the additional computation in the inner loop is offset by the improvement in sample efficiency.
We additionally highlight an unexpected benefit of our system during development. When we tested the shared workspace collaboration domain, SAS and DSAS discovered failure scenarios that exploited bugs in our implementation, which were subsequently fixed. For instance, some goal locations were reachable by the robot arm in the real world but unreachable in simulation because of small errors in the robot's URDF file, which prompted us to correct it.
Overall, we envision the proposed algorithms as a valuable tool to accelerate the development and testing of HRI systems before user studies and deployment. We consider this an important step towards circumventing costly failures and reducing the risk of human injuries, which is a critical milestone for widespread acceptance and use of HRI systems.
|
2304.02852 | Classification of Skin Disease Using Transfer Learning in Convolutional
Neural Networks | Automatic classification of skin disease plays an important role in
healthcare especially in dermatology. Dermatologists can determine different
skin diseases with the help of an android device and with the use of Artificial
Intelligence. Deep learning requires a lot of time to train due to the number
of sequential layers and input data involved. Powerful computer involving a
Graphic Processing Unit is an ideal approach to the training process due to its
parallel processing capability. This study gathered images of 7 types of skin
disease prevalent in the Philippines for a skin disease classification system.
There are 3400 images composed of different skin diseases like chicken pox,
acne, eczema, Pityriasis rosea, psoriasis, Tinea corporis and vitiligo that was
used for training and testing of different convolutional network models. This
study used transfer learning to skin disease classification using pre-trained
weights from different convolutional neural network models such as VGG16,
VGG19, MobileNet, ResNet50, InceptionV3, Inception-ResNetV2, Xception,
DenseNet121, DenseNet169, DenseNet201 and NASNet mobile. The MobileNet model
achieved the highest accuracy, 94.1% and the VGG16 model achieved the lowest
accuracy, 44.1%. | Jessica S. Velasco, Jomer V. Catipon, Edmund G. Monilar, Villamor M. Amon, Glenn C. Virrey, Lean Karlo S. Tolentino | 2023-04-06T04:13:54Z | http://arxiv.org/abs/2304.02852v1 | # International Journal of Emerging Technology and Advanced Engineering
###### Abstract
Automatic classification of skin disease plays an important role in healthcare, especially in dermatology. Dermatologists can determine different skin diseases with the help of an Android device and with the use of Artificial Intelligence. Deep learning requires a lot of time to train due to the number of sequential layers and input data involved. A powerful computer with a Graphics Processing Unit is an ideal approach to the training process due to its parallel processing capability. This study gathered images of 7 types of skin disease prevalent in the Philippines for a skin disease classification system. There are 3400 images composed of different skin diseases like chicken pox, acne, eczema, pityriasis rosea, psoriasis, tinea corporis and vitiligo that were used for training and testing of different convolutional network models. This study applied transfer learning to skin disease classification using pre-trained weights from different convolutional neural network models such as VGG16, VGG19, MobileNet, ResNet50, InceptionV3, Inception-ResNetV2, Xception, DenseNet121, DenseNet169, DenseNet201 and NASNet mobile. The MobileNet model achieved the highest accuracy, 94.1%, and the VGG16 model achieved the lowest accuracy, 44.1%.
Skin Disease Classification, Deep Learning, Convolutional Neural Networks, Transfer Learning, Python
## I Introduction
Skin diseases are defined as conditions that typically develop inside the body or on the skin and manifest outside. There are 3000 known types of skin disease [1]. Some conditions are uncommon while others occur commonly. Generally, these conditions bring itch, pain, and sleep deprivation. Other effects of skin diseases include emotional and social impact due to their visible manifestations. However, dermatologists assure that the majority of skin diseases can be controlled when properly diagnosed and given proper medication.
An accurate and precise automated skin disease detection application that can be used by dermatologists can help reduce their workload.
Big Data refers to gathering and processing datasets whose size and complexity transcend the capacity of conventional data processing applications. It is distinguished by 5Vs: (1) huge volume of data, (2) wide variety of data types, (3) velocity of data processing, (4) variability of data, and (5) value of data [2]. Some of the repositories available online include molecular, clinical, and epidemiology data. This provides a vast space of research opportunities for different scientific advancements [3]. By combining the use of big data, image recognition technology, and the field of dermatology, patients, dermatologists, and the research community might reap a great benefit. This is because many skin diseases can be diagnosed by medical professionals through visual inspection with the naked eye. The distinct visual features of each condition make them easy to diagnose with the use of artificial intelligence and deep learning technologies. Moreover, skin diseases that are common in the Philippines can be easily identified with the use of image recognition technologies. These skin diseases include chicken pox, acne, eczema, pityriasis rosea, psoriasis, tinea corporis, and vitiligo.
Previously, MobileNet was the only model used in skin disease classification [4]. In this paper, additional learning models are implemented, such as VGG16, VGG19, Xception, ResNet50, InceptionV3, InceptionResNetV2, DenseNet121, DenseNet169, DenseNet201, and NASNet Mobile. They were used to classify skin diseases from images gathered from professional, publicly accessible websites such as photo atlases of dermatology, and were tested to determine whether they outperform the previously implemented MobileNet.
## II Conceptual Literature
### _Transfer Learning_
Human learners have the ability to naturally transfer their knowledge from one task to another. In other words, when faced with new challenges, people can recognize and use the pertinent information from past experiences. The ease of learning a new task depends on how closely it resembles our previous knowledge. Contrarily, typical machine learning algorithms focus on narrow tasks. Transfer learning aims to change this by creating strategies to use knowledge acquired in one or more source tasks and apply it to enhance learning in a related target task. To make machine learning as effective as human learning, knowledge transfer techniques continue to be advanced [5].
### _Keras Platform_
A Fully Convolutional Network (FCN) was implemented, designed, and developed using Keras, Python, and Theano in the study "Fully convolutional networks for segmenting pictures from an embedded camera" [6]. In that work, the FCN is used to perform basic computer vision operations on images from a small robot-mounted stereo imaging sensor.
The network's design was prototyped using the Keras library, which accelerated the search for a network with high accuracy and minimal computing resource usage. The dataset of images is modified to fit the stereo camera imaging acquisition presets for the robot. It was also used for the training and validation of the proposed network.
### _Inception V3_
The Inception-v3 model of the TensorFlow platform was used by the researchers in the study "Inception-v3 for flower classification" [7] to categorize flowers. The flower category dataset was retrained using transfer learning technology, which can significantly increase flower classification accuracy. In comparison to previous methods, the model's classification accuracy was 95% for the Oxford-17 flower dataset and 94% for the Oxford-102 flower dataset.
### _MobileNet_
Researchers utilized a Convolutional Neural Network model called MobileNet in the study "Driver distraction detection using single convolutional neural network" [8] to identify driver distraction.
The accuracy of MobileNet is seen to be higher than that of Inception-ResNet. Moreover, system results vary widely depending on CPU/GPU processing speed.
### _Inception-ResNet-V2_
The researchers introduced a brand-new family of modules called the PolyInception in their paper "PolyNet: A Pursuit of Structural Diversity in Very Deep Networks" [9]. These modules can flexibly replace various network components, either in composition or in isolation. PolyInception modules can be chosen based on architectural efficiency to increase expressive capability while maintaining a similar computational cost.
Inception-ResNet-v2 has the highest documented single-model accuracy on ImageNet. It is the most recent version of the residual structure, combining the two approaches: Inception blocks are utilized to capture the residuals. Inception blocks are the building blocks of GoogLeNet, and their structures have undergone multiple iterations of optimization and refinement.
### _Vgg-16_
Different CNNs have been introduced with various architectural designs. With smaller convolutional kernel sizes and strides, the VGG-16 consists of 16 layers (13 convolutional layers and 3 fully connected layers). The first two fully connected layers have 4096 channels each, while the third has 1000 channels. With the exception of sampling the inputs from the cropped multi-scale training images, VGG-16 uses a nearly identical training process to AlexNet. Using such a convolutional neural network, the marine industry can recognize visual objects [10].
### _Vgg-19_
The researchers suggested a method to help the blind by delivering contextual information about the surroundings using 360\({}^{\circ}\) view cameras combined with deep learning in the study "360\({}^{\circ}\) view camera based visual assistive technology for contextual scene information" [11]. The feed gives the user contextual information in the form of audio. This is accomplished by utilizing transfer learning with the pre-trained VGG-19 convolutional neural network (CNN) to classify the scene data.
The VGG-19 convolutional neural network is a 19-layer network. It is composed of convolutional layers, max-pooling layers, fully connected layers, and an output Softmax layer.
## 3 Methodology
### _Dataset_
The photos required for the project's development are sourced from the www.dermweb.com photo atlas, notably www.dermnetnz.org, as well as many clinical dermatological photo atlas publications. Acne, Varicella (chickenpox), eczema, Pityriasis rosea, psoriasis, vitiligo, and Tinea corporis are examples of the skin diseases depicted in Figure 1. The datasets were compiled using a combination of publicly accessible dermatological repositories, dermatology color picture atlases, and photographs acquired by hand. Dermatologists have validated their categorization into skin disorders.
The dataset comes from a combination of open-access dermatological websites, color atlases of dermatology, and manually taken photographs. The dataset is composed of 7 categories of skin diseases, and each image is in .jpeg format. There is a total of 3,406 images.
### _Experiment_
The system will be built on the Keras platform and will use Tensorflow as its backend. The Pycharm IDE will be used to develop the app. The method can detect skin problems such as acne, eczema, psoriasis, vitiligo, Tinea corporis, chicken pox, and Pityriasis rosea. This is accomplished through the use of convolutional neural network transfer learning models such as the VGG 16, VGG 19, Inception, Xception, ResNet50, DenseNet, and Mobilenet.
Figure 1: Sample Images of Dataset
Figure 2: Program flowchart of the Python Code
Referring to Figure 2, imports such as Numpy, Keras, Scikit-Learn, and Matplotlib are organized first by the application.
The dataset is then organised into separate directories for the training, testing, and validation data. The third step is to load photographs of skin conditions from the category subfolders. The next step is building a base model from the various pretrained convolutional neural networks. Next, the data is preprocessed to extract the features; Keras includes tools to handle this automatically. The model's training and testing configuration comes next. The model is trained using the Adam optimizer. After training, the various architectures are assessed and compared based on model accuracy, confusion matrix, loading time, and weight size, in order to determine which architecture is best for classifying skin diseases.
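To make this workflow concrete, the following is a minimal Keras sketch of the pipeline, using MobileNet as an example base model. The directory names, batch size, number of epochs, and the choice to freeze the convolutional base are illustrative assumptions, not the study's exact training script.

```python
# Minimal transfer-learning sketch of the pipeline described above.
# Directory paths and hyperparameters are illustrative assumptions.
from tensorflow.keras.applications import MobileNet
from tensorflow.keras.layers import Dense, GlobalAveragePooling2D
from tensorflow.keras.models import Model
from tensorflow.keras.preprocessing.image import ImageDataGenerator

NUM_CLASSES = 7  # acne, chicken pox, eczema, pityriasis rosea,
                 # psoriasis, tinea corporis, vitiligo

# Load skin disease images from per-class subfolders.
train_gen = ImageDataGenerator(rescale=1.0 / 255).flow_from_directory(
    "dataset/training", target_size=(224, 224),
    batch_size=32, class_mode="categorical")
val_gen = ImageDataGenerator(rescale=1.0 / 255).flow_from_directory(
    "dataset/validation", target_size=(224, 224),
    batch_size=32, class_mode="categorical")

# Base model with pre-trained ImageNet weights; the convolutional
# layers are frozen so only the new classification head is trained.
base = MobileNet(weights="imagenet", include_top=False,
                 input_shape=(224, 224, 3))
base.trainable = False
x = GlobalAveragePooling2D()(base.output)
outputs = Dense(NUM_CLASSES, activation="softmax")(x)
model = Model(base.input, outputs)

# The study trains with the Adam optimizer.
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_gen, validation_data=val_gen, epochs=10)
```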
When using pretrained convolutional networks, the required input image size differs for each model. The input shape is given by the image size (width and height) and the number of channels. Table I shows the fixed input image size for each model.
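For reference, the standard ImageNet input resolutions of these pre-trained Keras models are listed below; these are the library defaults and stand in for the values of Table I, which is not reproduced here.

```python
# Default square input resolutions (pixels) of the pre-trained Keras
# models used in this study; all inputs have 3 colour channels.
INPUT_SIZE = {
    "VGG16": 224, "VGG19": 224, "MobileNet": 224, "ResNet50": 224,
    "DenseNet121": 224, "DenseNet169": 224, "DenseNet201": 224,
    "NASNetMobile": 224,
    "InceptionV3": 299, "InceptionResNetV2": 299, "Xception": 299,
}
```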
## IV Results
The following criteria were looked at to compare and validate each pre-trained convolutional neural network's performance in classifying skin diseases: the confusion matrix, loading speed, accuracy, and weight size.
### _Confusion Matrices_
The confusion matrices of several models over the seven types of skin diseases are displayed in Figures 3-12. The row denotes a predicted class, while the column denotes the actual class [13]. It is also known as a matching matrix. This demonstrates the commonality in misclassification across several convolutional neural networks.
Figure 5: InceptionResNetV2 Confusion Matrix
Figure 6: ResNet50 Confusion Matrix
Figure 7: VGG16 Confusion Matrix
Figure 9: VGG19 Confusion Matrix
Figure 10: DenseNet121 Confusion Matrix
Figure 11: DenseNet169 Confusion Matrix
Figure 12: NASNet Mobile Confusion Matrix
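A confusion matrix of this kind can be produced for each trained model with scikit-learn, as in the sketch below. It assumes a trained `model` such as the one from the earlier sketch; note that scikit-learn's convention (rows: actual class, columns: predicted class) is the transpose of the one quoted above.

```python
# Sketch: confusion matrix for one trained model (assumes `model`
# from the earlier transfer-learning sketch).
import numpy as np
from sklearn.metrics import confusion_matrix
from tensorflow.keras.preprocessing.image import ImageDataGenerator

test_gen = ImageDataGenerator(rescale=1.0 / 255).flow_from_directory(
    "dataset/testing", target_size=(224, 224), batch_size=32,
    class_mode="categorical", shuffle=False)  # keep label order fixed

y_true = test_gen.classes                    # actual class indices
y_pred = np.argmax(model.predict(test_gen), axis=1)
cm = confusion_matrix(y_true, y_pred)        # rows: actual, cols: predicted
print(cm)
```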
## V Conclusion
The MobileNet model outperforms the others with an accuracy of 94.1% and a weight size of 16.823 MB. It offers the highest accuracy and the smallest weight size. VGG16 and VGG19, on the other hand, load faster than MobileNet, taking 3.543 and 3.809 seconds, respectively.
### _Acknowledgement_
The authors would like to express their acknowledgement for the assistance of the following: Jean Wilmar Alberio, Jonathan Apuang, John Stephen Cruz, Mark Angelo Gomez, Benjamin Molina Jr., and Lyndon Tuala, for their utmost contribution and effort on the completion of this study.
|
2310.09898 | Monochromatic Mass Spectrum of Primordial Black Holes | During slow-roll inflation, non-perturbative transitions can produce bubbles
of metastable vacuum. These bubbles expand exponentially during inflation to
super-horizon size, and later collapse into black holes when the expansion of
the universe is decelerating. Estimating the rate for these transitions during
a time-dependent slow-roll phase requires the development of new techniques.
Our results show that in a broad class of models, the inflationary fine-tuning
that gives rise to small density fluctuations causes these bubbles to appear
only during a time interval that is short compared to the inflationary Hubble
time. As a result, despite the fact that the final mass of the black hole is
exponentially sensitive to the moment bubbles form during inflation, the
resulting primordial black hole mass spectrum can be nearly monochromatic. If
the transition occurs near the middle of inflation, the mass can fall in the
"asteroid" range $10^{17}-10^{22}$ g in which all known observations are
compatible with black holes comprising 100% of dark matter. | Matthew Kleban, Cameron E. Norton | 2023-10-15T18:00:02Z | http://arxiv.org/abs/2310.09898v2 | # Monochromatic Mass Spectrum of Primordial Black Holes
###### Abstract
During slow-roll inflation, non-perturbative transitions can produce bubbles of metastable vacuum. These bubbles expand exponentially during inflation to super-horizon size, and later collapse into black holes when the expansion of the universe is decelerating. Estimating the rate for these transitions during a time-dependent slow-roll phase requires the development of new techniques. Our results show that in a broad class of models, the inflationary fine-tuning that gives rise to small density fluctuations causes these bubbles to appear only during a time interval that is short compared to the inflationary Hubble time. As a result, despite the fact that the final mass of the black hole is exponentially sensitive to the moment bubbles form during inflation, the resulting primordial black hole mass spectrum can be nearly monochromatic. If the transition occurs near the middle of inflation, the mass can fall in the "asteroid" range \(10^{17}-10^{22}\)g in which all known observations are compatible with black holes comprising 100% of dark matter.
## 1 Introduction
One of the greatest mysteries in modern physics is the nature of dark matter. Despite accounting for over 25% of the energy density of our universe, its nature and origin remains uncertain. Decades of searches for weakly-interacting massive particles have so far failed to find any conclusive signal [1]. Axion dark matter is another interesting possibility, as these are well-motivated beyond the Standard Model particles [2] and can simultaneously account for other features of our universe [3]. A different possibility is that dark matter is composed of primordial black holes (PBHs) that formed in the early universe [4, 5]. PBHs could be formed from Standard Model matter and radiation without any exotic particle that survives until today, although the primordial mechanism that produced them in sufficient abundance likely requires new physics. Current observational constraints leave a 5 order of magnitude window of "asteroid" mass black holes in which a monochromatic spectrum of PBHs could account for all of dark matter [4, 6].
A challenge for PBH dark matter is identifying a plausible mechanism for producing them in the correct abundance and with mass distribution consistent with observational constraints. Various PBH production mechanisms have been proposed in literature (see [6] for a review). These include a peak in the spectrum of primordial density fluctuations [7], first-order phase transitions [8], second-order phase transitions [9], crossovers [10, 11], and collapse of cosmic strings [12].
Here we present a variation of the mechanism first proposed in [13] and followed up in [14] and [15], in which the quantum nucleation and expansion of vacuum bubbles or domain walls during inflation creates regions that collapse later in the evolution of the universe, forming black holes. Because the vacuum bubbles form _during_ inflation, their size and abundance at the end of inflation - and the masses and quantity of black holes that eventually form - is exponentially sensitive to when during inflation the transition occurred. These previous works assumed that the rate of production of these objects was approximately constant during inflation, and hence predicted a very broad, power-law spectrum of PBH masses. With some parameter choices these could account for dark matter and evade observational constraints due to the relatively low abundance in any given mass window [16].
By contrast, in our analysis the transition takes place over a fraction of an inflationary efold. Despite the exponential sensitivity of the mass on the production time, the resulting PBH mass spectrum is very close to a delta function, and the abundance can be such that PBHs constitute all dark matter. This is consistent with observational constraints if the peak of the mass distribution lies in the "asteroid" mass range.
A delta function-like mass distribution was also found in [17], though this paper focused on domain walls with time-varying tension instead of vacuum bubbles.
Another variation was studied in [18], where a qualitatively different potential led to an approximately constant tunneling rate and a broad PBH mass spectrum. Very recently, an interesting alternative mechanism to produce PBHs from single-field inflation that gives a fairly narrow mass distribution was studied in [19].
## 2 Bubble nucleation during inflation
Bubbles or membranes can be produced by non-perturbative quantum effects, typically because they represent an energetically preferred state. During inflation, these defects will expand as if in flat space until they reach the inflationary horizon size, after which they will be caught in the pseudo-de Sitter expansion and grow exponentially (regardless of their tension or the evolving energy difference with the surrounding inflationary phase). If less than one such defect is produced per Hubble volume per Hubble time, the transition will not percolate because the space expands fast enough to dilute the number density exponentially.
For definiteness, we will assume that during inflation two scalar fields have a potential similar to the one shown in Figure 1.1 We assume that it contains a "valley" with a small slope (vertical direction), separated from a "lake" by a barrier. Slow-roll inflation is driven by the field (labelled \(\phi\)) rolling vertically down the valley in the figure. We further assume that the vacuum energy in the lake \(\rho_{b}\) is lower than the inflationary energy density at any time during inflation, but higher than the energy density in the radiation dominated phase well after inflation ends when the vacuum bubbles re-enter the horizon.2
Footnote 1: Considering a model that produces domain walls rather than vacuum bubbles would change our conclusions quantitatively, but not qualitatively.
Footnote 2: Again, these assumptions are not necessary and could be relaxed without changing the qualitative results.
Potentials of this form will generally admit a single unique instanton; a trajectory in field space that solves the field and gravity equations in Euclidean signature, connecting a point near the lake minimum to a point on the other side of the barrier in the valley via a domain wall of radius \(R\).3 Tunneling from the valley to the lake (or from the lake to the valley) corresponds to the formation of a bubble. In the approximation that the bubble has walls thin compared to its radius, the fields inside and outside the wall will take values corresponding to the two end points of the trajectory. One might therefore expect that as the inflaton rolls down the valley, the transition occurs only when the inflaton wave functional assigns non-negligible probability to configurations where the inflaton field equals the valley end of the instanton trajectory in a region of size \(R\).
Footnote 3: It is possible for multiple discrete instantons to exist, for instance if there are several local minima, or for a continuum of transitions to exist when there is a symmetry or the potential for the second field is independent of that of the first.
### Approximating the tunneling rate
Tunneling between two local minima in the presence of gravity occurs via the Colemann-DeLuccia instanton [20]. In our case the initial state is an inflating universe, and so we are interested in tunneling from slow-roll down the valley into the lake. This presents an interesting complication that has not been previously studied (to our knowledge), since the initial state is time-dependent and the field is not near a minimum. As just mentioned, for potentials of this form there is generally a unique instanton solution that connects a specific point in the valley to the lake. If the potential were symmetric around \(\phi=0\) in Fig. 1, the tunneling trajectory would lie exactly along the line \(\phi=0\). Slow roll breaks this symmetry
slightly, but - absent special features or other symmetries - there is still only a single instanton trajectory. The instanton solution for an explicit two-field potential similar to this was constructed numerically and studied in [21], where the authors were interested in tunneling from lake to valley.
We expect the tunneling rate to be maximized at the time during inflation when the vacuum expectation value (vev) of the inflaton coincides with the end point of the instanton trajectory in the valley. Away from this time, when the field vev differs by \(\Delta\phi\) from the end point of the trajectory, the rate should be suppressed relative to this maximum. It is important for our analysis to understand how quickly the tunneling rate goes to zero away from this maximum. To our knowledge this question has not been considered previously. We develop two approaches to this question that are described below.
Numerically estimating the rate: One way to estimate the tunneling rate for \(\Delta\phi\neq 0\) is to deform the potential slightly to create an infinitesimal potential minimum at the point in the valley from which we want to estimate the rate. Typically this deformation creates a new instanton trajectory that connects the new minimum to the lake.4 Given a specific potential, we can calculate the new instanton's action and trajectory numerically (for instance with the "anybubble" package [22]). Because the deformation can be made arbitrarily small we expect this method to give a good approximation to the actual tunneling rate.
Figure 1: Schematic of the potential for a two-field inflationary model. The inflaton \(\phi\) is the vertical direction and slow-roll inflation can occur as \(\phi\) evolves downwards along a gently sloping “valley”. The valley is separated from a local minimum (a “lake”) by an interval in the second scalar field \(\chi\). The line indicates the (unique) instanton trajectory that connects the lake to the valley. This instanton describes the formation of a bubble of radius \(R\), inside of which the fields take values corresponding to the endpoint in the lake, and outside of which take values corresponding to the endpoint in the valley. At a time during inflation when the vacuum expectation value \(\langle\phi\rangle\) of the inflaton is displaced from the valley end-point of the instanton trajectory by a distance \(\Delta\phi\), the bubble can still appear but with probability exponentially suppressed in \((\Delta\phi)^{2}\).
Analytically estimating the rate: The numerical technique is not very informative in understanding how the rate depends in general on \(\Delta\phi\), the vertical field-space distance from the end point of the instanton. Instead, consider a less refined "two step" analytic estimate. Starting at a given point in the valley, we approximate the actual tunneling trajectory by a first step where the field fluctuates vertically the distance \(\Delta\phi\) to the end point of the instanton trajectory, and a second step where it tunnels across the barrier via the standard Coleman-de Luccia (CdL) instanton. The action for the full transition can be approximated as the sum of the actions for these two steps.
In order to create the initial conditions for the CdL instanton, the first fluctuation must occur in a region that is at least of size \(R\), the radius of the CdL bubble. To approximate the probability for the field to fluctuate down the valley, we calculate the variance of the field \(\phi\) averaged over a sphere of radius \(R\)
\[\phi_{R}(\vec{x},t)\equiv\frac{1}{V_{3}}\int_{R}d^{3}y\phi(\vec{x}+\vec{y},t), \tag{1}\]
where \(V_{3}=\frac{4}{3}\pi R^{3}\). The averaged field is approximately a gaussian random variable because the inflaton is a nearly free field. The probability is therefore given by
\[\exp\biggl{\{}-\frac{(\Delta\phi)^{2}}{2\sigma^{2}}\biggr{\}}, \tag{2}\]
where \(\sigma\) is the variance of the field. In our case, a simple calculation gives
\[\sigma^{2}=\langle\phi_{R}(\vec{x},t)\phi_{R}(\vec{x},t)\rangle=\frac{9}{32 \pi^{2}}\frac{1}{R^{2}}, \tag{3}\]
so the dependence of the probability on \(\Delta\phi\) is
\[\exp\biggl{\{}-\frac{16\pi^{2}}{9}(\Delta\phi)^{2}R^{2}\biggr{\}}. \tag{4}\]
Indeed, the dependence on \(R^{2}\) and \((\Delta\phi)^{2}\) essentially follows from dimensional analysis.5 We expect the coefficient in the exponent to be larger than \(16\pi^{2}/9\), because our calculation was for the _average_ field in a sphere of size \(R\) to fluctuate \(\Delta\phi\), whereas what we actually want is the more restrictive condition that the field fluctuates homogeneously everywhere in the region, in order to set up the correct initial conditions for the transition.
Footnote 5: The Euclidean action for such a fluctuation is \(S\sim\int d^{4}x(\partial\phi)^{2}\sim\int d^{4}x(\Delta\phi/R)^{2}\sim c( \Delta\phi)^{2}R^{2}\), with associated probability \(\sim e^{-S}\).
We compare this approximation to numerical results for a specific potential using the deformation technique mentioned above (Fig. 2). We find that the quadratic scaling of \((\Delta\phi)^{2}\) in the exponent provides an excellent fit to the numerical estimates of the action made using the deformation technique, but that (as expected) the best-fit coefficient is larger than the one in our analytic calculation.
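A least-squares fit of this kind takes only a few lines: given the numerically estimated actions at a set of displacements, the coefficient \(c\) follows from a linear fit in the variable \((\Delta\phi)^{2}R^{2}\). The sketch below uses placeholder arrays in place of the actual instanton actions.

```python
# Sketch of the quadratic fit of Fig. 2: S ~ S0 + c * dphi^2 * R^2.
# dphi and S_num are placeholders for the numerical instanton actions.
import numpy as np

R = 4.0                                  # bubble radius at dphi = 0 (placeholder)
dphi = np.linspace(-0.05, 0.05, 11)      # displacements from the end point
S_num = 120.0 + 94.0 * (dphi * R) ** 2   # placeholder "numerical" actions

c_fit, S0_fit = np.polyfit((dphi * R) ** 2, S_num, 1)
print(c_fit, S0_fit)  # compare c_fit with the analytic 16*pi^2/9 ~ 17.5
```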
### Slow roll
During inflation, an interval in the inflaton field \(\Delta\phi\) is related to an interval in the number of inflationary efolds \(\Delta N\) by
\[\Delta N=H_{i}\Delta t=\frac{H_{i}\Delta\phi}{\dot{\phi}}=\frac{H_{i}^{2}}{ \dot{\phi}}\frac{\Delta\phi}{H_{i}}=2\pi\Delta_{\mathcal{R}}\frac{\Delta\phi} {H_{i}}. \tag{5}\]
Here \(\Delta_{\mathcal{R}}^{2}\) is the power spectrum of the gauge-invariant curvature perturbation, with \(\Delta_{\mathcal{R}}^{2}\approx 10^{-9}\) during the observable period of inflation, and we approximate the Hubble rate during inflation \(H_{i}\) as constant. From our analysis in the previous subsection, we know that the transition rate is unsuppressed relative to the maximum rate when
\[\Delta\phi\lesssim\frac{1}{\sqrt{c}R}\approx\frac{1}{10R}, \tag{6}\]
where we have set the variance \(\sigma^{2}=1/(2cR^{2})\) with \(c\) a dimensionless coefficient. The last approximation uses our numerical estimate that found \(c\approx 94\) (Fig. 2). Putting this together gives
\[\Delta N\lesssim\frac{\pi}{5}\frac{\Delta_{\cal R}}{H_{i}R}. \tag{7}\]
This shows that the range of efolds during which the transition takes place is proportional to the curvature perturbation divided by the radius \(R\) of the bubble measured in units of the inflationary Hubble length. During or shortly after the observable part of inflation, the numerator \(\Delta_{\cal R}\approx 10^{-4.5}\) and the spectral tilt is small and red (so that \(\Delta_{\cal R}\) decreases slowly with time). However, we will see that for PBHs in the asteroid mass range the transitions must take place after this phase of inflation, where we do not have a direct measurement of \(\Delta_{\cal R}\) (and cannot be certain that the extrapolation indicated by the observed red tilt is valid).
The denominator \(H_{i}R\) can range from \({\cal O}(1)\) when the bubble radius is comparable to the inflationary scale, to less than one if the bubble is smaller. For high-scale inflation with \(H_{i}\approx 10^{-5}M_{\rm Pl}\) we must have \(H_{i}R\gtrsim 10^{-5}\) for the bubble to be larger than the Planck length, but for lower-scale inflationary models it is possible for \(H_{i}R\) to be tuned smaller. However, if the dynamics governing the formation of the bubble are governed by some feature of inflation it is natural for \(H_{i}R\) to be not much less than one.
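As a rough numerical check of Eq. (7), under the assumptions quoted above (\(\Delta_{\cal R}\approx 10^{-4.5}\) and a Hubble-scale bubble, \(H_{i}R\sim 1\)):

```python
# Back-of-the-envelope evaluation of Eq. (7); the input values are the
# assumptions quoted in the text, not measured quantities.
import numpy as np

Delta_R = 10 ** -4.5   # curvature perturbation amplitude (observable scales)
HiR = 1.0              # bubble radius in inflationary Hubble units
dN = (np.pi / 5) * Delta_R / HiR
print(f"Delta N < {dN:.1e} efolds")   # ~2e-5, a tiny fraction of an efold
```

This makes explicit why the transition is confined to a very short interval of inflation.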
Following the "step" \(\Delta\phi\) that creates the initial conditions for the instanton, the field must tunnel through the potential barrier. The tunneling rate \(\lambda\) scales as
\[\lambda\sim e^{-B}, \tag{8}\]
where \(B=S_{I}-S_{V}\) is the action of the instanton minus the action for the inflaton to stay in the valley. Being exponentially sensitive, this rate can vary enormously. In the next section we will calculate how large \(\lambda\) should be to give the observed dark matter abundance.
Figure 2: Action for tunneling from a point displaced from the valley endpoint of the instanton by a distance \(\Delta\phi\). _Blue points:_ numerical approximation \(S_{\rm num}(\Delta\phi)\) calculated using [22] from a potential of the form in Fig. 1, with a small deformation added to create a local minimum when \(\Delta\phi\neq 0\). (The asymmetry in \(\Delta\phi\) due to the slope of the valley is too small to be visible.) _Black line:_ Semi-analytic approximation explained in the text, \(S\approx S_{0}+c(\Delta\phi)^{2}R^{2}\), where \(S_{0}=S_{\rm num}(\Delta\phi=0)\), \(R\) is the radius of the bubble at \(\Delta\phi=0\), and \(c\approx 94>16\pi^{2}/9\) is the best fit to the data points shown in blue.
Vacuum bubbles and black holes
After a vacuum bubble nucleates, pressure due to the lower energy state on the inside causes it to expand to horizon size, after which de Sitter expansion inflates it exponentially to superhorizon scales. After inflation it continues to grow, comoving with the expansion of the universe, until it eventually re-enters the horizon. We are assuming that at this horizon-crossing time the vacuum energy inside the bubble is higher than the energy density of the radiation-dominated universe around it. In that case the bubble begins to collapse once it re-enters the horizon. The resulting black hole has a mass that is exponentially sensitive to the time during inflation at which the bubble appeared [13, 15].
Neglecting an \(\mathcal{O}(1)\) correction, the bubble's radius during inflation is approximately
\[R(t)\approx H_{i}^{-1}\exp[H_{i}(t-t_{n})], \tag{9}\]
where \(t_{n}\) is the bubble nucleation time. We denote the time of the end of inflation by \(t_{i}\), and the radius \(R(t_{i})=R_{i}\). Letting \(N_{n}=H_{i}(t_{i}-t_{n})\) be the number of efolds before reheating that the bubble nucleates,
\[R_{i}\approx H_{i}^{-1}\exp\{N_{n}\} \tag{10}\]
After reheating, any initial velocity of the bubble walls rapidly decreases due to the pressure of the fluid around the bubble, so that the bubble expands at rest with respect to the cosmic comoving frame until it re-enters the horizon at time \(t_{H}\) and subsequently collapses. The mass of the resulting black hole can be approximated as
\[GM\sim t_{H}, \tag{11}\]
where \(t_{H}\) is the horizon crossing time of the co-moving scale corresponding to \(R_{i}\).6 We can find this by setting the Hubble radius equal to the radius of the bubble after inflation,
Footnote 6: This applies for the case of a super-critical bubble, which we explain a bit later in this discussion.
\[\frac{1}{H(t_{H})}=\left(\frac{a(t_{H})}{a(t_{i})}\right)R_{i} \tag{12}\]
Assuming radiation domination, \(a(t)\sim\sqrt{t}\), \(t_{H}\) is given by
\[t_{H}\sim\frac{R_{i}^{2}}{t_{i}} \tag{13}\]
and the mass of the black hole as a function of \(N_{n}\) is
\[M\sim\frac{1}{GH_{i}}\exp\{2N_{n}\}. \tag{14}\]
(Had we considered domain walls instead [13], the mass would scale as \(M\sim e^{4N_{n}}\).) Once the wall re-enters the horizon it will rapidly collapse into a black hole due to its wall tension and the fact (in the two-field model of the last section) that the vacuum inside has higher energy than the universe outside. A black hole of mass \(M\) has a Schwarzschild radius (with \(c=1\))
\[R=2GM=1.5\times 10^{-10}\mathrm{m}\left(\frac{M}{10^{20}\mathrm{g}}\right) \tag{15}\]
The corresponding horizon crossing time is
\[t_{H}=R/2=2.5\times 10^{-19}\mathrm{s}\left(\frac{M}{10^{20}\mathrm{g}}\right), \tag{16}\]
long before matter/radiation equality. The number of efolds before the end of inflation is
\[N_{n}\approx 24+\frac{1}{2}\ln\left(\frac{M}{10^{20}\mathrm{g}}\right)+\frac{1} {2}\ln\left(\frac{H_{i}}{10^{15}\mathrm{GeV}}\right). \tag{17}\]
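Equation (17) is straightforward to evaluate; a short sketch for the mass range of interest:

```python
# Evaluation of Eq. (17): efolds before the end of inflation at which
# the bubble nucleates, for PBH mass M (grams) and Hubble rate Hi (GeV).
import numpy as np

def N_n(M_grams, Hi_GeV):
    return 24 + 0.5 * np.log(M_grams / 1e20) + 0.5 * np.log(Hi_GeV / 1e15)

print(N_n(1e20, 1e15))   # 24.0 efolds for an asteroid-mass PBH
print(N_n(1e17, 1e15))   # ~20.5 efolds at the lower edge of the window
```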
If the bubble expands for a time longer than its internal inflationary Hubble time before it recollapses, it will continue to inflate forever inside, forming a baby universe connected to ours through a (non-traversable) wormhole. There is a critical mass \(M_{\rm cr}\), above which a baby universe is formed and below which an ordinary black hole is formed. Following [13], the expression for the critical mass can be estimated as
\[GM_{\rm cr}\sim{\rm Min}\{t_{\sigma},t_{b}\}, \tag{18}\]
where \(t_{\sigma},t_{b}\) are the gravitational times associated with the wall tension and vacuum energy inside the bubble. We assume \(GM_{\rm cr}\sim t_{b}=H_{b}^{-1}\equiv\sqrt{\frac{3}{8\pi\rho_{b}}}\) where \(\rho_{b}\) is the vacuum energy in the lake, so that for a bubble to be supercritical it is sufficient that
\[\rho_{b}>\frac{3}{8\pi G^{3}M^{2}}=\left(3.3\times 10^{6}\ {\rm GeV}\right)^{4} \left(\frac{M}{10^{20}{\rm g}}\right)^{-2}, \tag{19}\]
well below the energy density in typical inflation models. Hence, these hydrogen-atom sized PBHs contain baby universes that undergo their own internal exponential expansion and some form of decay or reheating, since the lake is at best meta-stable to further transitions.
After the black hole forms it will accrete. It was shown in [14] that in this regime (supercritical and forming during radiation domination) the effect is to increase the mass by approximately a factor of two. Given the homogeneity of the early universe \(\Delta_{\cal R}\ll 1\), we do not expect this accretion to affect the width of the black hole mass distribution significantly.
In the matter dominated phase, some fraction of PBHs will accrete substantially due to repeated or extended encounters with stars. We will return to this briefly in Section 4.
We now estimate the spread in the mass distribution caused by the uncertainty in the nucleation time, \(H\Delta t=\Delta N\) (7). From (14) it follows immediately that
\[\frac{\Delta M}{M}\approx 2\Delta N. \tag{20}\]
As we have seen, it is natural for \(\Delta N\ll 1\), and so the mass distribution can be very close to monochromatic.
### Tunneling rate
We can now estimate the tunneling rate per Hubble volume \(\lambda\) necessary to produce the observed abundance of dark matter. The number density of vacuum bubbles at the time they were produced is \(\lambda\Delta tH_{i}^{3}\). The number density of these bubbles and the PBHs that form from them will dilute like the volume, and so at reheating the number density of bubbles is \(\lambda\Delta tH_{i}^{3}e^{-3N_{n}}\), and so the mass density of PBH dark matter today is
\[\rho_{\rm PBH}\approx\lambda\Delta tMH_{i}^{3}e^{-3N_{n}}(T_{0}/T_{\rm rh})^{ 3}. \tag{21}\]
Equating this to the measured density of dark matter today and using Eq. (14) to express \(N_{n}\) as a function of the PBH mass \(M\) gives
\[\lambda\Delta t\approx 1.3\times 10^{-16}\left(\frac{T_{\rm rh}}{\sqrt{H_{i}M_{ \rm Pl}}}\right)^{3}\left(\frac{M}{10^{20}{\rm g}}\right)^{1/2}, \tag{22}\]
where \(M_{\rm Pl}\) is the Planck mass. This quantity is the fraction of inflationary Hubble volumes in which a bubble nucleates during the transition. The maximum possible reheat temperature is \(T_{\rm rh,max}\approx\sqrt{H_{i}M_{\rm Pl}}\), so this is a small number (and as a result, collisions between bubbles are very rare).
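For orientation, Eq. (22) can be evaluated directly; the sketch below treats the ratio \(T_{\rm rh}/\sqrt{H_{i}M_{\rm Pl}}\) as a free input.

```python
# Evaluation of Eq. (22): fraction of inflationary Hubble volumes in
# which a bubble nucleates, for the observed dark matter abundance.
def lam_dt(Trh_ratio, M_grams):
    # Trh_ratio = T_rh / sqrt(H_i * M_Pl) <= 1 (maximal reheating = 1)
    return 1.3e-16 * Trh_ratio ** 3 * (M_grams / 1e20) ** 0.5

print(lam_dt(1.0, 1e20))    # ~1.3e-16 for instantaneous reheating
print(lam_dt(0.01, 1e20))   # a lower reheat temperature reduces the
                            # required nucleation fraction
```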
A consistency check is that in order for the instanton approximation to be valid, the action for the tunneling must satisfy \(S\gg 1\). The rate \(\lambda=\alpha e^{-S}\). We expect the pre-factor \(\alpha\) to satisfy \(\alpha\gtrsim H_{i}\), so we have
\[S\gtrsim 25+3\ln\frac{\sqrt{H_{i}M_{\rm P}}}{T_{\rm rh}}+\ln\frac{H\Delta t}{10^ {-5}}-\frac{1}{2}\ln\frac{M}{10^{20}{\rm g}}\gg 1. \tag{23}\]
Constraints and detection
Currently, there are no observations constraining PBHs in the "asteroid" range \(10^{17}\)g \(<M<10^{23}\)g from constituting 100% of dark matter [4, 6]. Possible approaches to detecting this form of dark matter include lensing, accumulation of one or more PBHs inside stars that affect stellar evolution over a long period of time, and stellar explosions triggered by a transit of the PBH though a star.
The lower bound on the mass range arises from Hawking radiation, which for lighter PBHs produces gamma rays and energetic electron/positron pairs7[24, 25, 26]. These bounds could potentially be improved with future MeV telescopes or 21 cm observations [27, 28]. A study of microlensing of stars in M31 provides the upper bound on the mass range [29]. The microscopic size of the PBHs in this range relative to optical wavelengths, combined with finite-source size effects, makes it very difficult to push these constraints to lower PBH mass. Lensing of gamma ray bursts is of interest because their cosmological distance and the much shorter wavelength of the electromagnetic radiation make lensing by PBHs in this mass range stronger. However, there are no current constraints from this effect [30].
Footnote 7: Reference [23] points out that if PBHs were close to extremal, the lower bound on the mass would be reduced.
There are two potential sources of constraints from dynamical capture of PBHs by stars: stellar survival and observations of stellar destruction. If a PBH passes through a star, gravitational friction heats the star and reduces the kinetic energy of the PBH. This can lead to a bound orbit where the PBH repeatedly passes through the star, until it eventually settles into the center. Once inside the star, the PBH will gradually accrete matter, eventually growing to the point that it strongly affects stellar evolution.
The analysis in Ref. [30] shows that survival of stars (the observation that many stars have not been destroyed by PBHs) does not provide constraints on PBHs in the allowed mass window because captures in galaxies are rare even under optimistic assumptions. A constraint would arise only if globular clusters have high dark matter densities and low PBH velocity dispersion. Observational signatures from rare stellar destruction events present a more promising avenue for future constraints. More modeling is needed in order to better understand the evolution and destruction of the star after the PBH is captured and accretes a substantial amount of mass (see [31] for some recent work on neutron stars).
Observations of white dwarfs in certain mass ranges might have implications for PBH dark matter, as PBHs could trigger an explosion via heating even in the case that they are not dynamically captured by the white dwarf, and most white dwarfs will experience at least one such transit. While an initial analysis indicated this might occur for a certain range of PBHs [32], a more detailed treatment shows that this process does not provide any constraints in this mass range [30].
## 5 Conclusion
It is remarkable that dark matter could be composed of microscopic black holes produced in the earliest phase of the universe.8 The scenario considered here requires physics not far removed from what is already needed to drive inflation, without any new forces or particle species at accessible energies. Unfortunately, this also makes it difficult to test.
Footnote 8: It is perhaps even more remarkable that each such atom-sized black hole contains a large universe that underwent its own period of inflationary expansion and potentially reheating and further evolution.
There are a number of ways in which this analysis could be extended or generalized. One direction would be to study potentials in which the transitions occur not at one time during inflation, but at a discrete series of times. This can be natural in inflationary models involving a pseudo-periodic potential such as unwinding inflation [33, 34, 35]. We assumed that the vacuum energy in the "lake" was well below the energy density at the end of inflation. It would be interesting to analyze the situation where the energy density instead falls below that of the lake before inflation ends. In this work we focused on the "asteroid" mass range because of the lack of constraints on PBHs in this range. There is another range where the constraints are weak - the so-called stupendously large BHs [36, 37]. These black holes are larger than galactic halos and so cannot constitute all of dark matter, but evidently current constraints allow them to form an \(\mathcal{O}(1)\) fraction. The mechanism explored here can produce black holes with nearly
any mass, including in this range. It would also be of interest to extend our treatment of tunneling from slow roll to a more general analysis of tunneling from time-dependent initial states.
Acknowledgements:We would like to thank Yacine Ali-Haimoud, Heling Deng, Sergei Dubovsky, Oliver Janssen, Mehrdad Mirbabayi, and Giovanni Villadoro for useful discussions. Our work is supported by NSF grants PHY-1820814 and PHY-2112839.
|
2308.01056 | Impact of the noise knowledge uncertainty for the science exploitation
of cosmological and astrophysical stochastic gravitational wave background
with LISA | This paper investigates the impact of a lack of knowledge of the instrumental
noise on the characterisation of stochastic gravitational wave backgrounds with
the Laser Interferometer Space Antenna (LISA). We focus on constraints on
modelled backgrounds that represent the possible backgrounds from the mergers
of binary black holes of stellar origin, from primordial black hole generation,
from non-standard inflation, and from sound wave production during cosmic fluid
phase transitions. We use splines to model generic, slowly varying,
uncertainties in the auto and cross-spectral densities of the LISA time delay
interferometry channels. We find that allowing for noise knowledge uncertainty
in this way leads to one to two orders of magnitude degradation in our ability
to constrain stochastic backgrounds, and a corresponding increase in the
background energy density required for a confident detection. We also find that
to avoid this degradation, the LISA noise would have to be known at the
sub-percent level, which is unlikely to be achievable in practice. | Martina Muratore, Jonathan Gair, Lorenzo Speri | 2023-08-02T10:07:16Z | http://arxiv.org/abs/2308.01056v1 | Impact of the noise knowledge uncertainty for the science exploitation of cosmological and astrophysical stochastic gravitational wave background with LISA
###### Abstract
This paper investigates the impact of a lack of knowledge of the instrumental noise on the characterisation of stochastic gravitational wave backgrounds with the Laser Interferometer Space Antenna (LISA). We focus on constraints on modelled backgrounds that represent the possible backgrounds from the mergers of binary black holes of stellar origin, from primordial black hole generation, from non-standard inflation, and from sound wave production during cosmic fluid phase transitions. We use splines to model generic, slowly varying, uncertainties in the auto and cross-spectral densities of the LISA time delay interferometry channels. We find that allowing for noise knowledge uncertainty in this way leads to one to two orders of magnitude degradation in our ability to constrain stochastic backgrounds, and a corresponding increase in the background energy density required for a confident detection. We also find that to avoid this degradation, the LISA noise would have to be known at the sub-percent level, which is unlikely to be achievable in practice.
## I Introduction
The Laser Interferometer Space Antenna (LISA) is part of the European Space Agency Cosmic Vision program and is due to be launched in the mid-2030s. LISA will be the first observatory in space to study gravitational waves (GWs) at mHz frequencies. It will consist of a constellation of three satellites forming a quasi equilateral triangle and continuously exchanging laser beams [4]. LISA is expected to observe a large variety of sources, such as galactic binaries (GBs), massive black hole binaries (MBHBs) [29], stellar-origin black hole binaries (SOBHB) [31, 22, 43], extreme-mass-ratio inspirals (EMRIs) [8] and possibly stochastic backgrounds arising from astrophysical and cosmological processes [16].
When considering the science that can be done with LISA, it is typical to assume a known model for the instrumental noise in the detector data channels. However, these noise levels will not be known in practice. This is also true for ground-based gravitational wave detectors, but in that context spectral density estimation is easier because signals are rare and short-lived, allowing the spectral density to be estimated from data in the vicinity of observed events. LISA signals, by contrast, are typically long lived, which means that noise and signal properties must be simultaneously estimated by fitting a suitable model. While such methods and models are still under development, it is expected that the characterisation of deterministic signals will not be significantly affected by lack of instrumental noise knowledge (see Appendix B.4). The case of stochastic GW backgrounds (SGWB) is different, however, as these are intrinsically of the same character as the stochastic instrumental noise. Searches for stochastic signals in ground-based interferometers rely on the cross-correlation of data from independent detectors [3]. This would only be possible if there is another space-based interferometer in operation concurrently, such as Taiji [40], but this is not certain at the moment. Here we explore the challenge of distinguishing between the stochastic instrumental noise and a stochastic GW signal.
One approach is to use a model for the instrumental noise. It is possible to derive analytical models that describe how different known noise sources propagate into the LISA data stream. However, not all noise sources will be known in advance, so we will not be able to strictly rely on the models, as we will not be able to perform full tests and directly measure the noise. In the LISA Pathfinder mission [5] it was seen that at low frequency the analytical models could not fully explain the measured noise. Therefore, when we plan for LISA data analysis, we must be prepared for uncertainty in the noise models.
The goal of this paper is to assess the impact of lacking a noise model for LISA in parameter estimation of SGWBs. We consider four different models of cosmological and astrophysical SGWBs: a power law to model signals from stellar origin binary black hole inspirals, a Gaussian bump to model a background from primordial black hole generation, a power law with running to model a background from non-standard inflation and finally a first order phase transition model, representing GW production from sound waves in the cosmic fluid generated by colliding phase transition bubbles [16]. For each model, we will take a reference amplitude that corresponds to a relatively low signal to noise ratio (SNR) that is close to the boundary for detection. These are the backgrounds that will be most difficult to distinguish from instrumental noise. We will also explore what happens as the background energy density is varied in each model.
We represent our lack of knowledge of the LISA instrumental noise by multiplying a set of reference auto- and cross-spectral densities with cubic splines. For
the reference spectral densities we use the noise model from [4], which includes only the so-called secondary noises [33], the test mass (TM) acceleration and optical metrology noise (OMS). This noise model assumes that the laser noise [6], clock noise [46], tilt to length coupling [37; 23] have been suppressed by the initial noise reduction pipeline [32; 24]. To represent the fact that we will have some amount of information from noise modelling before launch, we place a Gaussian prior on the weights of the cubic spline. By varying the Gaussian variance we explore the effect of having more or less knowledge of the noise.
Several previous studies have tackled the problem of detecting a SGWB with LISA and distinguishing it from the noise, but these have used different methods than the one we employ in this paper. In [20] it was shown that SGWB reconstruction was possible for generic SGWB models, if the LISA instrumental noise can be represented by just two parameters, representing the level of TM and OMS noise, assumed equal for all arms of the interferometer. The authors of [2; 26] allowed the TM and OMS noises to differ from arm to arm, but still assumed that these noises had a known spectral shape as a function of frequency. In [10] an arbitrary noise shape was allowed, described by a spline, but using a simplified noise model for the single link. Finally, [33] derived an upper bound on the detectable SGWB amplitude when being agnostic on both the signal and noise shape and discussed limitations of the utility of the null channel for distinguishing between instrumental noise and a stochastic GW background.
The paper is organised as follows: in Section II we introduce the general data model that we use in the analysis and we describe the Fisher matrix formalism that will be used for this work. In Section II.3 we describe the spline model that we use to represent the uncertainties in the power spectral density (PSD) and cross-spectral density (CSD) of the instrumental noise. In Section II.4 we give the analytical noise model for a single LISA link that is used as the reference model, and the corresponding PSDs and CSDs for the time delay interferometry (TDI) channels, \(A\), \(E\) and \(\zeta\). In Section II.5 we describe how a stochastic signal appears in the three TDI channels and their cross-correlations, while in Section II.6 we describe the models for the cosmological and astrophysical SGWBs that we use in this paper. In Section III.1 we show how well we can estimate the parameters of the different SGWB models when we allow for uncertainty in our knowledge of the instrumental noise. For each model, we compare the precision of parameter estimation to that when noise knowledge is perfect and show how the parameter precisions vary as a function of the background energy density, \(\Omega\), evaluated at 1mHz. In Section III.2 we show how the results change as we vary our priors uncertainty on the instrumental noise. We conclude our results in Section III.3 by showing how well the signal, noise and galactic foreground can be reconstructed for a power law SGWB background. Section IV summarises our conclusions and future perspectives.
## II Methods
### Likelihood
We assume that the output of a gravitational wave detector, \(s(t)\), is expressed as a linear combination of a signal, \(h(t|\vec{\mu})\), determined by a finite set of (unknown) parameters, \(\vec{\mu}\), and instrumental noise, \(n(t)\). If we ignore the presence of calibration errors [42], the content of a single data stream, i.e., one output channel from one detector, can be written in the frequency domain as:
\[\tilde{s}(f)=\tilde{h}(f|\vec{\mu})+\tilde{n}(f), \tag{1}\]
where the tilde indicates the Fourier transform. The likelihood for the observed data can be written as \(p(\tilde{s}(f)|\vec{\mu})=p(\tilde{n}(f)=\tilde{s}(f)-\tilde{h}(f|\vec{\mu}))\). In a gravitational wave context it is usual to further assume that the instrumental noise follows a Gaussian distribution characterized by a one-sided PSD, \(S_{n}(f)\), defined such that
\[\mathbb{E}[\tilde{n}^{*}(f)\tilde{n}(f^{\prime})]=\frac{1}{2}S_{n}(f)\delta(f -f^{\prime}), \tag{2}\]
for \(f\), \(f^{\prime}>0\), where the expectation value \(\mathbb{E}\) is taken over the data generating process. The delta function in the previous equation implies that different frequencies are not correlated.
In reality the noise model is not known perfectly and could vary from the assumption above in a number of ways. For example, the PSD might have a different shape from the reference one [9], the probability distribution of the noise might not be Gaussian, or the noise might not be stationary, leading to correlations between frequencies.
In this work we will continue to assume that the noise is Gaussian and stationary, but we will allow the power spectral density to vary using a parametrized spectral density, \(S_{n}(f)\to S_{n}(f|\vec{\lambda})\), described by parameters \(\vec{\lambda}\). Then the log-likelihood depends on both sets of parameters, \(\vec{\mu}\) and \(\vec{\lambda}\), and can be written as:
\[l:=\ln p(\tilde{s}|\vec{\mu},\vec{\lambda})=-\sum_{k=1}^{n}\ln\left[2\pi\frac{S_{n}(f_{k}|\vec{\lambda})}{4\Delta f}\right]-\frac{1}{2}\sum_{k=1}^{n}\frac{|\tilde{s}(f_{k})-\tilde{h}(f_{k}|\vec{\mu})|^{2}}{\frac{1}{4\Delta f}S_{n}(f_{k}|\vec{\lambda})} \tag{3}\]
where the sum is performed over \(n\) frequencies and \(\tilde{n}(f_{k})=\tilde{s}(f_{k})-\tilde{h}(f_{k}|\vec{\mu})\) are the discrete Fourier components, at frequencies \(f_{k}=k\Delta f\), of the data minus the signal model. The frequency bin width, \(\Delta f\), is related to the total observation time as \(T=1/\Delta f\). The first term does
not include the \(1/2\) factor because the real and imaginary parts of \(\tilde{n}(f_{k})\) are independent random variables. This follows from the fact that, for a real time series, \(\tilde{n}^{*}(f)=\tilde{n}(-f)\), which combined with Eq. (2) means that \(\langle\tilde{n}(f)\tilde{n}(f^{\prime})\rangle=0\) for \(f\), \(f^{\prime}>0\). This allows Eq. (2) to be rewritten as
\[\langle\Re[\tilde{n}(f_{k})]^{2}\rangle=\langle\Im[\tilde{n}(f_{k})]^{2} \rangle=\frac{\langle|\tilde{n}(f_{k})|^{2}\rangle}{2}=\frac{S_{n}(f_{k}| \vec{\lambda})}{4\Delta f} \tag{5}\]
for a discrete set of frequencies.
Stochastic gravitational wave backgrounds are not deterministic signals and can be treated on the same footing as the instrumental noise by defining the total variance at frequency \(f_{k}\) as
\[S_{t}(f_{k}|\vec{\theta},\vec{\lambda})=S_{\rm GW}(f_{k}|\vec{\theta})+S_{n}( f_{k}|\vec{\lambda})\,. \tag{6}\]
If we assume that all the deterministic sources have been correctly subtracted from the datastream \(s\) the log-likelihood becomes:
\[l(\vec{\theta},\vec{\lambda}) =-\sum_{k=1}^{n}\ln\left[T\pi\frac{S_{t}(f_{k}|\vec{\theta},\vec {\lambda})}{2}\right]\] \[-\frac{1}{2}\sum_{k=1}^{n}\frac{|\tilde{s}(f_{k})|^{2}}{\frac{T}{ 4}S_{t}(f_{k}|\vec{\theta},\vec{\lambda})} \tag{7}\]
The derivation of this likelihood can be found in Appendix A.
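For concreteness, a minimal numerical sketch of this single-channel likelihood (array names are ours, not from the LISA pipeline):

```python
# Whittle log-likelihood of Eq. (7) on a discrete frequency grid.
import numpy as np

def log_like(s_tilde, S_gw, S_n, T):
    """s_tilde: complex Fourier data; S_gw, S_n: one-sided spectra
    evaluated on the same grid; T: observation time."""
    S_t = S_gw + S_n                     # total variance, Eq. (6)
    return np.sum(-np.log(T * np.pi * S_t / 2.0)
                  - 0.5 * np.abs(s_tilde) ** 2 / (T * S_t / 4.0))
```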
### Fisher matrix
We are interested in understanding the impact of noise knowledge uncertainties on the parameter measurement precision of stochastic gravitational wave backgrounds. The Fisher information matrix provides a lower bound on the covariance of an unbiased estimator of the model parameters and provides a good approximation to the precision of parameter estimation in the high signal-to-noise ratio limit. We will therefore use it to quantify our ability to measure both the noise parameters, \(\vec{\lambda}\), and the background parameters, \(\vec{\theta}\).
In a general context the Fisher matrix is defined by
\[\Gamma_{ij}=\mathbb{E}\left[\frac{\partial l}{\partial v^{i}}\frac{\partial l }{\partial v^{j}}\right]=-\mathbb{E}\left[\frac{\partial^{2}l}{\partial v^{i }\partial v^{j}}\right] \tag{8}\]
where the expectation value \(\mathbb{E}\) is taken over the data generating process, and the partial derivatives are taken with respect to the parameters, \(\vec{v}\), on which the likelihood depends. We want to compute the Fisher matrix on the extended parameter space \(\vec{v}=\{\vec{\theta},\vec{\lambda}\}\).
It can be shown that the expectation value of the product of the derivatives of the log-likelihood with respect to the deterministic and stochastic parameters is zero. Therefore, at the level of the Fisher matrix approximation, the estimation of the noise and deterministic signal parameters is independent (see Appendix B.4).
For SGWBs, we can compute the Fisher matrix in the continuous domain as:
\[\Gamma_{ij}=T\int_{0}^{\infty}(\Sigma^{-1})_{lr}\frac{\partial\Sigma^{rp}}{ \partial v^{i}}(\Sigma^{-1})_{pm}\frac{\partial\Sigma^{ml}}{\partial v^{j}}\, \mathrm{df}\,. \tag{9}\]
with
\[\Sigma(f|\vec{v}=\{\vec{\theta},\vec{\lambda}\})=\frac{1}{2}\begin{pmatrix}S_{t}^{AA}&S_{t}^{AE}&S_{t}^{A\zeta}\\ S_{t}^{AE*}&S_{t}^{EE}&S_{t}^{E\zeta}\\ S_{t}^{A\zeta*}&S_{t}^{E\zeta*}&S_{t}^{\zeta\zeta}\end{pmatrix}\,. \tag{10}\]
where each element of the matrix can be written as a sum of an instrumental noise component and a stochastic gravitational wave component as indicated in Eq. 6. A complete derivation of this formula can be found in Appendix B.
Prior knowledge on the noise can be incorporated by imposing a prior on the instrumental parameters, \(\vec{\lambda}\). When doing numerical marginalisation any prior can be imposed, but in the Fisher matrix formalism it is easiest to work with a Gaussian prior [42]. The posterior covariance is then given by the inverse of the modified Fisher matrix:
\[\Gamma=\begin{pmatrix}\Gamma^{\theta\theta}&\Gamma^{\theta\lambda}\\ (\Gamma^{\theta\lambda})^{T}&\Gamma^{\lambda\lambda}+\Theta^{\lambda\lambda} \end{pmatrix} \tag{11}\]
with normal prior on the instrumental noise parameters with zero mean and covariance given by \((\Theta^{\lambda\lambda})^{-1}\). The diagonal elements of the inverse of this matrix provide estimates for the precision with which the corresponding parameters can be measured. The estimated precision of measurement of the SGWB parameters accounting for noise model uncertainty is thus given by the diagonal elements of the matrix:
\[\sigma_{\theta}=\sqrt{\text{diag}[(\Gamma^{\theta\theta}-\Gamma^{\theta\lambda} (\Gamma^{\lambda\lambda}+\Theta^{\lambda\lambda})^{-1}(\Gamma^{\theta\lambda} )^{T})^{-1}]} \tag{12}\]
Note that in the limit in which the instrumental noise parameters are perfectly known \(\Theta\rightarrow\infty\) and the measurement precision of the SGWB parameters is given by \(\sigma_{\theta}=\sqrt{\text{diag}[(\Gamma^{\theta\theta})^{-1}]}\).
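As a concrete illustration of Eqs. (11)-(12), the following Python sketch (the function name and interface are ours, not code from this paper) computes the SGWB parameter uncertainties by marginalising the Fisher matrix over the instrumental noise parameters under a Gaussian prior:

```python
import numpy as np

def sgwb_uncertainties(gamma_tt, gamma_tl, gamma_ll, prior_var=None):
    """Eq. (12): precision of the SGWB parameters after marginalising
    over the noise parameters, with an optional Gaussian prior.

    gamma_tt : (n_t, n_t) SGWB block of the Fisher matrix
    gamma_tl : (n_t, n_l) SGWB-noise cross block
    gamma_ll : (n_l, n_l) instrumental noise block
    prior_var: variance of the independent zero-mean Gaussian prior
               on each noise parameter (None = no prior information)
    """
    theta = np.zeros_like(gamma_ll)
    if prior_var is not None:
        theta = np.eye(gamma_ll.shape[0]) / prior_var  # Theta^{ll}
    # Schur complement of the noise block in the full Fisher matrix
    marg = gamma_tt - gamma_tl @ np.linalg.solve(gamma_ll + theta, gamma_tl.T)
    return np.sqrt(np.diag(np.linalg.inv(marg)))

# perfect noise knowledge (Theta -> infinity) reduces to
# np.sqrt(np.diag(np.linalg.inv(gamma_tt)))
```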
### Modeling noise knowledge uncertainties
To model noise uncertainties, we allow the PSD and CSD of the instrumental channels to deviate from the design specification. However, we assume that such deviations vary smoothly over a relatively wide range of frequency and model the noise uncertainties as fractional deviations from the design PSD/CSD that are described by natural cubic splines. We write the PSD of the instrumental noise in each channel as:
\[S_{n}(f|\lambda)=S_{\rm des}(f)\,10^{C(f|\vec{\lambda})}\,, \tag{13}\]
where \(C(f|\vec{\lambda})\) is a natural cubic spline. The parameters \(\vec{\lambda}\) specify the values of the spline at the knots, labelled by \(i\). In this study we use knots evenly spaced in \(\log_{10}(f)\) between \(\log_{10}(f)=-4\) and \(\log_{10}(f)=0\), and we fix the number of knots to 13. Noise curves corresponding to this model, with the weights at each knot drawn randomly from a uniform distribution \(\lambda_{i}\sim U[-1,1]\), are shown in Fig. 1. We note that this choice of prior means we are allowing approximately one order of magnitude variation in the PSD. When we evaluate the Fisher matrix we will always do so at the reference point where the weights of the spline are zero, i.e., where the PSD is equal to the reference value shown in Fig. 2.
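As an illustration, Eq. (13) takes only a few lines to implement; the following Python sketch (function and variable names are ours, using SciPy's natural cubic spline) evaluates a perturbed PSD for one random draw of the knot weights:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# 13 knots equally spaced in log10(f) between 1e-4 Hz and 1 Hz
LOG_KNOTS = np.linspace(-4.0, 0.0, 13)

def psd_with_deviation(f, s_design, weights):
    """Eq. (13): design PSD scaled by 10**C(f), where C is a natural
    cubic spline through the (log10 f_knot, weight) pairs."""
    spline = CubicSpline(LOG_KNOTS, weights, bc_type='natural')
    return s_design * 10.0 ** spline(np.log10(f))

# one random realisation of the deviations, as in Fig. 1
rng = np.random.default_rng(0)
weights = rng.uniform(-1.0, 1.0, size=LOG_KNOTS.size)
```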
We cannot follow the same procedure for specifying the CSD, because at low frequency the reference model is smaller than the PSD by one, or even two to three, orders of magnitude (see Fig. 3 and Fig. 2). It was shown in [26] that when the LISA response is constructed allowing for unequal noises in the different laser links, the CSD can be much larger and become comparable to the PSD. Since our goal is to allow the splines to vary in such a way as to mimic unexpected and unmodelled noise components with respect to the simplified scenario (three unequal but fixed-length arms), we model the CSD as1:
Footnote 1: In principle our model does not force the matrix to be positive definite. We are forcing the reference spectral density matrix to be positive definite, but in principle we could have a factor of 10 variation in the CSD while the PSD is unchanged. It doesn’t matter for the Fisher matrix because this is a local approximation and we are evaluating it at a point where the matrix is positive definite. The CSD at the central point is 0.1 of its maximum value, so in an open set around that point it will be positive definite and thus all derivatives are well defined. The conclusion is that the model used here is fair for what we want to demonstrate but would not be a suitable model to use when analysing the data.
\[S_{n}^{ij}(f|\vec{\lambda})=\sqrt{S_{\text{des},i}(f)S_{\text{des},j}(f)}\,\sigma_{R}\,10^{C(f|\vec{\lambda}_{R})}+i\,\sqrt{S_{\text{des},i}(f)S_{\text{des},j}(f)}\,\sigma_{I}\,10^{C(f|\vec{\lambda}_{I})} \tag{14}\]
where we fix \(\sigma_{R}=0.1\) and \(\sigma_{I}=0.8\,\sigma_{R}\), and \(\vec{\lambda}_{R}\) and \(\vec{\lambda}_{I}\) are independent sets of spline weights for the real and imaginary parts. We do not expect varying the relationship between \(\sigma_{I}\) and \(\sigma_{R}\) to significantly change the conclusions, although we fix \(\sigma_{I}\) slightly smaller than \(\sigma_{R}\) in accordance with Fig. 3, where at lower frequencies the imaginary components are about one order of magnitude smaller than the real components. The indices \(i\) and \(j\) run over the number of detectors or channels, with \(i\neq j\). The additional factors \(\sigma_{I}\) and \(\sigma_{R}\) are used to limit the amplitude, and allow us to model the CSD as a sum of splines times the geometric mean of the square-root PSDs. Using the (scaled) geometric mean of the PSDs as a reference for the CSD, rather than the CSD of the reference equal-noise configuration, allows for much larger CSD variations. This is consistent with the results presented in [26].
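A matching sketch for the CSD model of Eq. (14), assuming independent spline weights for the real and imaginary parts (helper names are ours):

```python
import numpy as np
from scipy.interpolate import CubicSpline

LOG_KNOTS = np.linspace(-4.0, 0.0, 13)
SIGMA_R = 0.1
SIGMA_I = 0.8 * SIGMA_R

def csd_model(f, s_des_i, s_des_j, w_real, w_imag):
    """Eq. (14): CSD between channels i and j, modelled as the scaled
    geometric mean of the design PSDs times spline deviations, with
    independent knot weights for the real and imaginary parts."""
    gm = np.sqrt(s_des_i * s_des_j)
    c_r = CubicSpline(LOG_KNOTS, w_real, bc_type='natural')(np.log10(f))
    c_i = CubicSpline(LOG_KNOTS, w_imag, bc_type='natural')(np.log10(f))
    return gm * (SIGMA_R * 10.0 ** c_r + 1j * SIGMA_I * 10.0 ** c_i)
```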
It is important to state that our model is not completely general since we are imposing a certain amount of smoothness in the PSD variation, and consequently in the CSD, when we specify the number and spacing of the knots. Thus we are not able to fit for all possible noise scenarios. In particular, this model does not attempt to reproduce the zeros of the TDI transfer functions faithfully. This will become important above \(f\sim 0.05\)Hz, but this should not affect our results as the SGWBs we consider do not have much power at those frequencies, as can be seen from Fig. 6. Other models could be considered, for example by imposing the spline variations at the level of the noise in individual laser links (generalising the approach taken in [10]), before applying the TDI transfer function. This should be explored in the future, but this would increase the number of parameters further so we might expect there to be additional degeneracies, which would lead to practical difficulties in fitting noise and signal simultaneously. However, for the purpose of the current study, the model we use is adequate to represent generic, slowly varying, fluctuations in the PSD and CSD.
### Noise at the TDI input and outputs
Here, we present the instrumental noise model used to define the reference PSD in this work. Among the different noise sources for LISA, laser noise is dominant, and must be reduced by eight orders of magnitude by applying a post-processing technique called time delay interferometry (TDI) [45]. TDI synthesises an equal arm-length interferometer by appropriately delaying and combining the interferometric measurements in many different ways to form TDI channels free from laser noise. The standard second-generation TDI channels (unequal and time-varying arm-lengths) are the Michelson interferometer channels, \(X\), \(Y\) and \(Z\), from which we form the more GW-sensitive channels, \(A\) and \(E\) [39]. Together with the GW-sensitive channels we consider a null channel, the \(\zeta\) channel [35], which is less sensitive to GWs and can in principle be used as a noise monitor.

Figure 1: Deviations from the design power spectral density obtained using the cubic spline model, with \(\lambda_{i}\sim U[-1,1]\), and with knots equally spaced between \(\log_{10}(f)=-4\) and \(\log_{10}(f)=0\). The plot shows the ratio of the total PSD to the design PSD for different parameter realisations \(\lambda_{i}\).
In the current work we will assume that laser noise has already been reduced, and thus we can work directly with first-generation TDI [27], allowing us to consider three unequal but fixed arm-lengths. This choice should not significantly affect the conclusions of the analysis, but simplifies the transfer function derivations.
We also assume that all known calibrated and measured instrumental noise sources have been subtracted, such as the optical tilt to length cross-coupling to spacecraft motion and clock noise ([23], [18] and [25]). The remaining noises, for which we have neither a measurement for coherent subtraction nor a high precision a priori model [33], fall into two broad categories, the acceleration noise of each individual test-mass (TM) and an overall optical metrology system (OMS) noise term for each single link measurement (see [36] for the case of multiple OMS noise terms).
We represent the acceleration noise PSD of a single TM by \(S_{g_{ij}}\). To compare the OMS and TM contributions, we can convert the acceleration noise of a single TM to an equivalent displacement, whose PSD is given by
\[S_{g_{ij}}^{\rm disp}=S_{g_{ij}}/(2\pi f)^{4} \tag{15}\]
where \(f\) is the Fourier frequency. We denote the time series associated with this displacement as \(x_{ij}^{g}(t)\). We also define the PSD of the OMS noise as \(S_{\rm{OMS}_{ij}}(f)\) and we denote the time series of the single OMS as \(x_{ij}^{m}(t)\). All TDI combinations can be constructed from a combination of single link TM to TM measurements. Such measurements are represented by the intermediary variables [24]:
\[\tilde{\eta}_{ij}^{N}(\omega)=\tilde{x}(\omega)_{ji}^{g}e^{-i\omega L_{ji}}+ \tilde{x}(\omega)_{ij}^{g}+\tilde{x}(\omega)_{ij}^{m}, \tag{16}\]
where \(\tilde{\eta}_{ij}^{N}(\omega)\) is the noise in a single link measurement, the first index \(i\) indicates the spacecraft where the measurement is performed at time \(t\), and the second index \(j\) indicates the distant spacecraft from which light was emitted at time \(t-\tau\), and \(\omega=2\pi f\). Equation 16 implies that each single link measurement contains TM noise terms from the distant and local spacecraft, such that the TM noise appearing in the measurements on the two ends of the same arm is correlated (between the two links):
\[\langle\tilde{\eta}_{ij}^{N}(\omega)\tilde{\eta}_{ji}^{N}(\omega)\rangle\neq 0 \tag{17}\]
From these measurements it is possible to build any TDI channels [6; 34] and therefore the corresponding first generation orthogonal channels \(A_{1}\) and \(E_{1}\)[27] that will be used in this work:
\[\mathrm{A_{1}}=\frac{\mathrm{Z_{1}-X_{1}}}{\sqrt{2}}\;,\qquad\mathrm{E_{1}}= \frac{\mathrm{X_{1}-2Y_{1}+Z_{1}}}{\sqrt{6}}. \tag{18}\]
The \(X_{1}\) variable is defined as:
\[\mathrm{X_{1}}= (D_{13}D_{31}-1)(\eta_{12}+D_{12}\eta_{21})\] \[+(1-D_{12}D_{21})(\eta_{13}+D_{13}\eta_{31})\;, \tag{19}\]
where each delay \(D_{ij}\) corresponds to a constant time shift, and thus in the frequency domain to \(\mathcal{F}\{D_{ij}\}=e^{-i\omega L_{ij}}\). \(Y_{1}\) and \(Z_{1}\) are obtained from \(X_{1}\) by cyclic permutations of the three satellites. The fully symmetric channel, \(\zeta_{1}\), is defined by:
\[\zeta_{1}=D_{12}(\eta_{31}-\eta_{32})+D_{23}(\eta_{12}-\eta_{13})+D_{31}(\eta _{23}-\eta_{21})\;. \tag{20}\]
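These combinations are simple to assemble in the frequency domain, where each delay becomes a phase factor; the following sketch (our own helper, not part of the paper's companion notebook) builds \(A_{1}\), \(E_{1}\) and \(\zeta_{1}\) from given single-link spectra:

```python
import numpy as np

def tdi_first_gen(f, eta, L):
    """First-generation TDI in the frequency domain (Eqs. 18-20).

    f   : frequency array [Hz]
    eta : dict mapping link labels '12', '21', ... to complex spectra
    L   : dict of fixed arm lengths in seconds (L['ij'] == L['ji'])
    """
    links = ['12', '21', '13', '31', '23', '32']
    D = {k: np.exp(-2j * np.pi * f * L[k]) for k in links}

    def michelson(i, j, k):
        # Eq. (19), built on spacecraft i with arms towards j and k
        return ((D[i + k] * D[k + i] - 1) * (eta[i + j] + D[i + j] * eta[j + i])
                + (1 - D[i + j] * D[j + i]) * (eta[i + k] + D[i + k] * eta[k + i]))

    X1 = michelson('1', '2', '3')
    Y1 = michelson('2', '3', '1')   # cyclic permutations of X1
    Z1 = michelson('3', '1', '2')
    A1 = (Z1 - X1) / np.sqrt(2)
    E1 = (X1 - 2 * Y1 + Z1) / np.sqrt(6)
    zeta1 = (D['12'] * (eta['31'] - eta['32'])
             + D['23'] * (eta['12'] - eta['13'])
             + D['31'] * (eta['23'] - eta['21']))
    return A1, E1, zeta1
```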
The assumed model for the TM acceleration noise is
\[\mathrm{E}\left\langle\tilde{x}_{ij}^{g}(f)\tilde{x}_{lm}^{g*}(f ^{\prime})\right\rangle=\frac{1}{2}\delta_{il}\delta_{jm}\delta(f-f^{\prime}) S_{g_{ij}}(f)\] \[S_{g_{ij}}(f)=\left(3\times 10^{-15}\;\frac{\mathrm{m}}{\mathrm{s}^{ 2}\;\sqrt{\mathrm{Hz}}}\right)^{2} \tag{21}\] \[\times\left(1+\left(\frac{0.4\;\mathrm{mHz}}{f}\right)^{2}\right) \left(1+\left(\frac{f}{8\;\mathrm{mHz}}\right)^{4}\right),\]
and for the OMS noise
\[\mathrm{E}\left\langle\tilde{x}_{ij}^{m}(f)\tilde{x}_{lm}^{m*}( f^{\prime})\right\rangle=\frac{1}{2}\delta_{il}\delta_{jm}\delta(f-f^{\prime})S_{ \mathrm{Oms}_{ij}}(f)\] \[S_{\mathrm{Oms}_{ij}}(f)=\left(15\;\mathrm{pm}/\sqrt{\mathrm{Hz }}\right)^{2}\times\left(1+\left(\frac{2\;\mathrm{mHz}}{f}\right)^{4}\right), \tag{22}\]
This model assumes that individual noise components are uncorrelated. In reality the test masses in the same satellite will share environmental noise, such as temperature fluctuations, so this assumption might not hold. However, this model serves as a reference, and any variation is captured by the flexible spline model presented previously.
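For reference, Eqs. (21), (22) and (15) translate directly into code; a minimal sketch (function names are ours):

```python
import numpy as np

def s_tm_acc(f):
    """Single test-mass acceleration noise PSD, Eq. (21) [m^2 s^-4 / Hz]."""
    a = 3e-15  # m s^-2 / sqrt(Hz)
    return a**2 * (1 + (4e-4 / f)**2) * (1 + (f / 8e-3)**4)

def s_tm_disp(f):
    """The same noise as an equivalent displacement, Eq. (15) [m^2 / Hz]."""
    return s_tm_acc(f) / (2 * np.pi * f)**4

def s_oms(f):
    """Single-link optical metrology system noise PSD, Eq. (22) [m^2 / Hz]."""
    p = 15e-12  # m / sqrt(Hz)
    return p**2 * (1 + (2e-3 / f)**4)
```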
To derive the PSDs and CSDs of the TDI channels \(A\), \(E\) and \(\zeta\), as illustrated in [35], one can express the arm-lengths \(L_{ij}\) in terms of the breathing modes of the LISA triangle, \(\delta_{a}\) and \(\delta_{b}\) as:
\[L_{12}(t) =L\left[1+\frac{1}{2}\left(\sqrt{3}\,\delta_{a}-\delta_{b}\right) \right]\;, \tag{23a}\] \[L_{23}(t) =L\left(1+\delta_{b}\right)\;,\] (23b) \[L_{31}(t) =L\left[1-\frac{1}{2}\left(\sqrt{3}\,\delta_{a}+\delta_{b}\right) \right]\;. \tag{23c}\]
The full expressions are rather long, so we give them in a separate \(Mathematica\) notebook file (noise-analytical-model) and plot them in Fig. 2. The amplitude spectral densities (ASDs) are computed in terms of \(\delta_{a}\) and \(\delta_{b}\). Indeed, while \(L=(L_{12}+L_{23}+L_{31})/3\approx 8.3\,\mathrm{s}\) is the average arm-length, the small parameters \(\delta_{a}\) and \(\delta_{b}\) are typically \(\sim 0.005-0.009\) for realistic ESA orbits [14]. The case \(\delta_{a}=\delta_{b}=0\) corresponds to the equal-arm LISA scenario.
We assume that the six TMs have the same PSD, and likewise the six OMS noise terms; however, this equal-noise assumption suppresses the contribution to the CSD. It was shown in [26] that if the levels of the noises differ by 20% then the CSD can be 10% of the PSD at low frequencies and several tens of percent at high frequency. This motivates the particular choice of flexible CSD model that we introduced in Eq. (14), illustrated in Fig. 4.
### Signal transfer function
The detector response to a stochastic background can be computed by expressing a GW signal as a superposition of plane waves, and by assuming that the LISA constellation has static arm lengths and is in a flat background spacetime. Following [26], it is possible to show that the component of the single link measurement \(\eta_{ij}(t)\) due to a GW is given by:
\[\eta_{ij}^{GW}(t) =i\int_{-\infty}^{\infty}\left\{\frac{f}{f_{ij}}\mathrm{e}^{2\pi if (t-L_{ij})}\right.\] \[\left.\int\left[\mathrm{e}^{-2\pi ifk\cdot\vec{x}_{i}}\sum_{ \mathcal{A}}\xi_{ij}^{\mathcal{A}}(f,\hat{k})\tilde{h}_{\mathcal{A}}(f,\hat{ k})\right]\mathrm{d}\Omega_{\hat{k}}\right\}\mathrm{d}f, \tag{24}\]
where \(i\) is the imaginary unit, \(f_{ij}=(2\pi L_{ij})^{-1}\), \(\vec{x}_{i}\) denotes the position of satellite \(i\), \(\mathcal{A}=+,\,\times\) denotes the GW polarization, \(\tilde{h}_{\mathcal{A}}(f,\hat{k})\) is the Fourier transform of the GW signal, \(f\) is the GW frequency, \(\hat{k}\) is the outward unit vector in the direction of the incoming GW, and \(\mathrm{d}\Omega_{\hat{k}}\) is the infinitesimal solid angle.
The above expression quantifies the fractional frequency shift due to a superposition of plane waves coming from different directions \(\hat{k}\).
The term \(\xi_{ij}^{\mathcal{A}}\) projects the incoming wave with polarization \(\mathcal{A}\) onto the detector, and its functional dependence is given by:
\[\xi_{ij}^{\mathcal{A}}\left(f,\hat{k}\right)=e^{-2\pi if\hat{k}\cdot\mathcal{ L}_{ij}}\mathcal{M}_{ij}(f,\hat{k})\;\mathcal{G}^{\mathcal{A}}(\hat{k},\hat{l}_{ ij})\,, \tag{25}\]
where
\[\mathcal{M}_{ij}(f,\hat{k})\equiv\mathrm{e}^{\pi ifL_{ij}(1+\hat{k}\cdot\hat{ l}_{ij})}\;\mathrm{sinc}\left(\pi fL_{ij}(1+\hat{k}\cdot\hat{l}_{ij})\right) \tag{26}\]
and

\[\mathcal{G}^{\mathcal{A}}(\hat{k},\hat{l}_{ij})\equiv\frac{\hat{l}_{ij}^{a}\hat{l}_{ij}^{b}}{2}e_{ab}^{\mathcal{A}}(\hat{k})\,, \tag{27}\]

where \(\hat{l}_{ij}=(\vec{x}_{j}-\vec{x}_{i})/|\vec{x}_{j}-\vec{x}_{i}|\) is a unit vector pointing from spacecraft \(i\) to \(j\) and \(e_{ab}^{\mathcal{A}}(\hat{k})\) denotes the GW polarization tensors.

Figure 2: Reference amplitude spectral density for the time delay interferometry channels A, E and \(\zeta\), considering only test mass acceleration and optical metrology noise and assuming a constellation of three fixed unequal arm-lengths.

Figure 3: Real and imaginary parts of the reference square root of the cross spectral density for the time delay interferometry channel pairs AE, E\(\zeta\) and A\(\zeta\), considering only TM acceleration and OMS noise and assuming a constellation of three fixed unequal arm-lengths.

Figure 4: Real and imaginary parts of the spline-modulated square-root cross spectral density for the time delay interferometry channel pairs AE, E\(\zeta\) and A\(\zeta\), considering test mass acceleration and optical metrology noise and assuming a constellation of three fixed unequal arm-lengths.
For a homogeneous, isotropic and non-chiral stochastic background, the GW signal is only specified statistically:
\[\langle\tilde{h}_{\mathcal{A}}(f,\hat{k})\,\tilde{h}_{B}^{*}(f^{\prime},\hat{ k}^{\prime})\rangle=\delta(f-f^{\prime})\delta(\hat{k}-\hat{k}^{\prime}) \delta_{\mathcal{A}B}\frac{P_{h}^{\mathcal{A}B}(f)}{16\pi}\]
and
\[\langle\tilde{h}_{\mathcal{A}}(f,\hat{k})\,\tilde{h}_{B}(f^{\prime},\hat{k}^{ \prime})\rangle=0\,.\]
Homogeneity and isotropy imply that \(P_{h}^{\mathcal{A}B}(f)\) is diagonal, whereas non-chirality implies \(P_{h}^{\times\times}=P_{h}^{++}\), so that we can define \(P_{h}:=\sum_{\mathcal{A}}P_{h}^{\mathcal{A}\mathcal{A}}\).
We characterise the response of the individual links to a stochastic background statistically
\[\langle\tilde{\eta}_{ij}^{GW}\,\tilde{\eta}_{mn}^{GW}\rangle=\frac{1}{2}S_{ ij,mn}^{\eta,\mathrm{GW}}(f)\delta(f-f^{\prime})\,, \tag{28}\]
where spectral densities for the link measurements are given by
\[S_{ij,mn}^{\eta,\mathrm{GW}}(f)=\frac{f^{2}}{f_{ij}f_{mn}}e^{-2\pi if(L_{ij}- L_{mn})}\sum_{\mathcal{A}}P_{h}^{\mathcal{A}\mathcal{A}}(f)\ \Upsilon_{ij,mn}^{\mathcal{A}}(f)\, \tag{29}\]
with:
\[\Upsilon_{ij,mn}^{\mathcal{A}}(f)=\int\frac{\mathrm{d}\Omega_{\hat{k}}}{4\pi} \ \mathrm{e}^{-2\pi if\hat{k}\cdot(\vec{x}_{i}-\vec{x}_{m})}\ \xi_{ij}^{\mathcal{A}}(f,\hat{k})\,\xi_{mn}^{\mathcal{A}}(f,\hat{k})^{ \ast}\,. \tag{30}\]
The power spectral densities of the signal in the TDI variables described in sec. II.4 can then be computed from
\[\langle\tilde{U}(f)\tilde{V}^{*}(f^{\prime})\rangle =\frac{1}{2}S_{UV}^{\mathrm{GW}}(f)\delta(f-f^{\prime})\] \[S_{UV}^{\mathrm{GW}}(f) =\sum_{ij,mn\in\mathcal{I}}c_{ij}^{U}(f)c_{mn}^{V*}(f)S_{ij,mn}^{ \eta,\mathrm{GW}}(f)\, \tag{31}\]
where \(\tilde{U}\) and \(\tilde{V}\) denote any two TDI variables, which in our case are TDI A, E and \(\zeta\), and \(\mathcal{I}=\{12,13,23,21,31,32\}\) denotes the set of pairs of indices that define the six inter-satellite links. The coefficients \(c_{ij/mn}^{U}\) map the single-link measurements onto the TDI variable \(U\). Refer to the Mathematica code for the computation of such coefficients (noise-analytical-model).
Note that, since each polarization of the SGWB contributes equally to the background, i.e. \(P_{h}^{\times\times}=P_{h}^{++}\), we can rewrite Eq. (31) as a product of the SGWB spectral density \(P_{h}(f)\) and a transfer function \(\mathcal{T}^{GW}(f)\) which takes into account the LISA detector response, i.e. \(S_{UV}^{\mathrm{GW}}(f)=\mathcal{T}^{GW}(f)P_{h}(f)\). \(S_{UV}^{\mathrm{GW}}(f)\) corresponds to the first term on the right-hand side of Eq. (6). The transfer functions for the three TDI channels and their cross correlations are shown in Fig. 5.
### SGWB Signal models
There are a large variety of models for stochastic gravitational wave backgrounds that might manifest in the LISA band [1]. In this work we focus on four models, each described by its energy density, \(h^{2}\Omega_{GW}\) [15], as a function of some parameters, \(\vec{\theta}\):
* **Power law**: \[h^{2}\Omega_{GW}(f)\approx A\left(\frac{f}{f_{p}}\right)^{n},\] (32) where \(f_{p}\) is the pivot frequency, defined as the geometric mean of the LISA frequency interval (\(10^{-4}\) Hz, 0.1 Hz), \(f_{p}=3\) mHz. The model parameters are the log-amplitude, \(A\), and slope, \(n\) [7; 30]. We use reference values of \(n=2/3\) and \(A=7.87\times 10^{-13}\), representing an SGWB from stellar-origin black hole binaries with energy density at 1 mHz of \(h^{2}\Omega_{GW}(1\,\mathrm{mHz})=3.78\times 10^{-13}\). This value was chosen to be compatible with LIGO-VIRGO-KAGRA constraints [7].

Figure 5: Upper panel: gravitational wave transfer functions \(\mathcal{T}^{GW}(f)\) of the three time delay interferometry channels A, E and \(\zeta\), assuming a constellation of three fixed unequal arm-lengths; lower panel: real and imaginary components of the gravitational wave transfer functions \(\mathcal{T}^{GW}(f)\) of the channel pairs AE, E\(\zeta\) and A\(\zeta\).
* **Gaussian bump**: \[h^{2}\Omega_{GW}=Ae^{-\frac{1}{2\sigma^{2}}\ln^{2}\left(\frac{f}{f_{p}}\right)},\] (33) where \(f_{p}\) is the pivot frequency as before. The model parameters are the log-amplitude, \(A\), and width, \(\sigma\). We use reference values of \(A=10^{-12.48}\) and \(\sigma=0.3\), whose energy density at 1 mHz is \(h^{2}\Omega_{GW}(1\,\mathrm{mHz})=4.05\times 10^{-16}\). This signal is chosen as a simple way to mimic one which might arise from particle production taking place for a limited number of e-folds during inflation (as, for instance, required by some models of primordial BH generation); see e.g. [11; 12; 21].
* **Power law with running:** \[h^{2}\Omega_{GW}=A\left(\frac{f}{f_{p}}\right)^{n+\alpha\ln(\frac{f}{f_{p}})},\] (34) where \(f_{p}\) is the pivot frequency as before. The model parameters are the log-amplitude, \(A\), the slope, \(n\), and the running index, \(\alpha\). We use reference values of \(A=10^{-12.65}\), \(n=1\) and \(\alpha=-0.1\). This signal is motivated by non-standard inflationary models. For example, gravitational wave generation can be enhanced by sustained particle production during inflation, leading to a power law stochastic GW background, which would deviate from a simple power law at higher frequency when back-reaction kicks in (see e.g. [13]). The energy density at 1 mHz is \(h^{2}\Omega_{GW}(1\,\mathrm{mHz})=6.61\times 10^{-14}\).
* **First Order Phase Transition**: \[h^{2}\Omega_{GW}(f)=h^{2}\Omega_{p}\left(\frac{f}{f_{p}}\right)^{3}\left( \frac{7}{4+3\big{(}\frac{f}{f_{p}}\big{)}^{2}}\right)^{n},\] (35) where \(f_{p}=2\cdot 10^{-4}\) Hz (note this is different to the reference frequency in the previous models). The model parameters are the energy density, \(h^{2}\Omega_{p}\), and spectral index, \(n\). We use reference values of \(A\equiv h^{2}\Omega_{p}=10^{-10}\) and \(n=7/2\) whose energy density at 1mHz is \(h^{2}\Omega_{GW}(1mHz)=2.59\times 10^{-12}\). This signal is motivated by the production of sound waves in the cosmic fluid from colliding phase transition bubbles [17; 28].
In our analysis, we also include the contribution to the spectral density from the foreground of galactic binaries (GB). We use the following model for the foreground [1]:
* **Foreground of Galactic Binaries**: \[S_{GB}(f)= A_{GB}\left(\frac{f}{\mathrm{Hz}}\right)^{-\frac{7}{3}}e^{-(f/f_{1})^{\alpha}}\times\frac{1}{2}\left[1+\tanh\left(\frac{f_{knee}-f}{f_{2}}\right)\right]\] (36) with \[f_{1}=10^{a_{1}\log_{10}(T)+b_{1}}\,,\qquad f_{knee}=10^{a_{k}\log_{10}(T)+b_{k}}\,,\] where \(T\) is the observation time, and setting \(A_{GB}=1.15\times 10^{-44}\); \(\alpha=1.56\); \(a_{1}=-0.15\); \(b_{1}=-2.72\); \(a_{k}=-0.37\); \(b_{k}=-2.49\); \(f_{2}=6.7\times 10^{-4}\) Hz. When considering the background in conjunction with other SGWBs we allow the amplitude to vary, but keep the other parameters fixed.
The relation between the energy density \(\Omega_{GW}\) and the stochastic GW background power spectral density \(P_{h}(f)\) is given by [15]:
\[\Omega_{GW}(f)=\frac{4\pi^{2}}{3H_{0}^{2}}f^{3}P_{h}(f), \tag{37}\]
where \(H_{0}\) is the Hubble constant fixed to be 67.8 km/s/Mpc, as a consequence \(h=0.678\). The conversion between the energy density \(\Omega_{GW}(f)\) and gravitational power spectral density \(P_{h}(f)\) used to compute Eq.(31) is then [15]:
\[P_{h}(f)=7.98\times 10^{-37}\left(\frac{Hz}{f}\right)^{3}h^{2}\Omega_{GW}(f)\frac{1}{Hz}\,. \tag{38}\]
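The four templates above and the conversion of Eq. (38) are straightforward to code up; the following Python sketch (our own helper functions, using the reference values quoted above) is one possibility:

```python
import numpy as np

F_PIVOT = 3e-3  # Hz, geometric mean of the LISA band (1e-4 Hz, 0.1 Hz)

def omega_power_law(f, A=7.87e-13, n=2/3, fp=F_PIVOT):
    return A * (f / fp)**n                                    # Eq. (32)

def omega_gaussian_bump(f, A=10**-12.48, sigma=0.3, fp=F_PIVOT):
    return A * np.exp(-np.log(f / fp)**2 / (2 * sigma**2))    # Eq. (33)

def omega_running(f, A=10**-12.65, n=1.0, alpha=-0.1, fp=F_PIVOT):
    return A * (f / fp)**(n + alpha * np.log(f / fp))         # Eq. (34)

def omega_fopt(f, A=1e-10, n=3.5, fp=2e-4):
    return A * (f / fp)**3 * (7 / (4 + 3 * (f / fp)**2))**n   # Eq. (35)

def p_h(f, h2_omega):
    """Eq. (38): convert h^2 Omega_GW(f) to the strain PSD P_h(f) [1/Hz]."""
    return 7.98e-37 * h2_omega / f**3
```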
We report in Fig. 6 the ASDs of the four SGWB models together with the ASD of the reference instrumental noise in TDI channel A [9].
We also provide the computation of the SNRs of these different backgrounds in the TDI channel A using the following formula [44]:
\[SNR_{A}=\sqrt{T}\left[\int_{0}^{\infty}\frac{S_{AA}^{GW}(f)^{2}}{S_{n}^{A}(f)^ {2}}\mathrm{d}f\right]^{1/2} \tag{39}\]
with an observation time span of \(T=4\) years. Here \(S_{AA}^{GW}(f)\) is the spectral density in channel \(A\) that can be computed from Eq. (31) and \(S_{n}^{A}\) is the PSD of the A channel. The results2 are shown in Table 1.
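Eq. (39) is a one-line numerical integral; a minimal sketch (our own function, assuming the spectra are tabulated on a common frequency grid):

```python
import numpy as np

def snr_channel_a(f, s_gw, s_noise, T=4 * 365.25 * 24 * 3600):
    """Eq. (39): SNR of a background in TDI channel A over T seconds.

    f       : frequency grid [Hz]
    s_gw    : S_AA^GW(f), the background spectral density in channel A
    s_noise : S_n^A(f), the noise PSD (optionally including the
              galactic foreground)
    """
    return np.sqrt(T * np.trapz((s_gw / s_noise)**2, f))
```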
We note that including the foreground as part of the noise (\(S_{n}^{A}(f):=S_{GB}^{A}(f)+S_{n}^{A}(f)\)) leads to a substantial decrease of the SNR for the FOPT background, but the SNR does not change very much for the other models.
We plot in Fig. 7 the value of the SNR in channel \(A\) versus the energy density at 1 mHz for the different models. The dashed lines assume no foreground, whereas the continuous lines include it. As expected, there is a direct correlation between increasing the energy density and an increase in the SNR. Moreover, the presence of the foreground mostly affects the SNR of the FOPT. In fact, in the presence of the foreground the energy density must be two times larger to have the same SNR as it would in the absence of a foreground.
## III Results
### Impact of instrumental noise knowledge uncertainty on SGWB recovery
Here, we explore how the measurement precision of the SGWB parameters changes in the presence of instrumental noise knowledge uncertainty, for each of the SGWB models described in Section II.6. We use the Fisher matrix formalism described in Section II.2, which assumes that the noise is uncorrelated at different frequencies. We assume we use three TDI channels in our analysis, A, E and \(\zeta\), as described in Section II.4. We model uncertainties in the PSD and CSD at each frequency following the model described in Section II.3. To build the Fisher matrix we need the following elements:
1. The derivatives of the PSD and CSD at each frequency with respect to the parameters of the SGWB model.
2. The derivatives of the PSD and CSD at each frequency with respect to the parameter (amplitude) of the Galactic Binaries. The addition of this parameter extends the dimension of the Fisher matrix by one.
3. The derivatives of the PSD and CSD at each frequency with respect to the parameters of the instrumental noise model. The instrumental noise model is based on 9 different splines: 3 splines to model the PSD of A, E and \(\zeta\), and 3 splines each for real and imaginary parts of the CSDs for AE, A\(\zeta\) and E\(\zeta\). Each spline has a number of parameters equal to the number of knots, which we take to be 13. The total number of noise parameters is therefore 9 x 13 = 117.
4. The evaluation of the Fisher matrix from these elements using Eq. (9), which is summed over frequency.
5. The choice of a prior on the instrumental noise parameters. We use a Gaussian prior, which is implemented in the Fisher matrix formalism by adding the prior matrix to the Fisher matrix before computing its inverse (see Eq. (11)). For this first study we take the priors on each noise parameter to be independent, with zero mean and equal variance, \(\sigma_{\rm inst}\). In this section we fix \(\sigma_{\rm inst}=1\), which means we are allowing for up to an order of magnitude uncertainty in the instrumental noise at each frequency.
6. We compute the inverse of the Fisher matrix after adding the prior to obtain an estimate of the measurement uncertainty, from the square root of the diagonal elements of the inverse, as explained in Sec. II.2. We also compute the inverse of the SGWB-parameter-only sub-matrix of the Fisher matrix, which represents the expected uncertainty in the absence of instrumental noise uncertainties. A schematic sketch of this numerical pipeline is given below.

Table 1: Signal-to-noise ratio in TDI channel A for the four SGWB models, with and without the galactic foreground as an additional noise component. The galactic foreground considered here has an SNR of 1627.39.

| SGWB model | SNR w/o GB | SNR w/ GB |
| --- | --- | --- |
| Power law with running | 14.54 | 13.35 |
| Power law | 48.70 | 42.89 |
| Gaussian bump | 13.51 | 11.65 |
| First order phase transition | 118.68 | 64.18 |

Figure 6: Amplitude spectral density of the stochastic GW background models and the amplitude spectral density of the reference test mass and optical metrology noise in the time delay interferometry channel A.

Figure 7: Signal-to-noise ratio of four different SGWB signals (power law, power law with running, Gaussian bump and first order phase transition) in TDI channel A, versus the energy density at 1 mHz, both including (continuous lines) and excluding (dashed lines) the foreground.
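To make steps 3-6 concrete, the sketch below shows one way to evaluate Eq. (9) numerically by finite differences; it is our own illustrative helper, not the code used to produce the results in this paper:

```python
import numpy as np

def fisher_matrix(f, sigma_of, params, T, eps=1e-7):
    """Eq. (9): Fisher matrix for the parameters of the 3x3 spectral
    density matrix Sigma(f), via central finite differences.

    sigma_of : callable mapping a parameter vector to an array of
               shape (len(f), 3, 3) holding Sigma at each frequency
    """
    p = np.asarray(params, dtype=float)
    n = p.size
    dS = []
    for i in range(n):
        dp = np.zeros(n)
        dp[i] = eps
        dS.append((sigma_of(p + dp) - sigma_of(p - dp)) / (2 * eps))
    inv = np.linalg.inv(sigma_of(p))        # batched inverse over f
    gamma = np.zeros((n, n))
    for i in range(n):
        for j in range(i, n):
            # trace of Sigma^-1 dSigma_i Sigma^-1 dSigma_j at each f
            tr = np.einsum('flr,frp,fpm,fml->f', inv, dS[i], inv, dS[j]).real
            gamma[i, j] = gamma[j, i] = T * np.trapz(tr, f)
    return gamma
```

The marginalisation of step 6 can then be performed with a Schur-complement helper like the one sketched in Section II.2.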
For each SGWB model we will present the results in two different ways. Firstly, we will show the ratio of the uncertainties in the SGWB parameters in the presence of instrumental noise uncertainties to those uncertainties when perfect knowledge of the instrumental noise is assumed. These results illustrate the impact of lack of noise knowledge on SGWB characterisation. Secondly, we will show the actual uncertainties in the SGWB parameters, as computed from the Fisher matrix. Of particular interest is the uncertainty in the log-energy density of the background. As a rule of thumb, a background will be detectable if the uncertainty \(\Delta\ln(A)<1\) ( \(\Delta\ln(A)=(\Delta A)/A\)). In both cases, we will plot results as a function of the background amplitude/the background energy density at a reference frequency of 1mHz (the logarithm of these quantities are linearly related, so they can be easily represented using bottom/top axes in a single figure). For the second type of plot, solid lines show results in the presence of noise knowledge uncertainty, and dashed lines give results assuming perfect noise knowledge. In both the analysis we consider the foreground amplitude to vary and we consider it as additional source of noise together with the instrumental noise.
#### iii.2.1 Power law
A power law SGWB is described by two parameters: the slope and the amplitude. The full Fisher matrix, including instrumental noise and foreground parameters, is \(120\times 120\). Figure 8 shows the results computed for this model. We see that in the presence of instrumental noise uncertainties, the uncertainty in the SGWB parameters increases by a factor of \(\sim 55\)-60, with the uncertainty in the slope being slightly more affected than that of the amplitude. The increase is lower for high background amplitudes, as expected, but only when the background is one to two orders of magnitude brighter than the reference value. Considering the raw uncertainties, we see that the uncertainty in the log-energy density is typically a factor of \(\sim 50\) larger, and the background energy density would have to be a factor of \(\sim 50\) higher to be characterised with the same measurement precision when there is instrumental noise uncertainty as it could be without those uncertainties. However, a background with amplitude equal to the reference value should (just) be detectable, even allowing for confusion with instrumental noise mismodelling.
#### iii.2.2 Power law with running
For the power law with running SGWB, the Fisher matrix is \(121\times 121\), as the SGWB model depends on 3 parameters: slope, amplitude and running index, \(\alpha\). The results for this model are shown in Fig. 9. In this case we see that the uncertainties in the SGWB parameters increase by a factor of \(\sim 30\)-75, with the uncertainty on the amplitude being most affected in this case. Once again, the relative increase in the uncertainty is somewhat lower at higher background amplitudes. The lower panel of Fig. 9 shows that the background is not detectable at the reference amplitude: an energy density \(\sim 20\) times higher would be required for a detection. In general, the background has to have an energy density \(\sim 60\) times higher to be characterised with the same measurement precision when there is instrumental noise uncertainty as would be possible without those uncertainties, and there is a similar increase in the parameter measurement uncertainty at fixed background energy density.

Figure 8: Results for the power law SGWB model, considering the foreground as an additional source of noise. The upper panel shows the ratio of the uncertainties of the SGWB parameters (amplitude and slope) when including instrumental noise uncertainties to those assuming perfect noise knowledge. This ratio is plotted versus the amplitude (bottom axis) and the SGWB energy density at 1 mHz (top axis). The lower panel shows the estimated parameter uncertainties for the two cases, again as a function of amplitude/energy density, but over a restricted range. The horizontal red dashed line corresponds to an uncertainty of one, our threshold on the uncertainty in log-energy density for deciding that a background is detectable. The vertical red dashed line indicates the reference SGWB amplitude given in Section II.6.
#### iii.2.3 Gaussian bump
As for the power law, the Fisher matrix is \(120\times 120\), as we have 2 signal parameters: the Gaussian width and the amplitude. The results for this model are shown in Figure 10. In this case, the degradation in the precision of parameter measurement is a factor of \(\sim 2\)-8 when allowing for lack of knowledge of the instrumental noise. This difference in behaviour is related to the different shapes of the SGWBs being considered. A Gaussian is more distinct from the spline model being used to represent the instrumental noise uncertainties than a power law, and hence the degree of confusion between the two models is smaller in this case. From the lower panel of Figure 10 we see that the energy density in a Gaussian bump SGWB has to be a factor of \(\sim 10\) higher for it to be characterised with the same measurement precision when there is instrumental noise uncertainty as it could be in the absence of those uncertainties. A Gaussian bump background at the reference amplitude would be detectable, and the width of such a Gaussian could be measured to a few tens of percent precision. This measurement precision improves approximately linearly with the background energy density.
#### iii.2.4 First order phase transition
The FOPT model is again characterised by two parameters, an amplitude and a spectral index, and has a \(120\times 120\) Fisher matrix. The results for this model are shown in Figure 11. When allowing for instrumental noise knowledge uncertainties, the precision with which the SGWB log-energy density can be characterised degrades by a factor of \(\sim 20\). The degradation in the determination of the spectral index is even larger, \(\sim 35\). Once again, to achieve the same measurement precision, the background energy density would have to be \(\sim 20\) times larger than it would need to be in the absence of noise knowledge uncertainties. Nonetheless, a FOPT background at the reference amplitude would still be detectable and provide a measurement of the spectral index at the level of \(\sim\pm 0.8\).
The previous results were computed considering the presence of the galactic foreground. In Appendix C we report similar results, computed without taking the galactic foreground into consideration. Redoing these analyses ignoring the foreground, we do not see big differences in the uncertainty ratio, nor in the absolute uncertainties, when these are compared at fixed SNR, i.e., when the signal-to-noise ratio is recomputed without the galactic binaries included in the spectral density. To illustrate this, we show in Fig. 12 the precision of the measurement of the log-energy density of the background, as a function of the SNR in TDI channel A, for all SGWB models, both including and not including the galactic binary foreground. We see that the uncertainty is typically larger when the foreground is present, but typically by less than a factor of a few. The Gaussian bump and power law backgrounds are most affected, with the uncertainty at fixed SNR and the SNR required for detection both decreasing by a factor of a few when the galactic binary background is removed from the spectral density. For the Gaussian bump, the uncertainty decreases by a factor of a little more than two when the galactic background is excluded, and the SNR needed to reach the \(\Delta\ln(A)<1\) threshold for detection decreases by a similar factor. For the power law, the uncertainty decreases by about a factor of 4, and the \(\Delta\ln(A)<1\) threshold required for detection is reached at an SNR that is a factor of \(\sim 4\) smaller. For the power law with running and the FOPT backgrounds, the uncertainty at fixed SNR is almost unchanged, and the threshold SNR for detection is within a factor of 1.5 and 2, respectively.

Figure 9: As Figure 8, but now for the power-law-with-running SGWB model.

Figure 10: As Figure 8, but now for the Gaussian bump SGWB model.
This behaviour can be understood by looking at the shapes of the various SGWBs in Figure 6. Figure 7 demonstrates that the removal of the foreground does not affect the SNR very much. The only SGWB that shows a significant change is the FOPT, for which most of the power is at frequencies where the foreground is significant. However, in the region around \(300\mu\)Hz, where the majority of the SNR is generated, the shape of the FOPT is very different to the foreground. This is also true for the power-law-with running model around 5mHz, where the majority of its SNR is generated. The power-law model, on the other hand, is quite parallel to the foreground at low frequency, and the Gaussian bump is quite parallel to the foreground at a few mHz. This most likely explains why the latter two backgrounds are more difficult to distinguish from a galactic foreground, and therefore more affected by its inclusion.
### Setting a noise knowledge requirement
In this section we explore how the amount of uncertainty in the instrumental noise impacts the results. In practice we will not be completely ignorant of the instrumental noise: measurements on board the satellites will provide an indication of the size of certain noise components. In principle, it might therefore be possible to place a requirement on how well the instrumental noise must be known in order not to degrade the science output of the mission. To assess this, we recompute the results while changing the variance of the Gaussian prior on the instrumental noise spline parameters. We vary the prior on the spline weights from very small values (\(\log_{10}(\sigma_{inst})=-10\)), representing near-perfect knowledge of the noise, to very high values (\(\log_{10}(\sigma_{inst})=6\)), representing no knowledge of the noise.
Figure 11: As Figure 8, but now for the first order phase transition SGWB model.
Figure 12: Error in the log-amplitude of four different SGWB signals (power law, power law with running, Gaussian bump and first order phase transition) versus the signal-to-noise ratio in TDI channel A, both including (continuous lines) and excluding (dashed lines) the foreground.
We fix the amplitude of the background for each SGWB model so that it corresponds to an SNR of \(\sim 120-140\) in each case3. This choice of SNR was motivated by Figure 12, which shows that an SNR greater than 100 is required in order to ensure all types of SGWB are detectable. For the case of the power law we also show results with the amplitude set to the reference energy density, which shows that the exact choice of background amplitude does not make a significant difference to the qualitative behaviour, only to the absolute value of the uncertainty.
Footnote 3: Specifically the SNR of the power law with running is 135, the SNR of the power law is 138, the SNR of the FOPT is 142 and the SNR of the Gaussian bump is 120.
For all SGWB models we again present the results in two different ways: as a ratio of the SGWB parameter measurement uncertainties when instrumental noise uncertainties are considered to those assuming perfect noise knowledge, and as the absolute measurement uncertainty. Results for the power law model are shown in Figure 13, for the power-law-with-running model in Figure 14, for the Gaussian bump model in Figure 15 and for the FOPT in Figure 16. The results for all four SGWB models are qualitatively similar. For very low prior uncertainties the ratio of the uncertainties tends to unity. This is expected, as this limit corresponds to the instrumental noise being perfectly known. As the prior uncertainty is increased beyond \(\sim 10^{-6}\), the measurement uncertainty in the presence of noise knowledge uncertainties starts to increase. When the noise knowledge uncertainty reaches \(\sim 10^{-2}\) for the power law and Gaussian bump, \(\sim 10^{-1}\) for the power law with running, and 10 for the FOPT, the uncertainty ratio saturates. This final value reflects the expected uncertainty in the absence of any noise knowledge. The results given in Section III.1 were all computed in this regime.
The main conclusion from these results is that if we wanted to ensure that there was no degradation in LISA science due to lack of noise knowledge, the necessary requirement on the noise knowledge would be \(\ll 10\%\). In the LISA Pathfinder mission, which was designed to accurately characterise the free-fall performance of test masses in a space-based environment, the observed noise could only be explained within some margin: the physical origin of the measured sub-mHz acceleration is only partially understood, as more than 50% of its PSD is still unmodelled [19; 41].
It is therefore unrealistic to expect that a noise requirement at the \(\sim 1\)-10% level could be met. At noise uncertainties above this threshold, there is little difference between some and no noise knowledge, at least within the model for instrumental noise variations considered here. We conclude that no useful and achievable noise knowledge requirement could be implemented in practice.
While we will not be able to achieve the precision that would be possible under ideal circumstances, it is important to emphasise that this does not mean we will not be able to detect and characterise modelled SGWBs. In all cases, at SNRs \(\gtrsim 100\), the amplitude can be constrained to a few tens of percent, even without any knowledge of the instrumental noise.
### Signal reconstruction
To finish this section we use our Fisher matrix results to illustrate how well we can reconstruct the power law signal, the foreground, and the instrumental noise. To do this, we approximate the posterior distribution on the model parameters using a multivariate Gaussian with covariance matrix equal to the inverse of the Fisher matrix. We can then take random draws from this fake4 posterior distribution and plot the PSD of the SGWB, the foreground and the instrumental noise corresponding to the drawn parameters. In Fig. 17 we follow this procedure for a power law signal with an SNR of 872. This choice of SNR is driven by the need to avoid the breakdown of the Fisher matrix approximation for the SGWB parameters, which occurs when the SGWB parameter uncertainties are large and no longer in the linear signal regime, as explained and shown in detail in Appendix C.1.

Figure 13: As Figure 8, but now for fixed background amplitude and varying the variance of the Gaussian prior on the instrumental noise spline model. This plot is for a power law background, and the amplitude has been fixed such that the SNR in TDI channel A is 138 (continuous lines) and 43.7 (dashed lines).
The panels of Fig. 17 show the reconstructed ASDs for the power law signal, the foreground, the instrumental noise, and the total, which is the sum of the three.
What we would expect is that our ability to measure the total spectral density is roughly independent of the relative amplitudes of the components, since this is what we actually see and measure in the data. Our model attempts to split that measurement into constituent components. If one of those components is much weaker than the others, we would not expect to recover it as well as when the components make comparable contributions to the data. The top panel of Fig. 17 (and Figure 26 in Appendix C.1) is consistent with this expectation. Indeed, we see in Fig. 17 that the galactic binaries are well recovered. Moreover, the noise-only reconstruction for TDI A and E suffers from the presence of the GW background and foreground in the range 0.4 mHz-4 mHz, where these two signals have the majority of their power. As a final point, it is clear that the SGWB is best constrained around a frequency of 4 mHz, where the power of the GB is lower and the uncertainty in the instrumental noise is also largest (although still small). This can be understood from Figure 6, which shows that the power law background is closest to the instrumental noise ASD at that frequency, while the galactic binaries peak at 1 mHz, and so this frequency range dominates the SNR in the signal. We expect to be able to measure the background best in the frequency range where it is most dominant relative to the instrumental noise and distinguishable from the galactic binaries.
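For completeness, producing these reconstruction curves amounts to drawing samples from the Gaussian approximation of the posterior; a minimal sketch (the helper name is ours):

```python
import numpy as np

def posterior_draws(best_fit, fisher, n_draws=100, seed=0):
    """Approximate the posterior by a Gaussian centred on the best-fit
    parameters with covariance equal to the inverse Fisher matrix,
    and return random parameter draws for spectral reconstruction."""
    cov = np.linalg.inv(fisher)
    rng = np.random.default_rng(seed)
    return rng.multivariate_normal(best_fit, cov, size=n_draws)

# each draw is then mapped through the SGWB, foreground and noise
# models to produce ASD curves like those shown in Fig. 17
```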
## IV Discussion and Conclusion
We have explored the impact of noise knowledge uncertainty on measuring the parameters of various modelled stochastic gravitational wave backgrounds. This was done by modelling instrumental noise uncertainties using cubic splines to represent deviations away from the design PSDs and CSDs for the three TDI channels \(A\), \(E\) and \(\zeta\). We then used a Fisher matrix analysis to evaluate the expected uncertainties in the measurements of the model parameters when fitting a model including the instrumental noise uncertainties, and compared it to fitting a model without those uncertainties. The degree of uncertainty was characterised by including a Gaussian prior on the instrumental noise parameters, allowing us to quantify the impact of imposing a requirement on our noise knowledge.

Figure 14: As Fig. 13, but now for the power-law-with-running model. The background amplitude has been fixed to give an overall SNR of 135.

Figure 15: As Fig. 13, but now for the Gaussian bump model. The background amplitude has been fixed to give an overall SNR of 122.
This analysis showed that, for all SGWB models, allowing for instrumental noise uncertainties leads to a significant increase in the uncertainty in our measurements of the background parameters. The increase in uncertainty was a factor of \(2-8\) for the Gaussian bump model, which reduces to \(2-4\) when not including the GB foreground, \(55-60\) for the power law (\(15-30\) without the GB foreground), \(20-35\) for the first order phase transition (\(20-50\) without the GB foreground) and \(30-75\) for the power law with running (\(20-75\) without the GB foreground). These increased uncertainties correspond to the threshold background energy density required for detection increasing by a factor of 10 (5 without GB) for the Gaussian bump model, a factor of 60 with and without GB for the power law with running, and a factor of 20 with and without GB for all other models (50 for the power law when including GBs). The threshold energy densities at 1 mHz at which the backgrounds start to be detectable are \(4\times 10^{-13}\), \(4\times 10^{-13}\), \(2\times 10^{-16}\) and \(10^{-12}\) (\(10^{-13}\), \(2.5\times 10^{-13}\), \(8\times 10^{-17}\) and \(5\times 10^{-13}\) if we do not include the GB foreground) for the power law, power law with running, Gaussian bump and FOPT models respectively. Comparing these to the reference background amplitudes introduced in Section II.6, we see that the power law, Gaussian bump and FOPT backgrounds are detectable at the reference amplitudes, while the power law with running is not, as the threshold energy density is 1.5 times higher than the reference. However, for this latter background the amplitude was specified based on the SNR and not on a physical model. The reference amplitudes for the power law and FOPT backgrounds are based on physical model predictions, so it is more important that these backgrounds are still detectable. We note that this result does depend on the particular choice of model we used for representing the instrumental noise uncertainty. If this model is made even more flexible, for example by increasing the number of spline knots used, the threshold would increase further and potentially also make the reference backgrounds undetectable.

Figure 16: As Fig. 13, but now for the FOPT model. The background amplitude has been fixed to give an overall SNR of 142.

Figure 17: Power law signal, galactic binary and noise ASDs corresponding to random draws from the posterior, approximated using the Fisher matrix as described in the text. In each panel the curves correspond to the three TDI channels: A (blue), E (red) and \(\zeta\) (green). Upper panel: reconstructed SGWB; second panel: reconstructed TM and OMS instrumental noise; third panel: reconstructed galactic binaries; lower panel: total reconstructed ASD (signal + noise + GB).
When we vary our assumed level of knowledge of the instrumental noise, we find that the uncertainties on the SGWB parameters show a similar trend for all models: they start to degrade at relatively small prior uncertainties, increase, and then saturate beyond a certain point. The point at which the sensitivity starts to degrade is when the uncertainty in the log-spectral density of the noise reaches \(\sim 10^{-6}\)-\(10^{-5}\), depending on the SGWB model. The uncertainty saturates at log-spectral density uncertainties of \(\sim 10^{-2}/10^{-1}/10^{-2}/10^{1}\) (\(10^{-2}/10^{-1}/10^{-3}/10^{-1}\) in the absence of the GB foreground) for the power law/power-law with running/Gaussian bump/FOPT backgrounds respectively. This means that if we wanted to limit the degradation in the science that arises from lack of noise knowledge we would have to impose a very stringent requirement on our knowledge of the noise. This is likely to be impossible to implement in practice, so we will have to accept that our ability to resolve SGWBs will not be as good as calculations that assume perfect noise knowledge predict.
It is important to note that these results are based on some assumptions which might not hold in practice. In particular, we have considered only modelled SGWBs, and we have assumed a particular form for variations in the PSD that forces them to vary smoothly as a function of frequency. If the number of knots is increased to obtain a more flexible instrumental noise model, with potentially faster variations of the PSD as a function of frequency, we already see a degradation of 2 or 3 orders of magnitude in the estimation of the log-energy density.
It is the distinguishability of the models that allows us to measure the parameters of the SGWBs. In the extreme picture where we do not want to make any assumption at all about the form of the instrumental or SGWB spectral densities, spectral separation will not be possible. We will be able to report measured power spectral densities in all channels, and cross-spectral densities between them, and translate these into upper limits on the SGWB amplitude; but any interpretation of this as an actual detection will require independent confirmation from another detector [33].
All previous studies of the separation of instrumental noise and stochastic backgrounds have required assumptions: in [10] it was assumed that the instrumental uncertainty is a spline and that the SGWB has a power law spectrum; in [26] it was assumed that the instrumental noise is determined by 12 individual noise levels; and in this paper we are assuming something similar to [10], although with a bit more flexibility, a wider variety of SGWB models and a different noise model for the single satellite links. The SGWBinner [16] is agnostic about the spectrum of the background, but it can only work because it assumes a specific model for the instrumental noise. That is not going to be possible in practice. SGWBinner could be adapted to use a more flexible noise model, similar to the model used here, but the precision of the background recovery would be degraded. If we have a completely general instrumental noise model and a completely general SGWB model then we will not be able to separate them. In that case, the only hope would be that the SGWB is above the design sensitivity and we trust that the instrumental noise meets the mission requirements, in which case the best interpretation of such an observation would be an SGWB. However, even then an assumption would be made that the mission had met the design sensitivity requirements. An exploration of how our ability to separate instrumental noise from an SGWB degrades as the spectral models of the SGWB and the instrumental noise are made more complicated should be the focus of future work.
## V Acknowledgement
M.M. gratefully acknowledges the support of the German Space Agency, DLR. The work is supported by the Federal Ministry for Economic Affairs and Climate Action based on a resolution of the German Bundestag (Project Ref. No. FKZ 50 OQ 2301). The authors thank Chiara Caprini for her review and suggestions regarding the use of SGWB templates in the text. Additionally, we acknowledge Mauro Pieroni for providing feedback on the presented results and the implementation of spline models. Furthermore, we thank Olaf Hartwig for discussions on the accurate computation of the GW transfer function and on the noise models, and Marc Lilley for comparisons of the results obtained with the Fisher matrix formalism. We thank Lorenzo Sala for feedback on the recent analysis of the LISA Pathfinder performance results. The scientific discussions with Quentin Baghi, Jean Baptiste Bailey, Germano Nardini, Nikolaos Karnesis, Jesus Torrado, Nam Dam Quam, Henry Hinshauspe, Antoine Petitteau, and Riccardo Buscicchio have also been highly appreciated. Their feedback on the results and methodology presented in the paper has been valuable for the final outcome.
## Appendix A Likelihood derivation
We derive the likelihood starting from the noise properties and explain why it takes the form shown in Sec. II. If we assume that the real time series \(n(t)\) is a stationary, zero-mean, Gaussian and ergodic random process, then the Fourier transform of the noise \(\tilde{n}_{k}=\tilde{n}(f_{k})\) at each frequency \(f_{k}\) is normally distributed with zero-mean and variance \(\sigma_{k}^{2}\). Thus, the natural log-likelihood at each frequency takes the form of a two-dimensional normal distribution
\[\ln p(\tilde{n}_{k})=-\frac{1}{2}\Big{(}\frac{\Re[\tilde{n}_{k}]^{2}}{\sigma_{ k}^{2}}+\frac{\Im[\tilde{n}_{k}]^{2}}{\sigma_{k}^{2}}\Big{)}-\frac{1}{2}\ln[(2 \pi\sigma_{k}^{2})^{2}] \tag{10}\]
where we assumed that the real \(\Re\) and imaginary \(\Im\) parts of the noise are uncorrelated and have the same variance. If we further assume that the variance of the noise at different frequencies follows a one-sided power spectral density \(S_{n}(f)\), then:
\[<\tilde{n}^{*}(f^{\prime})\tilde{n}(f)> =\frac{1}{2}S_{n}(f)\delta(f-f^{\prime}) \tag{11}\] \[<\tilde{n}(f^{\prime})\tilde{n}(f)> =0 \tag{12}\]
where \(<\cdot>\) denotes the expectation value over the data generating process. For a set of discrete frequencies the first relation can be written as
\[<\tilde{n}_{k}^{*}\tilde{n}_{j}> =\frac{T}{2}S_{n}(f_{k})\delta_{jk} \tag{13}\]
Therefore the variance of the real and imaginary part of the noise is given by
\[<\Re[\tilde{n}_{k}]^{2}>=<\Im[\tilde{n}_{k}]^{2}>=\frac{T}{4}S_{n}(f_{k})= \sigma_{k}^{2}\,. \tag{14}\]
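This relation is easy to verify numerically; a small self-contained check (our own construction, using white noise so that \(S_{n}(f)=S_{0}\)):

```python
import numpy as np

# white noise with one-sided PSD S0, sampled at rate fs for duration T
S0, fs, T = 1e-3, 1.0, 1024.0
N = int(T * fs)
rng = np.random.default_rng(1)
n_t = rng.normal(0.0, np.sqrt(S0 * fs / 2), size=(1000, N))

# continuous Fourier transform approximated by dt * FFT
n_f = np.fft.rfft(n_t) / fs

# sample variance of the real part at interior frequencies
var_re = n_f.real[:, 1:-1].var(axis=0).mean()
print(var_re, T * S0 / 4)   # the two values should agree
```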
We can write the natural log-likelihood for all the measured frequencies as:
\[\sum_{k}\ln p(\tilde{n}_{k})=-\sum_{k=1}^{n}\ln\left[2\pi\frac{T}{4}S_{n}(f_{k})\right]-\frac{1}{2}\sum_{k=1}^{n}\frac{|\tilde{n}(f_{k})|^{2}}{\frac{T}{4}S_{n}(f_{k})} \tag{15}\]
which becomes in the continuum limit:
\[\ln p(\tilde{n})=-\ln\bigg{[}2\pi\frac{1}{4}\det[S_{n}(f)\delta(f-f^{\prime}) ]\bigg{]}-\frac{1}{2}4\int_{0}^{\infty}\frac{|\tilde{n}(f)|^{2}}{S_{n}(f)} \mathrm{df} \tag{16}\]
Note that in the continuum limit the variance of the noise can be thought as an operator. In fact, one can define the inner product
\[(a(t)|b(t))=4\Re\int_{0}^{\infty}\int_{0}^{\infty}\tilde{a}^{*}(f)\,\Sigma^{- 1}(f,f^{\prime})\tilde{b}(f^{\prime})\,\mathrm{df}\,\mathrm{df}^{\prime} \tag{17}\]
with \(\Sigma^{-1}\) defined through the relation:
\[\int_{0}^{\infty}\Sigma^{-1}(f,f^{\prime})\Sigma(f^{\prime},f^{\prime\prime}) \mathrm{df}^{\prime}=\delta(\mathrm{f}-\mathrm{f}^{\prime\prime})\,, \tag{18}\]
where, setting \(\Sigma(f^{\prime},f^{\prime\prime})=\delta(f^{\prime}-f^{\prime\prime})S_{n}(f^{\prime})\) in Eq. (18), we obtain
\[\Sigma^{-1}(f,f^{\prime\prime})S_{n}(f^{\prime\prime})=\delta(f-f^{\prime \prime}) \tag{19}\]
and the inner product becomes
\[(a(t)|b(t)) =4\Re\int_{0}^{\infty}\int_{0}^{\infty}\frac{\tilde{a}^{*}(f)\, \delta(f-f^{\prime})\tilde{b}(f^{\prime})}{S_{n}(f^{\prime})}\,\mathrm{df}\, \mathrm{df}^{\prime}\] \[=4\Re\int_{0}^{\infty}\frac{\tilde{a}^{*}(f)\tilde{b}(f)}{S_{n}(f )}\,\mathrm{df} \tag{20}\]
## Appendix B Fisher matrix derivation
### Single detector
To compute the Fisher matrix of Eq. 8 we need the first derivative of the log-likelihood \(l\) with respect to the parameters of the power spectral density.
Here we present the derivation of the Fisher matrix for the noise parameters \(\vec{\lambda}\) affecting the one-sided spectral density \(S_{n}(f|\vec{\lambda})\), but this can easily be extended to also include the gravitational wave background parameters, \(S_{n}(f|\vec{\lambda})\to S_{n}(f|\vec{\lambda})+S_{\mathrm{GW}}(f|\vec{\theta})\). We differentiate the log-likelihood of Eq. (15) with respect to the parameters \(\vec{\lambda}\):
\[\frac{\partial l}{\partial\lambda^{i}}= \sum_{k=1}^{n}[-\frac{1}{S_{n}(f_{k})}\frac{\partial S_{n}(f_{k} )}{\partial\lambda^{i}}+\frac{1}{2}\frac{|\tilde{n}(f_{k})|^{2}}{\frac{T}{4}S_ {n}(f_{k})^{2}}\frac{\partial S_{n}(f_{k})}{\partial\lambda^{i}}], \tag{21}\]
where we have omitted the dependence on \(\vec{\lambda}\) to lighten the notation. The second derivative of the likelihood is then
\[\frac{\partial^{2}l}{\partial\lambda^{i}\partial\lambda^{j}}= \sum_{k=1}^{n}[\frac{1}{S_{n}^{2}(f_{k})}\frac{\partial S_{n}(f_{k })}{\partial\lambda^{i}}\frac{\partial S_{n}(f_{k})}{\partial\lambda^{j}}\] \[-\frac{1}{S_{n}(f_{k})}\frac{\partial^{2}S_{n}(f_{k})}{\partial \lambda^{i}\partial\lambda^{j}}\] \[-\frac{1}{2}\frac{2|\tilde{n}(f_{k})|^{2}}{\frac{T}{4}S_{n}(f_{k})^ {3}}\frac{\partial S_{n}(f_{k})}{\partial\lambda^{j}}\frac{\partial S_{n}(f_{k })}{\partial\lambda^{i}}\] \[+\frac{1}{2}\frac{|\tilde{n}(f_{k})|^{2}}{\frac{T}{4}S_{n}(f_{k})^ {2}}\frac{\partial^{2}S_{n}(f_{k})}{\partial\lambda^{j}\partial\lambda^{i}}] \tag{22}\]
taking the expectation value and using the definition in Eq. (14), the second and last terms cancel, and the Fisher matrix \(\Gamma_{ij}=-<\partial^{2}l/\partial\lambda^{i}\partial\lambda^{j}>\) becomes:
\[\Gamma_{ij}=\sum_{k=1}^{n}\frac{1}{S_{n}(f_{k})^{2}}\frac{\partial S_{n}(f_{k})}{ \partial\lambda^{i}}\frac{\partial S_{n}(f_{k})}{\partial\lambda^{j}}\,. \tag{110}\]
To obtain the continuum limit we replace the sum by \(T\int\mathrm{df}\):
\[\Gamma_{ij}=T\int_{0}^{\infty}\frac{1}{S_{n}(f)^{2}}\frac{\partial S_{n}(f)}{ \partial\lambda^{i}}\frac{\partial S_{n}(f)}{\partial\lambda^{j}}\mathrm{df} \tag{111}\]
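Once a PSD model is specified, Eq. (111) can be evaluated by direct quadrature. The following sketch (ours) uses a toy two-parameter power-law PSD; the model shape, frequency band and parameter values are purely illustrative:

```python
import numpy as np

f0 = 1e-3   # arbitrary pivot frequency in Hz

def Sn(f, A, gamma):
    """Toy one-sided PSD model S_n(f | A, gamma) = A (f/f0)^(-gamma)."""
    return A * (f / f0) ** (-gamma)

def fisher_psd(freqs, T, A, gamma):
    S = Sn(freqs, A, gamma)
    grads = [S / A,                      # dS/dA
             -S * np.log(freqs / f0)]    # dS/dgamma
    df = freqs[1] - freqs[0]
    # Gamma_ij = T int (1/S^2) (dS/dlam_i)(dS/dlam_j) df, cf. Eq. (111)
    return T * df * np.array([[np.sum(gi * gj / S**2) for gj in grads]
                              for gi in grads])

freqs = np.arange(1e-4, 1e-1, 1e-6)            # analysis band in Hz
Gamma = fisher_psd(freqs, T=3.15e7, A=1e-40, gamma=2.0)
print(np.sqrt(np.diag(np.linalg.inv(Gamma))))  # 1-sigma errors on (A, gamma)
```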
### Multiple detectors: real and imaginary part as separate random variables
If we want to generalize our derivation to multiple detectors or channels we need to define the noise properties of each channel. For simplicity let us consider two channels \(A\) and \(E\) with four real random variables \(X(f_{k})=X_{k}=\{\Re[\tilde{X}_{k}^{A}],\Im[\tilde{X}_{k}^{A}],\Re[\tilde{X}_{k}^{E}],\Im[\tilde{X}_{k}^{E}]\}\) at each frequency \(f_{k}\). Since the final likelihood will be given by the product over all the frequencies, we consider only one frequency and we drop the subscript "\({}_{k}\)". We can specify the spectral densities of each channel and the cross-spectral densities with:
\[<\tilde{X}^{c*}(f^{\prime})\tilde{X}^{c}(f)> =\frac{1}{2}S_{c}(f)\delta(f-f^{\prime}) \tag{112a}\] \[<\tilde{X}^{c}(f^{\prime})\tilde{X}^{c}(f)> =0 \tag{112b}\] \[<\tilde{X}^{E*}(f^{\prime})\tilde{X}^{A}(f)> =\frac{1}{2}S_{AE}^{*}(f)\delta(f-f^{\prime}) \tag{112c}\] \[<\tilde{X}^{A*}(f^{\prime})\tilde{X}^{E}(f)> =\frac{1}{2}S_{AE}(f)\delta(f-f^{\prime}) \tag{112d}\] \[<\tilde{X}^{A}(f^{\prime})\tilde{X}^{E}(f)> =0 \tag{112e}\]
where the first two rows are valid for both channels \(c=A,E\), and \(S_{c}\) is real and \(S_{AE}\) is complex. From the above expression we can deduce:
\[<\Re[\tilde{X}_{c}]\Re[\tilde{X}_{c}]>+<\Im[\tilde{X}_{c}]\Im[\tilde{X}_{c}]> =\frac{S_{c}}{2} \tag{113a}\] \[<\Re[\tilde{X}_{c}]\Re[\tilde{X}_{c}]>-<\Im[\tilde{X}_{c}]\Im[\tilde{X}_{c}]> =0 \tag{113b}\] \[<\Re[\tilde{X}_{c}]\Im[\tilde{X}_{c}]> =0 \tag{113c}\] \[<\Re[\tilde{X}_{A}]\Re[\tilde{X}_{E}]>+<\Im[\tilde{X}_{A}]\Im[\tilde{X}_{E}]> =\frac{\Re[S_{AE}]}{2} \tag{113d}\] \[<\Re[\tilde{X}_{A}]\Re[\tilde{X}_{E}]>-<\Im[\tilde{X}_{A}]\Im[\tilde{X}_{E}]> =0 \tag{113e}\] \[<\Re[\tilde{X}_{A}]\Im[\tilde{X}_{E}]>-<\Im[\tilde{X}_{A}]\Re[\tilde{X}_{E}]> =\frac{\Im[S_{AE}]}{2} \tag{113f}\] \[<\Re[\tilde{X}_{A}]\Im[\tilde{X}_{E}]>+<\Im[\tilde{X}_{A}]\Re[\tilde{X}_{E}]> =0\,, \tag{113g}\]
where in the first three rows \(c=A,E\). Note that these are in total 10 independent conditions (3 equations for A, 3 equations for E and 4 equations for AE) that uniquely specify the 10 independent elements of a symmetric \(4\times 4\) covariance matrix.
For a single frequency we can generalize the likelihood to two channels as:
\[p(X)=\frac{1}{\sqrt{(2\pi)^{2\times N_{c}}\det(\Sigma)}}e^{-\frac{1}{2}X^{T} \Sigma^{-1}X} \tag{114}\]
where \(N_{c}=2\) is the number of channels, \(X\) is the four-dimensional vector defined above, and \(\Sigma\) is the multi-channel covariance matrix:
\[\Sigma=\left(\begin{array}{cccc}\frac{S_{A}}{4}&0&\frac{\Re(S_{AE})}{4}&\frac{\Im(S_{AE})}{4}\\ 0&\frac{S_{A}}{4}&-\frac{\Im(S_{AE})}{4}&\frac{\Re(S_{AE})}{4}\\ \frac{\Re(S_{AE})}{4}&-\frac{\Im(S_{AE})}{4}&\frac{S_{E}}{4}&0\\ \frac{\Im(S_{AE})}{4}&\frac{\Re(S_{AE})}{4}&0&\frac{S_{E}}{4}\end{array}\right) \tag{115}\]
where here each element is evaluated at the fixed frequency. It can be shown that the expectation value of \(X^{T}\Sigma^{-1}X\) equals the degrees of freedom, in this case 4. We have two channels, where each one has two degrees of freedom associated with the real and imaginary part of \(\tilde{X}\).
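The statement that \(<X^{T}\Sigma^{-1}X>\) equals the number of degrees of freedom is easy to verify by Monte Carlo; in the sketch below (ours) the spectral values are arbitrary but satisfy \(|S_{AE}|^{2}<S_{A}S_{E}\), so that \(\Sigma\) is positive definite:

```python
import numpy as np

S_A, S_E, S_AE = 2.0, 3.0, 0.8 + 0.5j    # illustrative spectral values

# Covariance of X = (Re X_A, Im X_A, Re X_E, Im X_E), cf. Eq. (115)
Sigma = 0.25 * np.array([
    [S_A,        0.0,        S_AE.real,  S_AE.imag],
    [0.0,        S_A,       -S_AE.imag,  S_AE.real],
    [S_AE.real, -S_AE.imag,  S_E,        0.0],
    [S_AE.imag,  S_AE.real,  0.0,        S_E]])

rng = np.random.default_rng(1)
X = rng.multivariate_normal(np.zeros(4), Sigma, size=200_000)
q = np.einsum('ni,ij,nj->n', X, np.linalg.inv(Sigma), X)
print(q.mean())   # -> 4.0: two channels times two quadratures
```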
We can then derive the Fisher matrix for the multiple channel case. Taking the first derivative of the log-likelihood
\[\frac{\partial\ln p(X)}{\partial\lambda^{i}}=-\frac{1}{2}\frac{1}{\det(\Sigma)} \frac{\partial\det(\Sigma)}{\partial\lambda^{i}}-\frac{1}{2}X^{T}\frac{ \partial\Sigma^{-1}}{\partial\lambda^{i}}X \tag{116}\]
where we can use the following property of the determinant:
\[\frac{\partial\det(\Sigma)}{\partial\lambda^{i}} =\det(\Sigma)\;\mathrm{Tr}[\Sigma^{-1}\frac{\partial\Sigma}{ \partial\lambda^{i}}]\] \[=\det(\Sigma)\;\Big{[}\Sigma^{-1}\Big{]}_{lm}\Big{[}\frac{\partial \Sigma}{\partial\lambda^{i}}\Big{]}^{ml} \tag{117}\]
to obtain
\[\frac{\partial\ln p(X)}{\partial\lambda^{i}}=-\frac{1}{2}\Big{[}\Sigma^{-1} \Big{]}_{lm}\Big{[}\frac{\partial\Sigma}{\partial\lambda^{i}}\Big{]}^{ml}- \frac{1}{2}X^{T}\frac{\partial\Sigma^{-1}}{\partial\lambda^{i}}X\,. \tag{118}\]
Then, the second derivative of the log-likelihood takes the form:
\[\frac{\partial^{2}\ln p(X)}{\partial\lambda^{i}\partial\lambda^{j}} =-\frac{1}{2}\frac{\partial(\Sigma^{-1})^{lm}}{\partial\lambda^{i}} \frac{\partial\Sigma_{ml}}{\partial\lambda^{j}}\] \[-\frac{1}{2}\Sigma_{lm}^{-1}\frac{\partial^{2}\Sigma^{ml}}{ \partial\lambda^{i}\partial\lambda^{j}}-\frac{1}{2}X^{T}\frac{\partial^{2} \Sigma^{-1}}{\partial\lambda^{i}\lambda^{j}}X \tag{119}\]
We can finally compute the Fisher matrix for a single frequency with:
\[\Gamma_{ij}=\frac{1}{2}\Big{[}\frac{\partial\Sigma_{lm}^{-1}}{\partial\lambda^{i}}\frac{\partial\Sigma^{ml}}{\partial\lambda^{j}}+\Sigma_{lm}^{-1}\frac{\partial^{2}\Sigma^{ml}}{\partial\lambda^{i}\partial\lambda^{j}}+\Sigma_{ml}\frac{\partial^{2}(\Sigma^{-1})^{lm}}{\partial\lambda^{i}\partial\lambda^{j}}\Big{]}, \tag{120}\]

where we have used \(<X_{l}\frac{\partial^{2}(\Sigma^{-1})_{lm}}{\partial\lambda^{i}\partial\lambda^{j}}X_{m}>=\frac{\partial^{2}(\Sigma^{-1})_{lm}}{\partial\lambda^{i}\partial\lambda^{j}}\Sigma_{ml}\). If we use the property

\[\frac{\partial(\Sigma^{-1})_{lm}}{\partial\lambda}=-(\Sigma^{-1})_{ln}\frac{\partial\Sigma_{nq}}{\partial\lambda}(\Sigma^{-1})_{qm} \tag{121}\]
we obtain the following expression:
\[\Gamma_{ij}= \frac{1}{2}\text{Tr}\Big{[}-\Sigma^{-1}\frac{\partial\Sigma}{\partial\lambda^{i}}\Sigma^{-1}\frac{\partial\Sigma}{\partial\lambda^{j}}+\Sigma^{-1}\frac{\partial^{2}\Sigma}{\partial\lambda^{i}\partial\lambda^{j}}+\Sigma\frac{\partial^{2}\Sigma^{-1}}{\partial\lambda^{i}\partial\lambda^{j}}\Big{]} \tag{155}\]
which can be further simplified if we use the following properties:
\[\partial(\Sigma\Sigma^{-1}) =0 \tag{156a}\] \[\partial\Sigma\Sigma^{-1}+\Sigma\partial\Sigma^{-1} =0\] (156b) \[\partial^{2}\Sigma\Sigma^{-1}+2\partial\Sigma\partial\Sigma^{-1}+ \Sigma\partial^{2}\Sigma^{-1} =0\] (156c) \[\partial^{2}\Sigma\Sigma^{-1}-2\partial\Sigma\,\Sigma^{-1}\, \partial\Sigma\,\Sigma^{-1}+\Sigma\partial^{2}\Sigma^{-1} =0 \tag{156d}\]
Note that the first and last terms of Eq. (156d) correspond to the last two terms in the Fisher matrix expression (155); under the trace they combine into \(2\,\Sigma^{-1}\frac{\partial\Sigma}{\partial\lambda^{i}}\Sigma^{-1}\frac{\partial\Sigma}{\partial\lambda^{j}}\). The final expression for all frequencies is obtained by summing over the frequencies:
\[\Gamma_{ij}=\frac{1}{2}\sum_{k=1}^{n}[(\Sigma_{k}^{-1})_{lr}\frac{\partial \Sigma_{k}^{rp}}{\partial\lambda^{i}}(\Sigma_{k}^{-1})_{pm}\frac{\partial \Sigma_{k}^{ml}}{\partial\lambda^{j}}]\,. \tag{157}\]
Note that there is an additional factor of \(1/2\) with respect to the single-channel expression, Eq. (110). If we insert only the first 2 columns and rows of \(\Sigma\) we obtain the previous equation for the single channel, as expected.
### Multiple detectors: complex random variables
Equivalently, the likelihood can be written in terms of complex variables \(\tilde{X}_{A}\) and \(\tilde{X}_{E}\)[38]
\[p(\tilde{X}_{A},\tilde{X}_{E})=\frac{e^{-[\tilde{X}_{A},\tilde{X}_{E}]^{\mathrm{H}}\Sigma^{-1}[\tilde{X}_{A},\tilde{X}_{E}]}}{(2\pi)^{N_{c}}\det(\Sigma)} \tag{158}\]
where "\({}^{\mathrm{H}}\)" indicates the Hermitian conjugate, the factor of \(1/2\) in the exponent disappears because this is a complex normal distribution that must reproduce the conditions of Eqs. (112)-(113), and the new complex covariance matrix is defined as
\[\Sigma=\frac{1}{2}\begin{pmatrix}S_{A}&S_{AE}\\ S_{AE}^{*}&S_{E}\end{pmatrix}\,, \tag{159}\]
where \(\Sigma\) is now a Hermitian matrix and can be obtained from the conditions imposed in Eqs. (112)-(113). The expectation value of \([\tilde{X}_{A},\tilde{X}_{E}]^{\mathrm{H}}\Sigma^{-1}[\tilde{X}_{A},\tilde{X}_{E}]\) over complex variable realizations \([\tilde{X}_{A},\tilde{X}_{E}]\) is now 2. However, since the exponential does not carry a factor of \(1/2\) for a complex distribution, we recover the same number of degrees of freedom in the argument of the exponent as in the previous derivation, i.e. we get \(\exp\left[\frac{1}{2}\,4\right]\) for the case of multiple detectors with real and imaginary parts as separate random variables, and \(\exp\left[2\right]\) for the case considered here.
The derivation of the Fisher matrix differs from the previous one (Eq. 157) only by the factor \(1/2\):
\[\Gamma_{ij}=\sum_{k=1}^{n}\left[(\Sigma_{k}^{-1})_{lr}\frac{\partial\Sigma_{k} ^{rp}}{\partial\lambda^{i}}(\Sigma_{k}^{-1})_{pm}\frac{\partial\Sigma_{k}^{ml }}{\partial\lambda^{j}}\right], \tag{160}\]
where the matrix \(\Sigma_{k}\) is given by \(\Sigma\) with spectral densities evaluated at given frequency \(f_{k}\).
Note that we can recover the single channel realization by using the first element of \(\Sigma\).
The continuum limit of the Fisher matrix in this formulation is given by
\[\Gamma_{ij}=T\int_{0}^{\infty}(\Sigma^{-1}(f))_{lr}\frac{\partial\Sigma^{rp}(f)}{\partial\lambda^{i}}(\Sigma^{-1}(f))_{pm}\frac{\partial\Sigma^{ml}(f)}{\partial\lambda^{j}}\,\mathrm{df}\,. \tag{161}\]
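Eqs. (160)-(161) are straightforward to evaluate numerically. The sketch below (ours) uses a toy noise model in which a single parameter scales both auto-spectra, together with an arbitrary real cross-spectrum; all of these choices are illustrative, not taken from the analysis above:

```python
import numpy as np

def Sigma(f, a):
    """Toy 2x2 covariance (1/2) [[S_A, S_AE], [S_AE, S_E]] at frequency f."""
    S_auto = a * (1.0 + (1e-3 / f) ** 4)   # arbitrary spectral shape
    S_cross = 0.3 * S_auto                 # arbitrary real cross-spectrum
    return 0.5 * np.array([[S_auto, S_cross], [S_cross, S_auto]])

def fisher(freqs, params, dparam=1e-6):
    npar = len(params)
    G = np.zeros((npar, npar))
    for f in freqs:
        Sig = Sigma(f, *params)
        inv = np.linalg.inv(Sig)
        dS = []                            # numerical derivatives dSigma/dlam_i
        for i in range(npar):
            p = list(params)
            p[i] += dparam
            dS.append((Sigma(f, *p) - Sig) / dparam)
        for i in range(npar):
            for j in range(npar):
                G[i, j] += np.trace(inv @ dS[i] @ inv @ dS[j])
    return G

freqs = np.linspace(1e-4, 1e-1, 2000)
print(fisher(freqs, [1.0]))   # 1x1 Fisher matrix for the overall noise level
```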
### Deterministic sources and noise cross-correlation
In the presence of a deterministic source, \(\tilde{h}(f_{k}|\vec{\mu})\), the derivative of the log-likelihood in Eq. (7) with respect to the source parameters, \(\vec{\mu}\), is:
\[\frac{\partial l}{\partial\mu^{i}}=\sum_{k=1}^{n}\frac{\Re\left[(\tilde{s}(f_{k})-\tilde{h}(f_{k}|\vec{\mu}))^{*}\,\frac{\partial\tilde{h}(f_{k}|\vec{\mu})}{\partial\mu^{i}}\right]}{\frac{T}{4}S_{n}(f_{k}|\vec{\lambda})}\,, \tag{162}\]
and the derivative with respect to the parameters characterising the spectral density, \(\vec{\lambda}\), is:
\[\frac{\partial l}{\partial\lambda^{i}}= \sum_{k=1}^{n}\Big{[}-\frac{1}{S_{n}(f_{k}|\vec{\lambda})}+\frac{|\tilde{s}(f_{k})-\tilde{h}(f_{k}|\vec{\mu})|^{2}}{\frac{T}{2}S_{n}(f_{k}|\vec{\lambda})^{2}}\Big{]}\frac{\partial S_{n}(f_{k}|\vec{\lambda})}{\partial\lambda^{i}}\,. \tag{163}\]
The first of these expressions is odd in the noise component, \(\tilde{n}(f_{k})=\tilde{s}(f_{k})-\tilde{h}(f_{k}|\vec{\mu})\), while the second term is even. Since \(\mathbb{E}[\tilde{n}(f_{k})]=0\), from this we deduce that
\[\mathbb{E}_{\mathcal{L}}\left[\frac{\partial l}{\partial\mu^{i}}\frac{ \partial l}{\partial\lambda^{j}}\right]=0, \tag{164}\]
i.e., at this level of approximation the terms in the Fisher matrix that mix signal and noise parameters vanish. We conclude that the estimation of the noise parameters and of the signal parameters is, at leading order, independent. Lack of knowledge of the noise should therefore not significantly affect measurements of the parameters of deterministic signals, except indirectly through the change in the spectral density that enters the likelihood for the deterministic sources.
## Appendix C Impact of instrumental noise knowledge uncertainty on SGWB recovery in the absence of the galactic foreground
Below we repeat the computations of Sec. III for the case in which the galactic foreground is not included.
#### Power law
Figure 18 shows the results computed for the power law model. We see that in the presence of instrumental noise uncertainties, the uncertainty in the SGWB parameters increases by a factor of \(\sim 19\)-36, with the uncertainty in the slope being more affected than that of the amplitude. Considering the raw uncertainties, to achieve the same measurement precision, the background energy density would have to be \(\sim 33\) times larger than it would need to be in the absence of noise knowledge uncertainties. However, a background with amplitude equal to the reference value should be detectable even allowing for confusion with instrumental noise mis-modelling.
#### Power law with running
The results for this model are shown in Figure 19. In this case we see that the uncertainties in the SGWB parameters increase by a factor of \(\sim 21\)-72, with the uncertainty on the log-amplitude being most affected in this case. Once again, the relative increase in the uncertainty is somewhat lower at higher background amplitudes. The lower panel of fig. 19 shows that the background is not detectable at the reference amplitude. An energy density \(\sim 5\) times higher would be required for a detection. In general, the background again has to have an energy density \(\sim 100\) times higher to be characterised with the same measurement precision when there is instrumental noise uncertainty as it could be without those uncertainties.
#### Gaussian bump
The results for this model are shown in Figure 20. In this case, the degradation in the precision of parameter measurement is a factor of \(\sim 2.3\)-4 when allowing for lack of knowledge of the instrumental noise. From the lower panel of Figure 20 we see that the energy density in a Gaussian bump SGWB has to be just a small factor of \(\sim 2.5\) times bigger to achieve the same measurement precision when the instrumental noise is not known perfectly. Moreover, the amplitude of a Gaussian bump background at the reference value can be measured to percent precision. The width of the Gaussian can be measured to a few tens of percent precision at the reference amplitude, improving approximately linearly with the background energy density.
Figure 18: Results for the power law SGWB model without foreground.

Figure 19: Results for the power-law with running SGWB model without foreground.

#### First order phase transition

The results for the first order phase transition are shown in Figure 21. The results for this SGWB model are quite similar to those for the power law background. When allowing for instrumental noise knowledge uncertainties, the precision with which the SGWB log-amplitude can be characterised degrades by a factor of \(\sim 18\). The degradation in the determination of the spectral index is even larger, \(\sim 50\). Once again, to achieve the same measurement precision, the background energy density would have to be \(\sim 50\) times larger than it would need to be in the absence of noise knowledge uncertainties. Nonetheless, a FOPT background at the reference amplitude would still be detectable and provide a measurement of the spectral index at the level of about 0.5 percent.
We can repeat the analysis of Sec. III.2 without including the foreground. The results for all four SGWB models are qualitatively similar among themselves and also to the previous case with foreground in Sec. III.2. Figure 22 considers the power law case, Fig. 23 the power law with running, Fig. 24 the Gaussian bump model and Fig. 25 the FOPT model.
Figure 21: Results for first order phase transition SGWB model without foreground.

Figure 22: As Figure 18, but now for fixed background amplitude and varying the variance of the Gaussian prior on the instrumental noise spline model. This plot is for a power law background, and the amplitude has been fixed such that the SNR in TDI channel A is 136 (continuous lines) and 43 (dashed lines).

The main conclusion from these results is that again, if we wanted to ensure that there was no degradation in LISA science due to lack of noise knowledge, the necessary requirement on the noise knowledge would be \(\ll 10\%\).
### Signal reconstruction without the foreground of Galactic binaries
We consider a power law signal with an SNR of 48.70 in Fig. 26. The three panels show the reconstructed ASDs for the SGWB, for the instrumental noise, and for the total, which is the sum of the two. No foreground has been considered in this case. In Fig. 27 we show corresponding results for a power law with a higher SNR of 862.
We see that our ability to reconstruct the signal component of the data stream is poor when the SNR is low. However, we are able to obtain good measurements of the instrumental noise and the total spectral density. We note that the total ASD reconstruction in Figure 26 is somewhat poorer than the noise-only component, which does not fit with the expectation that we are actually measuring the total. This happens due to the breakdown of the Fisher matrix approximation for the SGWB parameters in this case, because the SGWB parameter uncertainties are large and no longer in the linear signal regime. At higher SNR, we start to be able to reconstruct the SGWB more precisely, shown by a reduction in the scatter in Figure 27. As the SNR is increased we would expect the scatter to reduce further. The reconstruction of the noise spectral density is comparable to what is seen in the lower SNR case, but we would eventually expect it to degrade as the SGWB becomes more dominant in the data. The reconstruction of the total spectral density is similar to the low SNR case, as expected. However, this higher SNR case does not show the noise at low frequency that arises from the breakdown of the Fisher matrix approximation, presumably because the measurement uncertainties are within the linear regime in this case. What is interesting to notice in the total reconstruction is that the channels A and E are affected by the SGWB: between \(1\,\mathrm{mHz}\) and \(4\,\mathrm{mHz}\) the noise level deviates from the expected one shown in Fig. 2.
Figure 23: As Figure 22 but now for the power-law-with-running model. The background amplitude has been fixed to give an overall SNR of 145.
Figure 24: As Figure 22 but now for the Gaussian bump model. The background amplitude has been fixed to give an overall SNR of 135. |
2301.02054 | Positivity problem of three-term recurrence sequences | We present some necessary and/or sufficient conditions for the positivity
problem of three-term recurrence sequences. As applications we show the
positivity of diagonal Taylor coefficients of some rational functions in a
unified approach. We also establish a criterion for the positivity and
log-convexity of such sequences. | Yanni Pei, Yaling Wang, Yi Wang | 2023-01-05T13:05:03Z | http://arxiv.org/abs/2301.02054v1 | # Positivity problem of three-term recurrence sequences
###### Abstract
We present some necessary and/or sufficient conditions for the positivity problem of three-term recurrence sequences. As applications we show the positivity of diagonal Taylor coefficients of some rational functions in a unified approach. We also establish a criterion for the positivity and log-convexity of such sequences.
keywords: three-term recurrence sequence, totally nonnegative matrix, continued fraction, log-convex sequence, Apery-like number

MSC (2010): 05A20, 15B48, 40A15, 39A21
## 1 Introduction
Let \((u_{n})_{n\geq 0}\) be a sequence of real numbers satisfying the three-term recurrence relation
\[a(n)u_{n+1}=b(n)u_{n}-c(n)u_{n-1},\qquad n=1,2,\ldots, \tag{1.1}\]
where \(a(n),b(n),c(n)\) take positive values for all \(n\geq 1\). We say also that \(u_{n}\) is a _solution_ of the difference equation (1.1). The positivity problem naturally arises: in which case, the three-term recurrence sequence is positive? Such a problem is closely related to the total nonnegativity of matrices. Following [7], we say that a (finite or infinite) matrix is _totally nonnegative_ (TN for short), if its minors of all orders are nonnegative. We have the following characterization.
**Theorem 1.1** (Characterization).: _Let \(u_{n}\) be a solution of the difference equation (1.1). Then \((u_{n})_{n\geq 0}\) is positive if and only if \(u_{0}>0\) and the tridiagonal matrix_
\[M_{0}=\left(\begin{array}{cccccc}u_{1}&c(1)&&&&\\ u_{0}&b(1)&c(2)&&&\\ &a(1)&b(2)&c(3)&&\\ &&&a(2)&b(3)&\ddots\\ &&&&\ddots&\ddots\end{array}\right).\]
_is totally nonnegative._
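The characterization can be tested numerically: expanding along the last row shows that the \(k\)th leading principal minor of \(M_{0}\) equals \(a(1)\cdots a(k-1)u_{k}\), so the minors recover the sequence itself. A small sketch (ours), using the Apery recurrence (3.2) of Remark 3.6 below as test data:

```python
import numpy as np

# Coefficients of the Apery recurrence (3.2)
a = lambda n: (n + 1) ** 3
b = lambda n: (2 * n + 1) * (17 * n ** 2 + 17 * n + 5)
c = lambda n: n ** 3

u = [1, 5]                                  # Apery numbers A_0, A_1
for n in range(1, 6):
    u.append((b(n) * u[n] - c(n) * u[n - 1]) // a(n))

K = 5
M = np.zeros((K, K))
M[0, 0], M[1, 0] = u[1], u[0]
for i in range(1, K):
    M[i, i] = b(i)
for i in range(K - 1):
    M[i, i + 1] = c(i + 1)
for i in range(2, K):
    M[i, i - 1] = a(i - 1)

for k in range(1, K + 1):
    minor = np.linalg.det(M[:k, :k])
    scale = np.prod([a(j) for j in range(1, k)])
    print(k, round(minor / scale), u[k])    # the last two columns agree
```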
Our interest in the positivity problem of three-term recurrence sequences is motivated by the positivity of diagonal Taylor coefficients of multivariate rational functions (see Example 3.5). Some of diagonal coefficients are the so-called Apery-like numbers that satisfy three-term recurrence relations, in which \(a(n),b(n),c(n)\) are all quadratic polynomials in \(n\) or are all cubic polynomials in \(n\).
Throughout this paper, we always assume that \(a(n),b(n),c(n)\) in (1.1) are polynomials in \(n\) with the same degree \(\delta\) and
\[a(n)=an^{\delta}+a^{\prime}n^{\delta-1}+\cdots,\quad b(n)=bn^{\delta}+b^{\prime }n^{\delta-1}+\cdots,\quad c(n)=cn^{\delta}+c^{\prime}n^{\delta-1}+\cdots,\]
where the leading coefficients \(a,b,c\) are positive.
Following Elaydi [6], a nontrivial solution \(u_{n}\) of (1.1) is said to be _oscillatory_ (around zero) if for every positive integer \(N\) there exists \(n\geq N\) such that \(u_{n}u_{n+1}\leq 0\). Otherwise, the solution is said to be _nonoscillatory_. In other words, a solution is nonoscillatory if it is _eventually sign-definite_, i.e., either eventually positive or eventually negative. We say that a nontrivial solution \(u_{n}^{*}\) is a _minimal solution_ of (1.1) if \(\lim_{n\to+\infty}u_{n}^{*}/u_{n}=0\) for arbitrary solution \(u_{n}\) of (1.1) that is not a multiple of \(u_{n}^{*}\). Clearly, a minimal solution is unique up to a constant multiple. Minimal solutions play a central role in the convergence of continued fractions and the asymptotics of orthogonal polynomials. For convenience, we also write (1.1) as
\[u_{n+1}=\beta_{n}u_{n}-\gamma_{n}u_{n-1},\qquad n=1,2,\ldots \tag{1.2}\]
where \(\beta_{n}=b(n)/a(n)\) and \(\gamma_{n}=c(n)/a(n)\). For simplicity, we denote the continued fraction in a compact form
\[\frac{\gamma_{1}}{\beta_{1}-}\,\frac{\gamma_{2}}{\beta_{2}-}\,\frac{\gamma_{ 3}}{\beta_{3}-}\,\cdots:=\frac{\gamma_{1}}{\beta_{1}-\frac{\gamma_{2}}{\beta_ {2}-\frac{\gamma_{3}}{\beta_{3}-\cdots}}} \tag{1.3}\]
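For later use (e.g. the limit \(\rho_{0}\) appearing in Theorem 1.2 below), the continued fraction (1.3) can be evaluated by backward recursion from a finite truncation depth. A minimal sketch (ours), illustrated on the constant-coefficient case where the limit is the smaller characteristic root:

```python
def cf(beta, gamma, N):
    """Evaluate gamma_1/(beta_1 - gamma_2/(beta_2 - ...)) truncated at depth N."""
    tail = 0.0
    for n in range(N, 0, -1):
        tail = gamma(n) / (beta(n) - tail)
    return tail

# Constant coefficients a=1, b=5, c=4: b^2 > 4ac, lambda_1 = 1, lambda_2 = 4
rho0 = cf(lambda n: 5.0, lambda n: 4.0, N=60)
print(rho0)   # -> 1.0, the smaller root lambda_1 (cf. Corollary 3.3)
```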
**Theorem 1.2** (Necessity).: _Let \((u_{n})_{n\geq 0}\) be a solution of (1.1)._
* _If_ \((u_{n})\) _is eventually sign-definite, then_ \(b^{2}\geq 4ac\)_._
* _If_ \((u_{n})_{n\geq 0}\) _is positive, then the continued fraction (_1.3_) converges to a finite positive limit_ \(\rho_{0}\) _and_ \(u_{1}\geq\rho_{0}u_{0}\)_. Moreover, the solution_ \((u_{n}^{*})_{n\geq 0}\) _of (_1.1_) decided by_ \(u_{0}^{*}=1\) _and_ \(u_{1}^{*}=\rho_{0}\) _is a positive and minimal solution of (_1.1_)._
**Theorem 1.3** (Sufficiency).: _If \(b^{2}>4ac\), then each nontrivial solution \((u_{n})\) of (1.1) is eventually sign-definite._
Denote the characteristic polynomial of the difference equation (1.1) by \(Q(\lambda)=a\lambda^{2}-b\lambda+c\) and the characteristic roots by
\[\lambda_{1}=\frac{b-\sqrt{b^{2}-4ac}}{2a},\quad\lambda_{2}=\frac{b+\sqrt{b^{2 }-4ac}}{2a}.\]
Denote \(Q_{n}(\lambda):=a(n)\lambda^{2}-b(n)\lambda+c(n)\). Then \(Q_{n}(\lambda)=Q(\lambda)n^{\delta}+\cdots\). Assume that \(b^{2}>4ac\). Then for \(\lambda_{1}<\lambda_{0}<\lambda_{2}\), we have \(Q(\lambda_{0})<0\), and so \(Q_{n}(\lambda_{0})<0\) for sufficiently large \(n\).
**Theorem 1.4** (Criterion).: _Let \((u_{n})_{n\geq 0}\) be a solution of (1.1). Assume that there exists a positive number \(\lambda_{0}\) such that \(Q_{n}(\lambda_{0})\leq 0\) for all \(n\geq m\) and \(u_{m+1}\geq\lambda_{0}u_{m}>0\). Then \((u_{n})_{n\geq m}\) is positive._
Clearly, \(Q_{n}(\lambda_{0})\leq 0\) for all \(n\geq m\) implies that \(Q(\lambda_{0})\leq 0\), and so that \(b^{2}\geq 4ac\) and \(\lambda_{1}\leq\lambda_{0}\leq\lambda_{2}\). In this sense, the conditions in Theorem 1.4 are "almost" necessary.
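In practice, the hypotheses of Theorem 1.4 can be probed numerically before attempting a proof: once \(Q(\lambda_{0})=0\), the sign of \(Q_{n}(\lambda_{0})\) for large \(n\) is governed by the lower-order coefficients. A sketch (ours) for the Apery recurrence (3.2) of Remark 3.6, taking \(\lambda_{0}=\lambda_{1}\); the script only probes a finite range of \(n\), so it is evidence rather than a proof:

```python
import math

# Coefficients of the Apery recurrence (3.2); here a = 1, b = 34, c = 1
a = lambda n: (n + 1) ** 3
b = lambda n: (2 * n + 1) * (17 * n ** 2 + 17 * n + 5)
c = lambda n: n ** 3

lam1 = (34 - math.sqrt(34 ** 2 - 4)) / 2          # smaller characteristic root
Q = lambda n, lam: a(n) * lam ** 2 - b(n) * lam + c(n)

assert all(Q(n, lam1) <= 0 for n in range(1, 10_000))
assert 5 >= lam1 * 1                              # u_1 >= lambda_1 u_0
print("Theorem 1.4 applies: the Apery numbers are positive.")
```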
This paper is organized as follows. In SS2, we show Theorems 1.1 and 1.2 by means of the total nonnegativity of tridiagonal matrices and the theory of continued fractions. In SS3, we first present the proofs of Theorems 1.3 and 1.4, and then apply them to show the positivity of diagonal Taylor coefficients of some famous rational functions. We also establish a criterion for the positivity and log-convexity of three-term recurrence sequences. In SS4, we illustrate that the difference equation (1.1) may be either oscillatory or nonoscillatory in the case \(b^{2}=4ac\). We also propose a couple problems for further work.
## 2 Proof of Theorems 1.1 and 1.2
Asymptotic behavior of solutions of second-order difference equations has been extensively and deeply investigated (see [6, Chapter 8] for instance). However, as will be seen below, the total nonnegativity of (tridiagonal) matrices is a more natural approach to the positivity problem.
Following [7], we say that a (finite or infinite) matrix is _totally nonnegative_ (TN for short), if its minors of all orders are nonnegative. Let \((a_{n})_{n\geq 0}\) be an infinite sequence of nonnegative numbers. It is called a _Polya frequency_ (PF for short) sequence if the associated Toeplitz matrix
\[[a_{i-j}]_{i,j\geq 0}=\left[\begin{array}{ccccc}a_{0}&&&&\\ a_{1}&a_{0}&&&\\ a_{2}&a_{1}&a_{0}&&\\ a_{3}&a_{2}&a_{1}&a_{0}&\\ \vdots&&&&\ddots\end{array}\right]\]
is TN. We say that a finite sequence \((a_{0},a_{1},\ldots,a_{n})\) is PF if the corresponding infinite sequence \((a_{0},a_{1},\ldots,a_{n},0,0,\ldots)\) is PF. A classical result of Aissen, Schoenberg and Whitney states that a finite sequence of nonnegative numbers is PF if and only if its generating function has only real zeros (see [10, p. 399] for instance). For example, the sequence \((r,s,t)\) of nonnegative numbers is PF if and only if \(s^{2}\geq 4rt\).
To prove Theorem 1.1, we need the following result (see [16, Example 2.2, p.149] for instance).
**Lemma 2.1**.: _An irreducible nonnegative tridiagonal matrix is totally nonnegative if and only if all its leading principal minors are positive._
Proof of Theorem 1.1.: Let \((u_{n})_{n\geq 0}\) be a solution of the difference equation (1.2). Then \(u_{n+1}=\beta_{n}u_{n}-\gamma_{n}u_{n-1}\), where \(\beta_{n}=b(n)/a(n)\) and \(\gamma_{n}=c(n)/a(n)\). Denote the infinite tridiagonal matrix
\[M_{1}=\left(\begin{array}{ccccc}u_{1}&\gamma_{1}&&&\\ u_{0}&\beta_{1}&\gamma_{2}&&&\\ &1&\beta_{2}&\gamma_{3}&\\ &&&1&\ddots&\ddots\\ &&&\ddots&\ddots\end{array}\right).\]
Then for \(n\geq 1\), the \(n\)th leading principal minor of \(M_{1}\) is precisely \(u_{n}\), since they satisfy the same three-term recurrence relation. So, if \(u_{0}\) and \(\gamma_{n}\) are positive for all \(n\geq 1\), then the sequence \((u_{n})_{n\geq 1}\) is positive if and only if the tridiagonal matrix \(M_{1}\) is totally nonnegative by Lemma 2.1. Clearly, \(M_{0}\) is TN if and only if \(M_{1}\) is TN. Hence the positivity of the sequence \((u_{n})_{n\geq 1}\) is equivalent to the total nonnegativity of the tridiagonal matrix \(M_{0}\). This completes the proof of Theorem 1.1.
There are characterizations for the total nonnegativity of tridiagonal matrices besides Lemma 2.1.
**Lemma 2.2** ([16, Example 2.1, p.147]).: _A nonnegative tridiagonal matrix is totally nonnegative if and only if all its principal minors are nonnegative._
**Lemma 2.3** ([18, Theorem 4.3]).: _A nonnegative tridiagonal matrix is totally nonnegative if and only if all its principal minors containing consecutive rows and columns are nonnegative._
We also refer the reader to [3, 4, 12, 23] for some criteria for the total nonnegativity of tridiagonal matrices.
**Proof of Theorem 1.2 (i).** Clearly, it suffices to consider the case where the whole sequence \((u_{n})_{n\geq 0}\) is positive. By Theorem 1.1, to prove Theorem 1.2 (i), it suffices to prove that the total nonnegativity of the matrix \(M_{0}\) implies \(b^{2}\geq 4ac\). In other words, we need to prove that the sequence \((c,b,a)\) is a Polya frequency sequence, or equivalently, that the tridiagonal matrix
\[\left(\begin{array}{ccccc}b&c&&&\\ a&b&c&&\\ &a&b&\ddots\\ &&&\ddots&\ddots\end{array}\right)\]
is totally nonnegative. By Lemma 2.3, it suffices to show that the determinants
\[D_{k}=\det\left(\begin{array}{ccccc}b&c&&&\\ a&b&c&&\\ &a&b&\ddots&\\ &&\ddots&\ddots&c\\ &&&a&b\end{array}\right)_{k\times k}\]
are nonnegative for all \(k\geq 1\).
Suppose the contrary and assume that \(D_{m}<0\) for some \(m\geq 1\). Consider the determinants
\[D_{m}(n)=\det\left(\begin{array}{ccccc}b(n+1)&c(n+2)&&&\\ a(n+1)&b(n+2)&c(n+3)&&\\ &a(n+2)&b(n+3)&\ddots&\\ &&&\ddots&\ddots&c(n+m)\\ &&&a(n+m-1)&b(n+m)\end{array}\right)_{m\times m}.\]
Clearly, \(D_{m}(n)\geq 0\) for all \(n\geq 0\) since they are minors of the totally nonnegative matrix \(M_{0}\). On the other hand, note that \(D_{m}(n)\) are polynomials in \(n\) of degree \(m\delta\) with the leading coefficient \(D_{m}\):
\[D_{m}(n)=D_{m}n^{m\delta}+\cdots.\]
It follows that \(D_{m}(n)<0\) for sufficiently large \(n\), a contradiction.
Thus \(D_{k}\geq 0\) for all \(k\geq 1\), as desired. This completes the proof of Theorem 1.2 (i).
To prove Theorem 1.2 (ii), we need the following classical determinant evaluation rule.
**Desnanot-Jacobi Determinant Identity**.: _Let the matrix \(M=[m_{ij}]_{0\leq i,j\leq k}\). Then_
\[\det M\cdot\det M_{0,k}^{0,k}=\det M_{k}^{k}\cdot\det M_{0}^{0}-\det M_{0}^{k }\cdot\det M_{k}^{0},\]
_where \(M_{J}^{I}\) denote the submatrix obtained from \(M\) by deleting those rows in \(I\) and columns in \(J\)._
Let \(\beta=(\beta_{n})_{n\geq 0}\) and \(\gamma=(\gamma_{n})_{n\geq 1}\) be two sequences of positive numbers. Denote
\[J_{i}=\left(\begin{array}{ccccc}\beta_{i}&\gamma_{i+1}&&&\\ 1&\beta_{i+1}&\gamma_{i+2}&&\\ &1&\beta_{i+2}&\gamma_{i+3}&\\ &&1&\beta_{i+3}&\ddots\\ &&&\ddots&\ddots\end{array}\right),\quad i=0,1,2,\ldots.\]
**Lemma 2.4**.: _If the tridiagonal matrix \(J_{0}\) is totally nonnegative, then the continued fraction_
\[\beta_{0}-\frac{\gamma_{1}}{\beta_{1}-}\ \frac{\gamma_{2}}{\beta_{2}-}\ \frac{\gamma_{3}}{\beta_{3}-}\ \frac{\gamma_{4}}{\beta_{4}-}\ \dots \tag{2.1}\]
_is convergent._
Proof.: Let \(A(n)\) and \(B(n)\) be the \(n\)th partial numerator and the \(n\)th partial denominator of the continued fraction (2.1). Then we have
\[A(n)=\beta_{n}A(n-1)-\gamma_{n}A(n-2),\ \ \ \ \ A(-1)=1,\ A(0)=\beta_{0};\] \[B(n)=\beta_{n}B(n-1)-\gamma_{n}B(n-2),\ \ \ \ \ B(-1)=0,\ B(0)=1\]
by the fundamental recurrence formula for continued fractions (see [6, Theorem 9.2] for instance). To show that the continued fraction (2.1) is convergent, it suffices to show that \(A(n)/B(n)\) is convergent.
For \(n\geq i\geq 0\), denote
\[u_{i,n}=\det\left(\begin{array}{ccccc}\beta_{i}&\gamma_{i+1}&&&\\ 1&\beta_{i+1}&\gamma_{i+2}&&\\ &1&\beta_{i+2}&\ddots&\\ &&\ddots&\ddots&\gamma_{n}\\ &&&1&\beta_{n}\end{array}\right).\]
If \(J_{0}\) is TN, then so is \(J_{i}\) for each \(i\geq 0\). Thus \(u_{i,n}>0\) by Lemma 2.1.
Applying the Desnanot-Jacobi determinant identity to the determinant \(u_{i,n+1}\), we obtain
\[u_{i,n+1}u_{i+1,n}=u_{i+1,n+1}u_{i,n}-\gamma_{i+1}\cdots\gamma_{n}.\]
It follows that \(u_{i,n+1}u_{i+1,n}<u_{i+1,n+1}u_{i,n}\). Thus \(u_{i,n}/u_{i+1,n}\) is decreasing in \(n\) and is therefore convergent. Let \(\lim_{n\to+\infty}u_{i,n}/u_{i+1,n}=\ell_{i}\). Clearly, \(\ell_{i}\geq 0\). Note that \(A(n)=u_{0,n}\) and \(B(n)=u_{1,n}\). Hence \(A(n)/B(n)\) is convergent, and \(\lim_{n\to+\infty}A(n)/B(n)=\ell_{0}\).
**Remark 2.5**.: We have showed that the continued fraction (2.1) converges to \(\ell_{0}\). More generally, we have
\[\ell_{i}=\beta_{i}-\frac{\gamma_{i+1}}{\beta_{i+1}-}\ \frac{\gamma_{i+2}}{ \beta_{i+2}-}\ \frac{\gamma_{i+3}}{\beta_{i+3}-}\ \frac{\gamma_{i+4}}{\beta_{i+4}-}\ \dots \tag{2.2}\]
for \(i\geq 0\). Clearly, \(\ell_{i}=\beta_{i}-\frac{\gamma_{i+1}}{\ell_{i+1}}\). Hence \(\ell_{i+1}\neq 0\), and so \(\ell_{i+1}>0\) for \(i\geq 0\). Denote
\[\rho_{i}=\frac{\gamma_{i+1}}{\beta_{i+1}-}\ \frac{\gamma_{i+2}}{\beta_{i+2}-}\ \frac{\gamma_{i+3}}{\beta_{i+3}-}\ \frac{\gamma_{i+4}}{\beta_{i+4}-}\ \dots. \tag{2.3}\]
Then \(\rho_{i}=\frac{\gamma_{i+1}}{\ell_{i+1}}\). Thus \(\rho_{i}>0\) for \(i\geq 0\). On the other hand, \(\ell_{i}=\beta_{i}-\rho_{i}\). Hence \(\beta_{0}\geq\rho_{0}\) and \(\beta_{i+1}>\rho_{i+1}\) for \(i\geq 0\).
The following classic result was given by Pincherle in his fundamental work on continued fractions (see [6, Theorem 9.5] for instance).
**Pincherle Theorem**.: _The continued fraction_
\[\frac{\gamma_{1}}{\beta_{1}-}\ \frac{\gamma_{2}}{\beta_{2}-}\ \frac{\gamma_{3}}{\beta_{3}-}\ \frac{\gamma_{4}}{\beta_{4}-}\ \dots\]
_converges if and only if the difference equation \(u_{n+1}=\beta_{n}u_{n}-\gamma_{n}u_{n-1}\) has a minimal solution \(u_{n}^{*}\) with \(u_{0}^{*}=1\). In case of convergence, moreover, one has_
\[\frac{u_{n+1}^{*}}{u_{n}^{*}}=\frac{\gamma_{n+1}}{\beta_{n+1}-}\ \frac{\gamma_{n+2}}{\beta_{n+2}-}\ \frac{\gamma_{n+3}}{\beta_{n+3}-}\ \frac{\gamma_{n+4}}{\beta_{n+4}-}\ \dots\.\]
**Proof of Theorem 1.2 (ii).** Let \((u_{n})_{n\geq 0}\) be a positive solution of the difference equation \(u_{n+1}=\beta_{n}u_{n}-\gamma_{n}u_{n-1}\) and \(\beta_{0}=u_{1}/u_{0}\). Then the tridiagonal matrix \(J_{0}\) is totally nonnegative. By Remark 2.5, we have \(\beta_{0}\geq\rho_{0}>0\), and so \(u_{1}\geq\rho_{0}u_{0}\).
On the other hand, we have \(u_{n+1}^{*}=\rho_{n}u_{n}^{*}\) by Lemma 2.4 and Pincherle Theorem, and \(\rho_{n}>0\) again by Remark 2.5. Thus the solution \((u_{n}^{*})_{n\geq 0}\) of (1.1) decided by \(u_{0}^{*}=1\) and \(u_{1}^{*}=\rho_{0}\) is a positive and minimal solution of (1.1). This completes the proof of Theorem 1.2 (ii).
## 3 Proofs and applications of Theorems 1.3 and 1.4
We say that (1.1) is a difference equation of _Poincare type_ in the sense that both the sequences \(b(n)/a(n)\) and \(c(n)/a(n)\) have finite limit. The following Poincare theorem marks the beginning of research in the qualitative theory of linear difference equations (see [6, Theorem 8.9] for instance).
**Poincare Theorem**.: _Suppose that (1.1) is a difference equation of Poincare type and that the characteristic roots have distinct moduli. If \(u_{n}\) is a solution of (1.1), then either \(u_{n}=0\) for all large \(n\), or \(\lim_{n\to+\infty}\frac{u_{n+1}}{u_{n}}=\lambda_{i}\) for some characteristic root \(\lambda_{i}\)._
**Proof of Theorem 1.3.** Since \(c(n)\neq 0\), a nontrivial solution cannot vanish at two consecutive indices, and so cannot be zero for all large \(n\). By Poincare theorem, \(u_{n+1}/u_{n}\to\lambda_{i}\) for some \(i\). Now \(0<\lambda_{1}<\lambda_{2}\). Hence there exists a positive integer \(N\) such that \(u_{n+1}/u_{n}>0\) for \(n\geq N\). The sequence \((u_{n})\) is therefore eventually sign-definite.
**Proof of Theorem 1.4.** Assume that \(u_{n}\geq\lambda_{0}u_{n-1}>0\). Then by (1.1),
\[u_{n+1}=\frac{b(n)}{a(n)}u_{n}-\frac{c(n)}{a(n)}u_{n-1}\geq\frac{b(n)}{a(n)}u_ {n}-\frac{c(n)}{a(n)}\frac{u_{n}}{\lambda_{0}}=\left[\frac{b(n)\lambda_{0}-c(n )}{a(n)\lambda_{0}}\right]u_{n}\geq\lambda_{0}u_{n}>0.\]
Thus \((u_{n})_{n\geq m}\) is positive by induction.
A special case of Theorem 1.4 of particular interest is the following.
**Corollary 3.1**.: _If \(b(n)\geq a(n)+c(n)\) for all \(n\geq 1\) and \(u_{1}\geq u_{0}>0\), then \((u_{n})_{n\geq 0}\) is positive._
Proof.: The statement follows from Theorem 1.4 by taking \(\lambda_{0}=1\).
A preferred candidate for \(\lambda_{0}\) in Theorem 1.4 is \(\lambda_{1}\). Note that \(Q_{n}(\lambda_{1})\) is a polynomial in \(n\) of degree less than \(\delta\) and is easier to estimate.
**Corollary 3.2**.: _Suppose that \((an+a_{0})u_{n+1}=(bn+b_{0})u_{n}-(cn+c_{0})u_{n-1}\). Then \((u_{n})_{n\geq 0}\) is positive if \(b^{2}\geq 4ac\), \(a_{0}\lambda_{1}^{2}-b_{0}\lambda_{1}+c_{0}\leq 0\), and \(u_{1}\geq\lambda_{1}u_{0}\)._
Proof.: We have \(Q_{n}(\lambda_{1})=a_{0}\lambda_{1}^{2}-b_{0}\lambda_{1}+c_{0}\), and so the statement follows from Theorem 1.4 by taking \(\lambda_{0}=\lambda_{1}\).
The following folklore result is an immediate consequence of Theorem 1.2 and Theorem 1.4, which can be found in [8] for instance.
**Corollary 3.3**.: _Suppose that \(au_{n+1}=bu_{n}-cu_{n-1}\), where \(a,b,c\) are positive numbers. Then \((u_{n})_{n\geq 0}\) is positive if and only if \(b^{2}\geq 4ac\) and \(u_{1}\geq\lambda_{1}u_{0}>0\)._
Proof.: The "if" part follows from Theorem 1.4. Now assume that \((u_{n})_{n\geq 0}\) is positive. Then \(b^{2}\geq 4ac\) and \(u_{1}\geq\rho u_{0}\) from Theorem 1.2, where \(\beta=b/a,\gamma=c/a\) and
\[\rho=\frac{\gamma}{\beta-}\ \frac{\gamma}{\beta-}\ \frac{\gamma}{\beta-}\ \frac{ \gamma}{\beta-}\ \cdots.\]
It follows that \(\rho=\frac{\beta-\sqrt{\beta^{2}-4\gamma}}{2}=\frac{b-\sqrt{b^{2}-4ac}}{2a}= \lambda_{1}\). This completes the proof of the "only if" part.
**Example 3.4**.: Let \(b,c>0\) and consider the rational function
\[\frac{1}{1-bx+cx^{2}}=\sum_{n\geq 0}u_{n}x^{n}.\]
Then \(u_{0}=1,u_{1}=b\) and \(u_{n+1}=bu_{n}-cu_{n-1}\). Thus all \(u_{n}\) are positive if and only if \(b^{2}\geq 4c\), a folklore result.
Similarly, let \(b,c,d>0\) and
\[\frac{1-dx}{1-bx+cx^{2}}=\sum_{n\geq 0}u_{n}x^{n}.\]
Then \(u_{0}=1,u_{1}=b-d\) and \(u_{n+1}=bu_{n}-cu_{n-1}\). Thus all \(u_{n}\) are positive if and only if \(b^{2}\geq 4c\) and \(d\leq(b+\sqrt{b^{2}-4c})/2\).
**Example 3.5**.: The question of determining whether the Taylor coefficients of a given rational function are all positive has been investigated by many authors [1, 2, 11, 17, 19, 21, 22]. In order to show the positivity of such rational functions, it is necessary, and sometimes even sufficient, to prove that the diagonal Taylor coefficients are positive. The diagonal coefficients of some important rational functions are arithmetically interesting sequences and satisfy three-term recurrence relations. Straub and Zudilin [22] showed that these diagonal coefficients are positive by expressing them in terms of known hypergeometric summations. Here we show their positivity from the viewpoint of three-term recurrence sequences.
(1) Consider the rational function
\[\frac{1}{1-(x+y)+axy}=\sum_{n,m\geq 0}u_{n,m}x^{n}y^{m}. \tag{3.1}\]
The diagonal terms \(u_{n}:=u_{n,n}\) of the Taylor expansion satisfy the recurrence relation
\[(n+1)u_{n+1}=(2-a)(2n+1)u_{n}-a^{2}nu_{n-1}\]
with \(u_{0}=1\) and \(u_{1}=2-a\). The characteristic polynomial is \(Q(\lambda)=\lambda^{2}-2(2-a)\lambda+a^{2}\), with discriminant \(\Delta=16(1-a)\). If \((u_{n})\) is positive, then \(\Delta\geq 0\) by Theorem 1.2, i.e., \(a\leq 1\). Conversely, if \(a\leq 1\), then \(\lambda_{1}=2-a-2\sqrt{1-a}\). Clearly, \(u_{1}=2-a\geq\lambda_{1}=\lambda_{1}u_{0}\) and \(Q_{n}(\lambda_{1})=\lambda_{1}[\lambda_{1}-(2-a)]\leq 0\). It follows that \((u_{n})_{n\geq 0}\) is positive from Theorem 1.4 by taking \(\lambda_{0}=\lambda_{1}\). Thus we conclude that \((u_{n})_{n\geq 0}\) is positive if and only if \(a\leq 1\). It is also known that
\[u_{n}=\sum_{k=0}^{n}\frac{(2n-k)!}{k!(n-k)!^{2}}(-a)^{k}.\]
The positivity is not apparent when \(0<a\leq 1\).
Straub [21, Proposition 4] showed that \(u_{n,m}\) in (3.1) are all positive if and only if \(a\leq 1\). In other words, the rational function (3.1) is positive if and only if its diagonal terms are positive.
(2) Consider the Szego rational function
\[S(x,y,z)=\frac{1}{1-(x+y+z)+\frac{3}{4}(xy+yz+zx)}.\]
Denote the diagonal terms \(s_{n}=[(xyz)^{n}]S(2x,2y,2z)\). It is known that
\[s_{n}=\sum_{k=0}^{n}(-27)^{n-k}2^{2k-n}\frac{(3k)!}{k!^{3}}\binom{k}{n-k},\]
the positivity is not apparent here. On the other hand, the diagonal terms satisfy the three-term recurrence relation
\[2(n+1)^{2}s_{n+1}=3(27n^{2}+27n+8)s_{n}-81(3n-1)(3n+1)s_{n-1},\]
with \(s_{0}=1,s_{1}=12\) and \(s_{2}=198\). The characteristic equation \(2\lambda^{2}-81\lambda+729=0\) has two roots \(\lambda_{1}=27/2\) and \(\lambda_{2}=27\). Also, \(s_{2}>\lambda_{1}s_{1}\) and
\[Q_{n}(\lambda_{1})=2(2n+1)\lambda_{1}^{2}-3(27n+8)\lambda_{1}-81=-\frac{729}{ 2}n-\frac{81}{2}<0\]
for \(n\geq 1\). The positivity of \((s_{n})_{n\geq 1}\) follows from Theorem 1.4 by taking \(\lambda_{0}=\lambda_{1}\). Thus the total sequence \((s_{n})_{n\geq 0}\) is positive.
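The agreement between this closed form and the recurrence, as well as the positivity itself, are easy to check numerically despite the alternating signs in the sum; a sketch (ours):

```python
from math import comb, factorial

def s_closed(n):
    # Terms with k < n/2 vanish because of the binomial factor comb(k, n-k)
    return sum((-27) ** (n - k) * 2 ** (2 * k - n)
               * (factorial(3 * k) // factorial(k) ** 3) * comb(k, n - k)
               for k in range((n + 1) // 2, n + 1))

s = [1, 12]
for n in range(1, 12):
    num = 3 * (27 * n * n + 27 * n + 8) * s[n] \
        - 81 * (3 * n - 1) * (3 * n + 1) * s[n - 1]
    s.append(num // (2 * (n + 1) ** 2))

print(all(s[n] == s_closed(n) and s[n] > 0 for n in range(12)))   # True
```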
(3) Consider the Lewy-Askey rational function
\[h(x,y,z,w)=\frac{1}{1-(x+y+z+w)+\frac{2}{3}(xy+xz+xw+yz+yw+zw)}.\]
Let \(t_{n}=9^{n}[(xyzw)^{n}]h(x,y,z,w)\) and write \(t_{n}={2n\choose n}h_{n}\). Then \(h_{0}=1,h_{1}=24\) and
\[3(n+1)^{2}h_{n+1}=4(28n^{2}+28n+9)h_{n}-64(4n-1)(4n+1)h_{n-1}.\]
The characteristic equation \(3\lambda^{2}-112\lambda+1024=0\) has two roots \(\lambda_{1}=16\) and \(\lambda_{2}=64/3\). Also,
\[Q_{n}(\lambda_{1})=3(2n+1)\lambda_{1}^{2}-4(28n+9)\lambda_{1}-64=-256(n-1)-128<0\]
for \(n\geq 1\), and \(h_{1}>\lambda_{1}h_{0}\). The positivity of \((h_{n})_{n\geq 0}\) follows from Theorem 1.4 by taking \(\lambda_{0}=\lambda_{1}\).
(4) Consider the Kauers-Zeilberger rational function
\[D(x,y,z,w)=\frac{1}{1-(x+y+z+w)+2(xyz+xyw+xzw+yzw)+4xyzw}.\]
Let \(d_{n}=[(xyzw)^{n}]D(x,y,z,w)\). Then
\[(n+1)^{3}d_{n+1}=4(2n+1)(3n^{2}+3n+1)d_{n}-16n^{3}d_{n-1}\]
with \(d_{0}=1\) and \(d_{1}=4\). The characteristic equation \(\lambda^{2}-24\lambda+16=0\) has two roots \(\lambda_{1}=12-8\sqrt{2}<1<\lambda_{2}=12+8\sqrt{2}\). The positivity of \((d_{n})_{n\geq 0}\) follows from Theorem 1.4 by taking \(\lambda_{0}=1\) since \(b(n)\geq a(n)+c(n)\).
**Remark 3.6**.: The Apery numbers
\[A_{n}=\sum_{k=0}^{n}{n\choose k}^{2}{n+k\choose k}^{2}\]
play an important role in Apery's proof of the irrationality of \(\zeta(3)=\sum_{n\geq 1}1/n^{3}\). The Apery numbers are diagonal Taylor coefficients of the rational function
\[\frac{1}{1-(xyzw+xyw+xy+xz+zw+y+z)}\]
and satisfy the three-term recurrence relation
\[(n+1)^{3}A_{n+1}=(2n+1)(17n^{2}+17n+5)A_{n}-n^{3}A_{n-1}. \tag{3.2}\]
We refer the reader to [20, A005259] and references therein for the Apery numbers. The Apery numbers are closely related to modular forms or supercongruences and have been generalized to various Apery-like numbers, which satisfy three-term recurrence relations similar to (3.2) (see [5, 15] for instance). The diagonal terms \(s_{n},h_{n}\) and \(d_{n}\) in Example 3.5 are all Apery-like numbers. Not all Apery-like numbers are positive. For example, consider the Apery-like numbers \((u_{n})_{n\geq 0}\) defined by
\[u_{n}=\sum_{k=0}^{\lfloor n/3\rfloor}(-1)^{k}3^{n-3k}\binom{n}{3k}\frac{(3k)!}{ k!^{3}},\]
which are diagonal Taylor coefficients of the rational function
\[\frac{1}{1+x^{3}+y^{3}+z^{3}-3xyz}\]
and satisfy the recurrence relation
\[(n+1)^{2}u_{n+1}=(9n^{2}+9n+3)u_{n}-27n^{2}u_{n-1}.\]
See [20, A006077] and references therein. Note that the discriminant of the characteristic equation \(\lambda^{2}-9\lambda+27=0\) is negative. Hence the sequence \((u_{n})\) is oscillatory by Theorem 1.2 (i).
A sequence \((u_{n})_{n\geq 0}\) of positive numbers is said to be _log-convex_ if \(u_{n-1}u_{n+1}\geq u_{n}^{2}\) for all \(n\geq 1\). The log-convexity of combinatorial sequences has been extensively investigated (see [14] for instance). Here we present a new criterion, which can be used simultaneously for the positivity and log-convexity of three-term recurrence sequences.
Denote
\[B(n)=\left|\begin{array}{cc}b(n+1)&b(n)\\ a(n+1)&a(n)\end{array}\right|=Bn^{2\delta-2}+\cdots,\quad C(n)=\left|\begin{array} []{cc}c(n+1)&c(n)\\ a(n+1)&a(n)\end{array}\right|=Cn^{2\delta-2}+\cdots\]
where
\[B=\left|\begin{array}{cc}b&b^{\prime}\\ a&a^{\prime}\end{array}\right|=ba^{\prime}-b^{\prime}a,\quad C=\left|\begin{array} []{cc}c&c^{\prime}\\ a&a^{\prime}\end{array}\right|=ca^{\prime}-c^{\prime}a.\]
**Proposition 3.7** (Log-convexity).: _Let \((u_{n})_{n\geq 0}\) be a sequence satisfying the recurrence relation (1.1). Suppose that \(B,C>0\) and let \(\lambda_{0}=C/B\)._
* _Assume that_ \(u_{1}\geq\lambda_{0}u_{0}>0\) _and_ \(Q_{n}(\lambda_{0})\leq 0\) _for_ \(n\geq 1\)_. Then the sequence_ \((u_{n})_{n\geq 0}\) _is positive._
* _Assume that the sequence_ \((u_{n})_{n\geq 0}\) _is positive and_ \(CB(n)\geq BC(n)\geq 0\) _for_ \(n\geq 1\)_. If_ \(u_{2}/u_{1}\geq u_{1}/u_{0}\geq\lambda_{0}\)_, then the sequence_ \((u_{n})_{n\geq 0}\) _is log-convex._
Proof.: (i) The positivity of \((u_{n})_{n\geq 0}\) is obvious by Theorem 1.4.
(ii) Let \(x_{n}=u_{n+1}/u_{n}\) for \(n\geq 0\). Then \((u_{n})_{n\geq 0}\) is log-convex if and only if \((x_{n})_{n\geq 0}\) is nondecreasing. We next show that \(x_{n+1}\geq x_{n}\geq\lambda_{0}\) for \(n\geq 0\). We proceed by induction on \(n\). Clearly, \(x_{1}\geq x_{0}\geq\lambda_{0}\). Assume now that \(x_{n}\geq x_{n-1}\geq\lambda_{0}\). We need to show that \(x_{n+1}\geq x_{n}\geq\lambda_{0}\).
By the recurrence relation (1.1), we have
\[x_{n}=\frac{b(n)}{a(n)}-\frac{c(n)}{a(n)}\frac{1}{x_{n-1}}. \tag{3.3}\]
Thus
\[x_{n+1}-x_{n} = \left[\left(\frac{b(n+1)}{a(n+1)}-\frac{b(n)}{a(n)}\right)-\left( \frac{c(n+1)}{a(n+1)}-\frac{c(n)}{a(n)}\right)\frac{1}{x_{n}}\right]+\frac{c(n) }{a(n)}\left(\frac{1}{x_{n-1}}-\frac{1}{x_{n}}\right) \tag{3.4}\] \[= \frac{B(n)x_{n}-C(n)}{a(n+1)a(n)x_{n}}+\frac{c(n)}{a(n)}\left( \frac{1}{x_{n-1}}-\frac{1}{x_{n}}\right).\]
By the assumption \(x_{n}\geq\lambda_{0}\) and the condition \(CB(n)\geq BC(n)\), we obtain \(B(n)x_{n}\geq B(n)\lambda_{0}\geq C(n)\). It follows from (3.4) that \(x_{n+1}\geq x_{n}\), as required. Thus the sequence \((x_{n})_{n\geq 0}\) is nondecreasing, and the sequence \((u_{n})_{n\geq 0}\) is therefore log-convex.
By means of Proposition 3.7, we may prove the log-convexity of the diagonal terms \(s_{n},h_{n},d_{n}\) in Example 3.5, as well as that of the Apery numbers \(A_{n}\). We omit the proofs for brevity. Instead we give a somewhat more complex example to illustrate Proposition 3.7.
**Example 3.8**.: Consider the Apery-like numbers \((u_{n})_{n\geq 0}\) defined by
\[(n+1)^{3}u_{n+1}=(2n+1)(14n^{2}+14n+6)u_{n}-n(192n^{2}-12)u_{n-1} \tag{3.5}\]
with \(u_{0}=1\) and \(u_{1}=6\). Such Apery-like numbers are introduced by Cooper in [5]. It is known that
\[u_{n}=\sum_{k=0}^{\lfloor n/3\rfloor}(-1)^{k}\binom{n}{k}\binom{2k}{k}\binom{ 2(n-k)}{n-k}\left[\binom{2n-3k-1}{n}+\binom{2n-3k}{n}\right].\]
We next apply Proposition 3.7 to obtain the positivity and log-convexity simultaneously.
We have
\[B(n)=42n^{4}+200n^{3}+330n^{2}+220n+54\]
and
\[C(n)=576n^{4}+2328n^{3}+2952n^{2}+1200n+180.\]
Hence \(B=42,C=576\) and
\[CB(n)-BC(n)=17424n^{3}+66096n^{2}+76320n+23544\]
for \(n\geq 1\). On the other hand, \(\lambda_{0}=C/B=96/7\) and
\[Q_{n}(\lambda_{0})=\frac{12}{49}(-16n^{3}-48n^{2}+799n+432)<0\]
for \(n\geq 7\). Also, \(u_{12}/u_{11}\geq u_{11}/u_{10}>\lambda_{0}\). The sequence \((u_{n})_{n\geq 10}\) is therefore positive and log-convex by Proposition 3.7. It is not difficult to check that \((u_{n})_{0\leq n\leq 11}\) is also positive and log-convex. Thus the total sequence \((u_{n})_{n\geq 0}\) is positive and log-convex.
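A numerical sanity check (ours) of both conclusions on an initial segment of the sequence, using exact rational arithmetic:

```python
from fractions import Fraction

u = [Fraction(1), Fraction(6)]
for n in range(1, 40):
    num = (2 * n + 1) * (14 * n * n + 14 * n + 6) * u[n] \
        - n * (192 * n * n - 12) * u[n - 1]
    u.append(num / (n + 1) ** 3)

print(all(x > 0 for x in u))                                        # positivity
print(all(u[n - 1] * u[n + 1] >= u[n] ** 2 for n in range(1, 40)))  # log-convexity
```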
We also refer the interested reader to [27] for the log-convexity of three-term recursive sequences and [9] for the asymptotic log-convexity of \(P\)-recursive sequences.
## 4 Concluding remarks and further work
We have seen that if \(b^{2}<4ac\), then the difference equation (1.1) is oscillatory; and if \(b^{2}>4ac\), then the difference equation (1.1) is nonoscillatory. In the case \(b^{2}=4ac\), the asymptotic behavior of solutions of the second-order difference equations can be very complicated. The interested reader is referred to Wong and Li [24; 25]. Here we illustrate that the difference equation (1.1) may be either oscillatory or nonoscillatory.
**Example 4.1**.: Consider the difference equation
\[(n+1)L_{n+1}(x)=(2n+1-x)L_{n}(x)-nL_{n-1}(x). \tag{4.1}\]
Clearly, the corresponding discriminant \(b^{2}-4ac=0\).
When \(x=0\), we have \((n+1)L_{n+1}(0)=(2n+1)L_{n}(0)-nL_{n-1}(0)\). Every solution of this difference equation is nonoscillatory. Actually, solving the difference equation yields
\[L_{n}(0)=\left(1+\frac{1}{2}+\cdots+\frac{1}{n}\right)(L_{1}(0)-L_{0}(0))+L_{0} (0). \tag{4.2}\]
Recall that \(1+\frac{1}{2}+\cdots+\frac{1}{n}\sim\ln n+\gamma\), where \(\gamma\) is the Euler constant. Hence if \(L_{1}(0)<L_{0}(0)\), then \(L_{n}(0)\) is eventually negative; if \(L_{1}(0)=L_{0}(0)\), then \(L_{n}(0)\) is identically equal to \(L_{0}(0)\); and if \(L_{1}(0)>L_{0}(0)\), then \(L_{n}(0)\) is eventually positive. In the positive case, it immediately follows from (4.2) that the sequence \((L_{n}(0))\) is concave, and therefore log-concave.
When \(x=1\), we have
\[(n+1)L_{n+1}(1)=2nL_{n}(1)-nL_{n-1}(1). \tag{4.3}\]
We next show that every solution of the difference equation (4.3) is oscillatory.
Suppose the contrary and let \(L_{n}\) be an eventually positive solution of (4.3). We may assume, without loss of generality, that \(L_{n}>0\) for all \(n\geq 0\). Let \(x_{n}=L_{n+1}/L_{n}\) for \(n\geq 0\). Then
\[x_{n}=\frac{2n}{n+1}-\frac{n}{n+1}\frac{1}{x_{n-1}}=\frac{n}{n+1}\left(2- \frac{1}{x_{n-1}}\right). \tag{4.4}\]
Note that \(x_{1}=1-\frac{1}{2x_{0}}<1\). Assume that \(x_{n-1}<1\). Then \(x_{n}<\frac{2n}{n+1}-\frac{n}{n+1}=\frac{n}{n+1}<1\) by (4.4). Thus \(x_{n}<1\) for all \(n\geq 1\). On the other hand, since \(a+1/a\geq 2\) for \(a>0\), we have
\[x_{n}=\frac{n}{n+1}\left(2-\frac{1}{x_{n-1}}\right)\leq\frac{n}{n+1}x_{n-1}<x_ {n-1}.\]
The sequence \((x_{n})_{n\geq 1}\) is therefore decreasing and bounded below by \(0\), and thus convergent. Let \(x_{n}\to x\). If \(x=0\), then (4.4) would force \(x_{n}<0\) for large \(n\), contradicting the positivity of \(L_{n}\); hence \(x>0\), and passing to the limit in (4.4) gives \(x=2-1/x\), so \(x=1\). On the other hand, \(x<x_{1}<1\) since \((x_{n})\) is decreasing, which leads to a contradiction.
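The oscillation is easy to observe numerically; the sketch below (ours) iterates (4.3) with the Laguerre initial values \(L_{0}=1\), \(L_{1}=0\) and records the strict sign changes:

```python
L = [1.0, 0.0]                     # L_0^{(0)}(1) = 1, L_1^{(0)}(1) = 0
for n in range(1, 200):
    L.append((2 * n * L[n] - n * L[n - 1]) / (n + 1))

signs = [n for n in range(200) if L[n] * L[n + 1] < 0]
print(signs)   # the sign keeps changing, consistent with oscillation
```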
The classic Laguerre polynomials
\[L_{n}^{(0)}(x)=\sum_{k=0}^{n}(-1)^{k}\binom{n}{k}\frac{x^{k}}{k!}\]
satisfy the recurrence relation (4.1) with \(L_{0}^{(0)}(x)=1\) and \(L_{1}^{(0)}(x)=1-x\). It is well known that
\[L_{n}^{(0)}(x)=\pi^{-1/2}e^{x/2}(nx)^{-1/4}\cos\left(2(nx)^{1/2}-\pi/4\right)+O\left(n^{-3/4}\right)\]
(see [6, Example 8.38] for instance). This can also explain why \(L_{n}^{(0)}(1)\) is oscillatory.
For the difference equation (1.1) with \(b^{2}=4ac\), we feel that either all solutions are oscillatory or all solutions are nonoscillatory. However, we cannot prove it.
We have seen from Theorem 1.2 (ii) that if the difference equation (1.1) has a positive solution, then it has a positive and minimal solution \(u_{n}^{*}\). We conjecture that the solution \(u_{n}^{*}\) is log-convex and the ratio \(u_{n+1}^{*}/u_{n}^{*}\) converges to the smaller characteristic root \(\lambda_{1}\).
**Acknowledgement**
This work was partially supported by the National Natural Science Foundation of China (Nos. 11771065, 12171068). |
2310.13618 | Zero-Knowledge Proofs for Questionnaire Result Verification in Smart
Contracts | We present an implementation of a Web3 platform that leverages the Groth16
Zero-Knowledge Proof schema to verify the validity of questionnaire results
within Smart Contracts. Our approach ensures that the answer key of the
questionnaire remains undisclosed throughout the verification process, while
ensuring that the evaluation is done fairly. To accomplish this, users respond
to a series of questions, and their answers are encoded and securely
transmitted to a hidden backend. The backend then performs an evaluation of the
user's answers, generating the overall result of the questionnaire.
Additionally, it generates a Zero-Knowledge Proof, attesting that the answers
were appropriately evaluated against a valid set of constraints. Next, the user
submits their result along with the proof to a Smart Contract, which verifies
their validity and issues a non-fungible token (NFT) as an attestation of the
user's test result. In this research, we implemented the Zero-Knowledge
functionality using Circom 2 and deployed the Smart Contract using Solidity,
thereby showcasing a practical and secure solution for questionnaire validity
verification in the context of Smart Contracts. | Carlos Efrain Quintero-Narvaez, Raul Monroy-Borja | 2023-10-20T16:10:14Z | http://arxiv.org/abs/2310.13618v1 | # Zero-Knowledge Proofs for Questionnaire Result Verification in Smart Contracts
###### Abstract
We present an implementation of a Web3 platform that leverages the Groth16 Zero-Knowledge Proof schema to verify the validity of questionnaire results within Smart Contracts. Our approach ensures that the answer key of the questionnaire remains undisclosed throughout the verification process, while ensuring that the evaluation is done fairly. To accomplish this, users respond to a series of questions, and their answers are encoded and securely transmitted to a hidden backend. The backend then performs an evaluation of the user's answers, generating the overall result of the questionnaire. Additionally, it generates a Zero-Knowledge Proof, attesting that the answers were appropriately evaluated against a valid set of constraints. Next, the user submits their result along with the proof to a Smart Contract, which verifies their validity and issues a non-fungible token (NFT) as an attestation of the user's test result. In this research, we implemented the Zero-Knowledge functionality using Circom 2 and deployed the Smart Contract using Solidity, thereby showcasing a practical and secure solution for questionnaire validity verification in the context of Smart Contracts.
zero-knowledge, proof, web3, circom, questionnaire, solidity, smart contract, blockchain
## I Introduction
Blockchain technologies and Smart Contracts have gained significant traction recently, offering transparent and decentralized solutions to problems in different domains. Doing trustless verification and attestation of facts while preserving privacy is one of these domains. In particular, verification of questionnaire evaluation results presents a unique challenge, as this calls for preserving confidentiality of privileged information such as the answers to the questionnaire. This paper introduces an implementation of a Web3 platform that leverages Zero-Knowledge Proofs (ZKPs) and Smart Contracts to address this challenge.
By employing the Groth16 Zero-Knowledge Proof schema, we establish a robust framework that enables users to receive proofs of the validity of their test results without revealing the underlying evaluation methods or answers. Furthermore, users can then produce attestations of their results in the form of Non-Fungible Tokens (NFTs) generated by a Smart Contract that verifies the ZKP we just mentioned. This way, we obtain an end-to-end questionnaire answering, validating and attesting protocol.
The significance of this research lies in enhancing trust and reliability in the evaluation of questionnaire-based assessments. By leveraging ZKPs in this way, test appliers can securely show that their evaluations are fair without compromising the integrity of the test. In turn, users can obtain reliable evidence of their performance in these assessments and share it with third parties without worrying about suspicions of forgery or other dishonest practices.
We provide a comprehensive overview of our implementation, including the design and architecture of the Smart Contracts using Solidity and the implementation details using Circom 2 for the ZKP functionality. Additionally, we present the evaluation results, discussing the system's performance, security, and privacy aspects.
By introducing this innovative approach to questionnaire validity verification in smart-contracts, we aim to open avenues for integrity-preserving attestation mechanisms in a range of applications. The subsequent sections delve into the background, methods, results, and discussion, highlighting the significance and potential implications of our research, as well as future directions for improvement.
## II Background
### _Blockchain and Smart Contracts_
Blockchain technologies are being increasingly adopted in different domains for their features regarding transparency and decentralization. Bitcoin [8] was the first proposed protocol based on this architecture, combining in its functioning techniques such as Proof-of-Work (PoW) [3], SHA-256 hashing, Merkle Trees, and the Elliptic Curve Digital Signature Algorithm (ECDSA). It is especially powerful because of its decentralized nature, provided through the Proof-of-Work validation done with each block of transactions added to a public registry known as the Blockchain. In the following years, variations on the protocol proposed by Bitcoin appeared, collectively known as cryptocurrencies, each having their own blockchain where the transactions are registered. Ethereum [4], Monero [12], Polkadot [13], Polygon [6], NEAR, Solana [14] and Avalanche [11] are some examples of these derivative protocols.
Of the protocols we just mentioned, Ethereum is a critical one to talk about as it was the first to work with the Ethereum Virtual Machine (EVM) [4] architecture, an essential feature
on which many other blockchains are built. The EVM allows protocols to execute code in a decentralized manner and register all of its execution steps on the blockchain, making the results of that code transparent and unbiased. This code is then deployed to an _address_ stored in the blockchain and referred to as a Smart Contract, i.e. a contract that enforces itself automatically.
A simple example of a task that can be performed using a Smart Contract is that of currency exchange. Usually, when two entities, Alice and Bob, each holding a different digital currency, want to exchange one for the other at a certain rate, a level of trust is needed. Indeed, either Alice has to transfer her currency to Bob first or vice versa, making the first entity that transfers vulnerable to the other not fulfilling their part of the deal. One way to balance this trust is by introducing a third-party entity, Charly, to which both parties transfer their funds, so that Charly is tasked with ensuring that the deal is fulfilled. However, this only changes the protocol so that now both Alice and Bob have to trust Charly to act correctly. Here is where Smart Contracts come in: instead of having the parties transfer to a third one, they transfer to a Smart Contract entity that automatically verifies that the deal is fulfilled, without Alice and Bob having to trust it, as it is executed on a decentralized network. This is illustrated in Figure 1. These trustless features of Smart Contracts then lead the way to a variety of applications to be built.
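To make this third protocol concrete, the following is a minimal Python sketch (our illustration; the class, names and amounts are hypothetical, not taken from any deployed contract) of the escrow logic such a Smart Contract would enforce: funds are released atomically, and only when both deposits are present and match the agreed rate.

```python
# Hypothetical sketch of the escrow logic a Smart Contract would enforce
# in the Alice/Bob exchange; names, rate and amounts are made up.

class SwapContract:
    """Holds both deposits and releases them atomically, so neither party
    has to trust the other (or a third party such as Charly)."""

    def __init__(self, rate: float):
        self.rate = rate        # agreed rate: units of Bob's currency per unit of Alice's
        self.deposits = {}      # party -> (currency, amount)

    def deposit(self, party: str, currency: str, amount: float) -> None:
        self.deposits[party] = (currency, amount)

    def settle(self) -> dict:
        # Release funds only if both deposits are present and match the rate.
        if {"Alice", "Bob"} <= self.deposits.keys():
            cur_a, amt_a = self.deposits["Alice"]
            cur_b, amt_b = self.deposits["Bob"]
            if amt_b == amt_a * self.rate:
                return {"Alice": (cur_b, amt_b), "Bob": (cur_a, amt_a)}
        raise RuntimeError("deal not fulfilled; deposits stay refundable")

swap = SwapContract(rate=2.0)
swap.deposit("Alice", "tokenA", 10.0)
swap.deposit("Bob", "tokenB", 20.0)
print(swap.settle())  # each party receives the other's currency
```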
Decentralized Finance (DeFi) [1], Automated Market Makers [2], and decentralized social media [10] are among the many uses of Smart Contracts that have developed in recent years. Once again, the decentralized and trustless nature of the execution of the code for each of these applications is what allows them to provide enhanced, trustless operation in comparison to their traditional counterparts.
As we have said before, the essential base on which the Smart Contract infrastructure is built is the Blockchain. Let us note some of the implications of one particular attribute of the Blockchain, its functioning as a public registry. Every time a new block of valid transactions appears, it needs to be "mined", i.e. a valid "nonce" value must be found that, when added at the end of the transaction block, makes its SHA-256 hash have certain characteristics. These characteristics are often that the hash has a number of zeros at the beginning, a number determined by the "difficulty" imposed by the network. This mining process is non-trivial by design, so that "work" needs to be performed before adding any transaction to the public ledger, hence the Proof-of-Work name. Notice that in order for the network to reach consensus that the work has been performed and the transactions validated accordingly, it is necessary for everyone to have complete access to each new transaction block. This makes any transfer of funds in the Blockchain public, including its receivers and senders. For this reason, it would be convenient to have a method through which the network could validate transactions without receiving compromising knowledge about its users; this is where Zero-Knowledge Proofs come into play.
### _Zero-Knowledge Proofs_
In the context of Blockchain transactions, one problem that arises is that, due to the public consensus nature of the protocol, all transactions are public. It is straightforward to check the movements of an entity's funds by just checking the Blockchain registry. Although this can be partially avoided due to the ease with which one can create new addresses (if an entity's address is disclosed, it can just send its funds to a new one), it is still possible to trace back the origins of any funds. A possible solution to this could work if there was a way for the network to verify the validity of transactions without knowing the sender or the receiver, i.e. if there was a way to prove that a transaction is valid without disclosing any additional knowledge. Indeed, Zero-Knowledge Proofs are capable of doing this, as we will explain next.
Fig. 1: Diagram of three different protocols for exchanging two different currencies. From top to bottom: in the first one, Alice transfers to Bob first, meaning she has to trust Bob not to keep her funds without fulfilling his part of the deal. In the second one, a third party, Charly, is involved, having the task of enforcing that both Alice and Bob fulfill their parts of the deal, meaning now both Alice and Bob have to trust Charly to enforce it correctly. In the third diagram, Charly is replaced by a Smart Contract executed on a blockchain network, meaning now Alice and Bob do not have to place trust in anyone except in the stability of the network.
A Zero-Knowledge Proof (ZKP) is a method through which one entity can prove a statement to another one without disclosing more information than the fact that the particular statement is true. For example, consider a fixed function \(f:X\longrightarrow Y\) and a fixed value \(y\in Y\). If we wanted to prove to a third party that we know a value \(x\in X\) such that \(f(x)=y\), a trivial solution would be to just disclose the value of \(x\) we know. However, that shares more information than the fact that the statement is true. A ZKP would instead entail generating a proof generator function \(P\) and a proof verifier \(V\), such that we can generate a _witness_ \(w=P(x)\) that does not disclose the value \(x\), and that another entity can then evaluate with the verifier function \(V(w)\) to obtain a true or false value that determines whether the _witness_ was created from a valid solution or not. This way, we can prove to another entity that we know a value of \(x\) such that \(f(x)=y\), while disclosing zero knowledge about which specific value of \(x\) we know.
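To make the \(P\)/\(V\) pattern concrete, the following Python sketch implements a classic Schnorr-style proof of knowledge of a discrete logarithm, made non-interactive with the Fiat-Shamir heuristic. This is our illustration only, not the Groth16 schema used later in this paper, and the group parameters are toy values chosen for readability.

```python
# Proof of knowledge of x with f(x) = g^x mod p = y, disclosing nothing
# about x (Schnorr protocol + Fiat-Shamir). Toy parameters, illustration only.
import hashlib
import secrets

p, q, g = 23, 11, 2   # toy Schnorr group: g has prime order q in Z_p*

def challenge(*vals: int) -> int:
    data = b"|".join(str(v).encode() for v in vals)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def prove(x: int, y: int) -> tuple:
    """Witness generator P: returns (t, s) proving knowledge of x with g^x = y."""
    r = secrets.randbelow(q)
    t = pow(g, r, p)                # commitment
    c = challenge(g, y, t)          # Fiat-Shamir: challenge derived from a hash
    s = (r + c * x) % q             # response; reveals nothing about x alone
    return t, s

def verify(y: int, proof: tuple) -> bool:
    """Verifier V: checks the witness without learning x."""
    t, s = proof
    c = challenge(g, y, t)
    return pow(g, s, p) == (t * pow(y, c, p)) % p

x = 7                               # the secret
y = pow(g, x, p)                    # public value, playing the role of f(x)
assert verify(y, prove(x, y))       # accepted without ever seeing x
```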
For a more intuitive example, consider a hashing function \(\operatorname{hash}:I\longrightarrow H\) from an input space \(I\) to a hash space \(H\); this function could be SHA-256, SHA-1, etc. Also, consider two entities, Alice \(A\) and Bob \(B\). Imagine that Alice guards an entrance to a treasure cave, which one can only enter by having a password \(\rho\) that is verified using its hash, i.e. Alice has the hash of the password \(h=\operatorname{hash}(\rho)\). However, Alice herself does not have the password and cannot enter the cave. Now consider that Bob does have the password \(\rho\) and wants to enter the cave, but he does not want to disclose the password to Alice so as not to have to share it with her. For most use cases, this can be achieved by having Bob compute the hash of \(\rho\) and show it to Alice, having her verify that it is the same as \(h\), the one she has. However, say that \(h\) is not disclosed to Alice through a private channel but through some public method, so as to ensure that she verifies passwords correctly and does not commit any dishonesty. This renders simply showing the hash value \(h\) of the password invalid as a verification method. Usually, the only method left would be for Bob to disclose \(\rho\) to Alice and have her compute its hash, but there is another way to do this, which involves Zero-Knowledge Proofs.
Groth16 [5] is a Zero-Knowledge Proof system designed to generate a proof-verifier pairing schema for satisfiability of any _arithmetic circuit_. What is important about the arithmetic circuit satisfiability problem is the fact that it is NP-Complete and thus allows for a wide range of statements to be translated into that form, including those which are often of interest in the context of ZKPs. Note that arithmetic circuits are comprised of gates that compute arithmetic operations (addition and multiplication) on a field \(\mathbb{F}\), with wires connecting the gates to represent results from one going to another. Thus, recall the SAT problem, which is NP-Complete and formulated as "given a statement with boolean variables, is it possible to find an assignment for them such that the statement is true?". Similarly, the _arithmetic circuit satisfiability_ problem is NP-Complete too and formulated as "given a statement with variables with values from a field \(\mathbb{F}\), is it possible to find an assignment for them such that the statement is true?". This makes Groth16 an ideal system for generating ZKPs, and it is used by ZK languages like Circom 2 [7].
Coming back to our conundrum with Alice and Bob at the treasure cave: as it happens, hashing functions such as SHA-256 can be expressed in terms of arithmetic circuit satisfiability and can thus have a corresponding Zero-Knowledge Proof generated. Therefore, in the situation we had, it is enough for Alice to have a SHA-256 ZK proof verifier \(V\) and for Bob to use the prover function \(P\) to generate a witness \(w=P(\rho)\). Bob then shares \(w\) with Alice, so she evaluates it with the verifier \(V(w)\) and determines that the witness is valid. Thus, Alice can be sure that Bob does have the password without receiving any information about its exact value.
Finally, it is straightforward to see how this can be applied to Blockchain transaction verification. Suppose that we have a transaction \(t\) represented in some space \(T\), along with a validation function \(v\) that tells whether \(t\) is valid with the current state of the Blockchain, having \(0\) or \(1\) as output. Then, for the transaction to be added by a miner into the registry, it would suffice to send a proof that \(v(t)=1\). We could translate the computation of \(v\) into the Groth16 schema by using a compiler such as Circom 2. This would then allow us to generate a proof-verifier pair \(P\) and \(V\). Finally, as was done in the Alice and Bob case, the sender would just generate a witness of the transaction \(w=P(t)\) and send it to the miners, which would verify it by evaluating \(V(w)\). This schema is used by cryptocurrencies such as Monero and Zcash.
In the context that concerns this paper, a ZKP can be generated for the validity of the evaluation of a questionnaire according to some set of constraints. This would allow the user or any third party to get a convincing proof that the evaluation was performed fairly, without revealing any information about which answers give certain results.
## III Design
Our implementation makes heavy use of Circom 2 and its features for generating Solidity code for ZK proof verifiers. This code is then integrated into an ERC-721 Smart Contract that allows the user to mint an NFT when a result of the questionnaire with the corresponding proof is provided. A basic implementation was made for a new Web3 platform called P3rsonalities, intended to have a personality test with results validated with a ZK proof and attested through a generated ERC-721 NFT. The deployed contract code with the Circom 2 generated Solidity verifier can be found at the P3rsonalities GitHub Repository.
We designed the Circom 2 code in such a way that it receives two bit masks and one integer as inputs: one bit mask representing the user's answers for each question, the other representing the answer key, and the integer representing the result of the test, with a total of \(10\) questions. The questions are divided into two groups so that each one represents an attribute of the final result, i.e. the final result will be a two-bit integer, each bit representing an attribute. After compiling, Circom 2 generates two files we use: one WebAssembly script for the generation of the ZK witness and another Solidity script for verification of said witness on a Smart Contract
executed on an EVM Blockchain. It is important to note that the generated Solidity code makes heavy use of the assembly functionalities available on the EVM, to ensure that gas costs for executing the verification are as low as possible.
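For concreteness, the following Python sketch mirrors the evaluation the hidden backend performs, consistent with the description above: 10 answers and the answer key arrive as bit masks, the questions split into two groups of 5, and each group contributes one bit of the two-bit result. The rule that sets an attribute bit (here, a majority of matching answers in the group) is an assumption made for illustration; the actual constraints live in the Circom 2 circuit, which is not reproduced here.

```python
# Illustrative evaluation consistent with the text; the majority rule per
# group is an assumption, not the actual circuit's constraint system.

N_QUESTIONS = 10
GROUP_SIZE = 5

def evaluate(answers: int, key: int) -> int:
    """answers/key: bit masks of length N_QUESTIONS; returns a 2-bit result."""
    matches = ~(answers ^ key)              # bit i is set when answer i is correct
    result = 0
    for attribute in range(2):
        correct = sum((matches >> (attribute * GROUP_SIZE + i)) & 1
                      for i in range(GROUP_SIZE))
        if correct > GROUP_SIZE // 2:       # majority rule (assumed)
            result |= 1 << attribute
    return result

print(evaluate(answers=0b1110011010, key=0b1110000000))  # -> 2 (only bit 1 set)
```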
The witness generator, along with the questionnaire evaluator, is deployed to a centralized server, in this case an AWS Lambda function, for easy deployment and access from a REST API endpoint. The user then makes a request to this API, sending their answers to the questionnaire. The API then executes the witness generator code and returns the result of the test together with the ZK witness.
The user then makes a call, with the generated witness as an input, to the deployed ERC-721 Smart Contract for minting the NFT, attesting the result returned by the API. This Smart Contract is modified so that the Solidity verifier script generated by Circom 2 is used as a required check before minting the NFT to the user's address.
At the end of the procedure, the user obtains a Soulbound NFT (an NFT that cannot be transferred) representing the result of their answers to the questionnaire. As this NFT can only be generated when the user possesses a valid witness for that result, it holds a special value as evidence for anyone interested in verifying the results of such a test.
## IV Discussion
The significance of the use of the described approach for the P3rsonalities platform lies in the resulting Soulbound NFT. This NFT is unique and cannot be transferred, making it a valuable piece of evidence for anyone interested in verifying the results of the personality test. By combining ZK proofs, ERC-721 NFTs, and the Circom 2 generated Solidity verifier, we have created a secure and tamper-evident solution for result validation and attestation.
Our implementation demonstrates the practical application of ZK proofs and NFTs in the context of questionnaire evaluation. The use of Circom 2 and its integration with ERC-721 Smart Contracts allowed us to achieve efficient and secure result validation. The results provide an immutable and verifiable record of the user's test result, which holds value for various purposes, such as research and identity verification.
Further research and experimentation can explore scalability, performance optimization, and potential extensions of this approach. Furthermore, it is also worth researching the potential vulnerabilities of the protocol implemented here. Indeed, although this protocol looks sound at first glance, it still has some inherent vulnerabilities, such as users being able to "cheat" by sharing the answers that led to a certain result of the test in the past. However, this issue can be resolved by having a larger bank of questions, such that it is improbable for two users to get the same set of questions, thus making it infeasible to cheat by sharing the answers to an instance of the test.
On the matter of scalability, this protocol is mainly limited by the capacity of the centralized server that executes the evaluation of the questionnaire and generates the corresponding proof. By contrast, the verification part of the protocol executed on the EVM blockchain scales well, due to the decentralized nature of the blockchain network. A possible solution to the bottleneck caused by the centralized server could be to decentralize that part too. However, this is difficult, as executing the questionnaire evaluation on the blockchain requires publishing the corresponding code, compromising the integrity of the evaluation. Decentralized access control protocols like Lit Protocol [9] could offer a solution to this issue, but further research is needed.
## V Conclusions
In conclusion, our implementation showcases the successful utilization of ZK proofs and ERC-721 NFTs to create a robust and tamper-evident system for result validation and attestation in the context of questionnaire evaluation. By leveraging Circom 2's features for generating Solidity code and integrating it into our NFT contract, we achieved efficient verification of user-provided answers using ZK proofs, resulting in the minting of a Soulbound NFT that serves as immutable evidence of the test results. This approach holds promise not only for personality tests, as we did here, but also for various applications requiring secure result validation and attestations. Further research can explore scalability, optimization, and potential vulnerabilities, in order to expand the usability of this solution across different Web3 platforms and domains beyond questionnaire evaluation.
|
2303.13720 | Performance investigations of two channel readout configurations on the
cross-strip cadmium zinc telluride detector | Multiple application-specific integrated circuits (ASIC) are required for the
detectors if their readout channels are larger than that of ASIC channels. For
a system with such a readout scheme, there is a need to configure channels
among ASICs to achieve the lowest electronics noise and highest count rate. In
this work, experiments were performed to investigate the performance of two
different readout configurations between two ASICs in a cross-strip cadmium
zinc telluride detector. A lower electronic noise level, better FWHM energy
resolution performance, and higher count rate were found for the anode electrode
strips with each ASIC allocating half of the detector area when compared to
allocating each ASIC channel to alternate anode channels. | Yuli Wang | 2023-03-24T00:17:55Z | http://arxiv.org/abs/2303.13720v1 | Performance investigations of two channel readout configurations on the cross-strip cadmium zinc telluride detector
###### Abstract
Multiple application-specific integrated circuits (ASICs) are required for detectors whose number of readout channels exceeds that of a single ASIC. For a system with such a readout scheme, there is a need to configure channels among ASICs to achieve the lowest electronics noise and highest count rate. In this work, experiments were performed to investigate the performance of two different readout configurations between two ASICs in a cross-strip cadmium zinc telluride detector. A lower electronic noise level, better FWHM energy resolution performance, and a higher count rate were found for the anode electrode strips with each ASIC allocated half of the detector area, when compared to allocating each ASIC channel to alternate anode channels. The average electronics noise levels were reduced to \(12.61\pm 0.48\,keV\) units (anode) and \(26.16\pm 3.03\,keV\) units (cathode) for the half-half configuration. The energy resolution of the half-half configuration is \(1.65\%\pm 0.05\%\) compared to that of the alternate configuration at around \(1.90\%\pm 0.06\%\). Charge sharing and scattering play a role in the different count rates, and the count rate of the half-half configuration is \(43.9\%\) higher than that of the alternate configuration.
## I Introduction
The Cadmium Zinc Telluride (CZT) detector has gained considerable popularity for application in positron emission tomography (PET) systems to detect ionizing radiation [1, 2, 3, 4, 5]. CZT has attracted attention mainly due to its good energy resolution, high inherent spatial resolution, the possibility to easily achieve a high packing fraction (\(\sim\) 99%), and the direct detection of gamma photons [6, 2, 7, 8]. So far, by stacking a few CZT detectors together, \(\sim\) 1 mm spatial resolution has been demonstrated for CZT-based small-animal or organ-dedicated PET systems [2, 4]. However, the performance of PET systems based on a large volume of CZT detectors is still not well studied, especially the quantitative analysis of the crosstalk issue among a large number of CZT detectors, and the extra electronics noise introduced by the flexible circuit and its corresponding bonding issues.
Currently, our lab is developing an ultra-high-resolution dedicated head-and-neck PET system using a large volume of CZT detectors [9, 10, 11, 12, 13, 14]. A two-panel geometry system design is proposed, which aims to improve the detection sensitivity and make patients feel more comfortable during the scan. Each panel has a 20\(\times\)15 cm \({}^{2}\) geometric dimension, formed by stacking 150 pieces of 40 mm \(\times\) 40 mm \(\times\) 5 mm monolithic CZT detectors together in 5 columns \(\times\) 30 rows with an "edge-on" detector arrangement. The "edge-on" arrangement could significantly improve each panel's packing fraction to 99%, and the 40 mm thickness of the CZT detector could result in a greater than 86% intrinsic detection efficiency for 511 keV photons [15, 16].
Besides, compared to the scintillation detector, the spatial resolution of the CZT detector is not limited by the ability to manufacture minuscule crystal elements, but is directly determined by the size of the deposited electrode patterns. Embracing this advantage of CZT, we developed our CZT detector with cross-strip electrode patterns [17, 18], consisting of 39 anode strips with 100 \(\mu\)m width, 38 steering strips with 100 \(\mu\)m width and 8 cathode strips with 4900 \(\mu\)m width. The cross-strip design could further help us dramatically reduce the number of readout channels [18] when compared to a pixelated-electrode CZT detector with a similar spatial resolution (2n versus n\({}^{2}\)). The smaller number of required readout channels brings benefits in terms of data acquisition bandwidth and electronics thermal management.
A large body of research has focused on optimizing the coincidence timing performance of CZT detectors for PET [19], investigating the induction, propagation, and collection of charge within CZT [20, 21, 22] and developing efficient readout electronics with large channel numbers for CZT [4, 23, 9]. In this paper, we will focus on investigating the performance of the two different readout configurations in the cross-strip CZT detector.
Overall, in this paper, we investigate the aforementioned three challenges to improve the detection performance of our large-volume CZT-based PET system, aiming to provide guidance for future work on building large-volume CZT systems.
## II Materials and Methods
This section summarizes the design of the CZT detector module and the electronic readout system. The methods used for system-level characterization and investigation are then presented.
### _Cross-strips CZT detector_
Fig. 2 (a) shows the design of the CZT detector. Each detector is a monolithic CZT crystal with dimensions of 40 mm \(\times\) 40 mm \(\times\) 5 mm. Eight orthogonal cathode electrodes (4900 \(\mu\)m width) and 39 anode electrodes (100 \(\mu\)m width) are
deposited on the two opposite 40 mm \(\times\) 40 mm crystal faces. 38 steering electrodes are interspersed with the anodes at the same pitch and with a larger width (400 \(\mu\)m). During the experiments, biases of -500 V and -80 V with respect to the anodes are applied to the cathodes and steering electrodes, respectively. The cross-strip electrode pattern was chosen to use fewer electronic readout channels (2n versus n\({}^{2}\)) [24] while still providing high spatial resolution. The steering electrode was designed to enhance the anode charge collection.
Two CZT crystals are assembled together using the flexible circuits based on an anode-cathode-cathode-anode (ACCA) stacking structure to form a 40 mm \(\times\) 40 mm \(\times\) 10 mm CZT module (shown in Fig.2 (b)). Conductive silver epoxy is used as the material to facilitate the electrical connections between the CZT crystal and the flexible circuit. This ACCA stacking structure could decrease the dead space between CZT crystals and increase the packing fraction.
### _Modular readout electronics system_
The architecture schematic of the readout electronic system for one panel is shown in Fig. 3. The readout system comprises primarily the front-end signal readout part (including the intermediate board and RENA board) and the back-end signal readout part (including the fan-in board and PicoZed board). For investigating the two channel readout configurations on the cross-strip cadmium zinc telluride detector, the measurement is completed by the electronic readout system marked by the red dashed box. The experimental setup is shown in Fig. 4.
Each CZT detector has 47 output channels in total (39 anodes and 8 cathodes), which are read by two RENA-3 (Readout Electronics for Nuclear Applications, developed by NOVA R&D Inc., Riverside, CA) application-specific integrated circuits (ASICs). The RENA-3 ASIC is implemented in a custom-designed front-end board called the RENA board. The intermediate board provides the connection to the high voltage and steering voltages and also works as the "bridge" to define different readout configurations (half-half or alternate configuration, presented in detail in Sec. II-C) between the CZT detector and the RENA-3 ASICs, which is the focus of this paper.
The fan-in board connects to a 1-column by 30-row array of RENA boards. The fan-in board is responsible for generating the system-wide 50 MHz clock, receiving RENA-3 signals, and distributing the clock and UV signal to all 30 RENA boards. A data acquisition (DAQ) chain, capable of 2 Gbps data transmission, is connected to a DAQ computer via an SFP connector. The DAQ chain was implemented on the PicoZed board, which is attached to the backside of the fan-in board.
### _Two intermediate boards with different readout configuration_
Two RENA-3 ASICs (36 channels each) are used to read the 39 anodes and 8 cathodes of each CZT detector. Two different readout configurations are applied to the anodes: the half-half configuration and the alternate configuration, as shown in Fig. 5. The alternate readout configuration is the same as that used in [2] and [4], where anode 1 was read by RENA-3 ASIC 1, anode 2 was read out by RENA-3 ASIC 2, and so on. In the half-half configuration, anode 1 to anode 20 were read by RENA-3 ASIC 1 (half of the CZT crystal), and RENA-3 ASIC 2 read out anode 21 to anode 39. For the cathode readout configuration, due to the edge-on incident photons, the cathodes use the alternate configuration, optimized to share the load between the two RENA-3 ASICs, as studied in [25] by GATE simulation.
Fig. 1: Schematic of one-panel system with 30-row by 5-column of 4 cm \(\times\)4 cm CZT detectors. Each panel incorporates the CZT detectors assembled with flexible circuits, the front-end electronics, and the back-end electronics. The two-panel system contains two adjustable panels, an axial head holder, and other mechanical supports.
Fig. 2: (a) Schematic of CZT crystal with cross-strip electrode pattern showing anodes, cathodes, and steering electrodes. (b) Two CZT crystals are assembled onto a flexible circuit and stacked based on the anode-cathode-cathode-anode configuration to form a CZT module (4 cm \(\times\) 4 cm \(\times\) 1 cm).
In order to achieve the two different readout configurations, two intermediate boards were designed, shown in Fig. 6. Fig. 6 (a) shows the intermediate board with the alternate readout configuration, which has dimensions of 9.0 cm \(\times\) 20.9 cm and connects one array of 1\(\times\)3 CZT crystals to three RENA boards. The intermediate board with the half-half configuration is presented in Fig. 6 (b) and connects 1\(\times\)5 CZT crystals to five RENA boards. The same CZT connectors and RENA connectors are used on the two intermediate boards, thus we investigate the two aforementioned readout configurations by swapping the two intermediate boards.
In order to have a successful operation of the DAQ chain and minimize the time jitter and cross-talk among ASIC channels, the data acquisition trigger threshold (V\({}_{thresh}\)) for the digital-to-analog converter (DAC) of each channel on each RENA ASIC was optimized manually. We increased the V\({}_{thresh}\) value of each channel to above the noise level, thus preventing signal triggering by noise. After completing the adjustment of triggering levels for all ASICs, the system is ready for the measurements. The V\({}_{thresh}\) value of each channel was monitored and tuned repeatedly whenever we adjusted the readout electronics of the system.
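The tuning procedure can be summarized as a simple per-channel loop. In the sketch below, the hardware interface is simulated with stub functions, since the actual RENA-3 register interface is not described in this paper; on the real system, the same loop would drive the DAC registers instead.

```python
# Sketch of the per-channel V_thresh tuning; set_dac_threshold() and
# noise_trigger_rate() are simulated stubs standing in for the real
# (undocumented here) RENA-3 configuration calls.
import random

random.seed(0)
noise_floor = {ch: random.randint(45, 60) for ch in range(36)}  # simulated, ADC units
threshold = {ch: 0 for ch in range(36)}

def set_dac_threshold(ch: int, v: int) -> None:
    threshold[ch] = v                        # stub for the real register write

def noise_trigger_rate(ch: int) -> float:
    # Stub: the channel fires on noise whenever its threshold is below the floor.
    return 0.0 if threshold[ch] > noise_floor[ch] else 1.0e3

def tune(channels, start: int = 40, step: int = 1, max_rate_hz: float = 1.0):
    for ch in channels:
        v = start
        set_dac_threshold(ch, v)
        while noise_trigger_rate(ch) > max_rate_hz:
            v += step
            set_dac_threshold(ch, v)         # raise V_thresh just above the noise
    return dict(threshold)

print(tune(range(36)))
```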
### _Electronics noise with test pulse_
To quantify the internal electronic noise contribution to the energy resolution of the two different intermediate boards (i.e. the two different readout configurations), we first used a square wave as a test pulse to provide charge injection to each channel. To simulate the equivalent charge injection by a 511 keV photon in a CZT detector, a 1 kHz square wave with 250 mV peak-to-peak amplitude and no offset was used during the experiment. An example of the experimental setup is shown in Fig. 4, which also presents how the different boards are connected together. In our studies, data were acquired when:
* CZT detector, intermediate board (with half-half readout configuration or with alternative readout configuration), RENA board, and fan-in board are connected together.
* HV bias and steering bias of CZT are turned on.
Each data acquisition time was set to 5 minutes. The experiment was repeated 5 times to obtain the standard deviation. The same experimental process was applied to the two intermediate boards with different readout configurations. During the experiments, the whole system is packaged into a light-tight Faraday cage to reduce interference from outside light and external electronic noise. The full width at half maximum (FWHM) of the spectral peak is reported in keV units. The corresponding results are shown in section III-A.
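For reference, the FWHM of a single spectral peak can be extracted numerically as in the sketch below. The interpolation-based estimator shown is an assumption made for illustration, since the paper does not specify its peak-analysis procedure.

```python
# Interpolation-based FWHM estimator for a single-peaked spectrum (an
# illustrative method; the actual analysis procedure is not specified here).
import numpy as np

def fwhm_kev(energies_kev: np.ndarray, counts: np.ndarray) -> float:
    """Full width at half maximum of a single-peaked histogram, in keV."""
    half = counts.max() / 2.0
    above = np.where(counts >= half)[0]
    lo, hi = above[0], above[-1]

    def cross(i0: int, i1: int) -> float:
        # Linear interpolation of the half-maximum crossing between two bins.
        x0, x1 = energies_kev[i0], energies_kev[i1]
        y0, y1 = counts[i0], counts[i1]
        return x0 + (half - y0) * (x1 - x0) / (y1 - y0)

    left = cross(lo - 1, lo) if lo > 0 else energies_kev[lo]
    right = cross(hi, hi + 1) if hi + 1 < len(counts) else energies_kev[hi]
    return right - left

# Synthetic Gaussian peak: FWHM = 2*sqrt(2*ln 2)*sigma ~= 12.6 keV for sigma = 5.35
e = np.linspace(480.0, 540.0, 601)
c = np.exp(-0.5 * ((e - 511.0) / 5.35) ** 2)
print(round(fwhm_kev(e, c), 2))   # ~= 12.6
```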
### _Experiments with Ge-68 as the point source_
The anode energy resolution and the anode channels' count rate are studied using the edge-on irradiation configuration with a Ge-68 point source. A 50 \(\mu\)Ci, 250 \(\mu\)m diameter Ge-68 point source was placed 10 mm away from the center of the 40 mm \(\times\) 5 mm edge-face of the CZT detector. Therefore, the photons from the point source enter the CZT
Fig. 4: An example picture of the experimental setup.
Fig. 3: Architecture of the DAQ chain of one panel for the head and neck PET system. The red dashed box marked electronic system works as one modular DAQ electronics. Five modular DAQ electronics form the DAQ chain of one panel.
detector in the edge-on configuration and encounter at least 40 mm of CZT material, which could yield a detection efficiency greater than 86% [16]. The anode strips of the CZT detector were oriented toward the point source and the CZT detector was assembled into the head-and-neck system as described in Fig. 3.
The data acquisition time of each experiment was set to 10 minutes. The experiment was repeated 5 times to obtain the standard deviation. The same experimental process was applied to the two intermediate boards with different readout configurations.
The calibration of the signal amplitude from ADC units to keV units was performed for each anode channel. By recording the photopeak positions in ADC units corresponding to known energies (Ge-68 for 511 keV and Cs-137 for 662 keV), a linear ADC-to-keV calibration map and the ADC/keV conversion factor of each channel were developed. All results reported in the paper are in keV units.
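The two-point calibration amounts to a per-channel linear fit through the two photopeak positions. The short sketch below illustrates it; the photopeak values in `peaks_adc` are made up for the example.

```python
# Two-point linear ADC-to-keV calibration from the Ge-68 (511 keV) and
# Cs-137 (662 keV) photopeak positions; the example ADC values are fictional.
import numpy as np

def calibrate(adc_511: float, adc_662: float) -> tuple:
    """Return (gain in keV/ADC, offset in keV) for one channel."""
    gain = (662.0 - 511.0) / (adc_662 - adc_511)
    offset = 511.0 - gain * adc_511
    return gain, offset

peaks_adc = {0: (1620.0, 2098.0), 1: (1598.0, 2071.0)}  # channel -> (Ge-68, Cs-137)
maps = {ch: calibrate(*pk) for ch, pk in peaks_adc.items()}

def adc_to_kev(ch: int, adc) -> np.ndarray:
    gain, offset = maps[ch]
    return gain * np.asarray(adc, dtype=float) + offset

print(adc_to_kev(0, [1620.0, 2098.0]))  # -> [511. 662.]
```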
#### Ii-B1 Anode energy resolution experiments
The anode energy resolution refers to an energy spectrum of the "multiple interaction" events' energy deposited in each anode channel, which was acquired from the aforementioned experimental process. The "multiple interaction" events contain photoelectric events (511 keV energy deposition in the CZT detection volume corresponding to each anode), Compton scatters (a large number of interaction events will have energy deposition much less than 511 keV, which corresponds to small-amplitude signals and constitutes the dominant events in CZT), as well as charge-shared events (interactions along the charge detection volume boundary between anode electrodes).
Since Compton scatter is prevalent in CZT detectors, specially designed low-noise, high-sensitivity readout electronics are preferred, which enable signal triggering at lower deposited energies. Moreover, the energy resolution with "multiple interactions" is of particular interest, as it could allow the system to have higher photon detection sensitivity [26]. Thus, we compare the anode energy resolution performance of the two different readout configurations. The results are summarized in section III-B.
In addition, the DAC value of each channel indicates the noise level and thus the lowest detectable energy of each channel. The lower the DAC value, the higher the detection sensitivity of the system (i.e. the lower the detectable energy of the "multiple interaction" events). Therefore, we also investigate the lowest DAC value of each readout configuration by lowering the DAC values to just above the noise level. The results are shown in section III-B.
#### Ii-B2 Anode count rate investigations
Readout electronics with a high count rate allow the recording of more valid events per unit of time, which further improves the detection sensitivity. In our study, the CZT detector has more readout channels than a single RENA ASIC, and thus multiple ASICs are required. For a system with such a readout scheme, load balance is another consideration for achieving a higher count rate. Therefore, we compare the count rate performance of these two readout configurations. Since interactions with deposited energy between 450 keV and 600 keV are the events of interest in PET applications, we only focus on events within this energy window. A Python code was developed to select the events of interest from the complete events of an anode energy spectrum, as shown in Fig. 11 (d).
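The selection can be sketched in a few lines; the script below is an illustrative reimplementation (the actual analysis code is not published here), applying the 450-600 keV window to a toy spectrum and converting the counts to a rate with the 10-minute acquisition time.

```python
# Illustrative event selection: keep events with deposited energy in
# [450, 600] keV; the toy spectrum stands in for real list-mode data.
import numpy as np

ACQ_TIME_S = 600.0                                   # 10-minute acquisition

def windowed_counts(energies_kev: np.ndarray,
                    lo: float = 450.0, hi: float = 600.0) -> int:
    """Number of events with deposited energy inside [lo, hi] keV."""
    sel = (energies_kev >= lo) & (energies_kev <= hi)
    return int(np.count_nonzero(sel))

rng = np.random.default_rng(1)
# Toy spectrum: a Compton continuum plus a 511 keV photopeak.
spectrum = np.concatenate([rng.uniform(50.0, 450.0, 5000),
                           rng.normal(511.0, 4.0, 2000)])
n = windowed_counts(spectrum)
print(n, "counts,", round(n / ACQ_TIME_S, 2), "counts/s")
```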
To quantitatively analyze the count rate of the two different readout configurations, we calculated the count rate of each anode channel and the total count rate of all anode channels for each readout configuration. We also calculated the count rate of each anode channel and the total count rate of all anode channels for the half-half readout configuration with lower DAC values.
## III Results
### _Electronics noise with test pulse_
Example results of the electronics noise with the test pulse using the alternate or half-half readout configuration discussed in section II-D are shown in Fig. 7 and Fig. 8, respectively. The summary of the energy spectra peak FWHM in keV units for all 39 anode channels with the test pulse is shown in Fig. 9. Fig. 10 presents the energy spectra peak FWHM in keV units for all 8 cathode channels with the test pulse.
The half-half readout configuration produces an average spectral peak FWHM of 12.61 \(\pm\) 0.48 keV units (anode) and
Fig. 5: Schematic of one CZT detector's electrode distribution to two RENA-3 ASICs for (a) the previous design and (b) the new design in this paper. (a) In the previous design, the anodes have the alternate configuration between two RENA-3 ASICs. (b) In the new design, the anodes have the half-half configuration, where anode 1 to anode 20 are read out by RENA-3 ASIC 1 and anode 21 to anode 39 are read out by RENA-3 ASIC 2.
26.16 \(\pm\) 3.03 keV units (cathode). The average width of the spectral peak broadens to 16.34 \(\pm\) 1.37 keV units (anode) and 34.45 \(\pm\) 4.92 keV units (cathode) for the alternate readout configuration. Overall, for every readout configuration, the anode channels have a lower electronic noise level (smaller average FWHM value of the spectra) than the cathode channels. The anode/cathode channels of the half-half readout configuration have a lower electronic noise level (smaller average FWHM value of the spectra) than the corresponding anode/cathode channels of the alternate readout configuration, which is discussed in section IV in detail.
### _Energy resolution_
Fig. 11 (a) and (b) show examples of energy spectra for the alternate and half-half readout configurations using Ge-68 as the point source. The DAC values in Fig. 11 (a) and (b) are set to 65 ADC units, which is the lowest achievable DAC value of the alternate readout configuration. The energy spectra with lower DAC values (55 ADC units, the lowest achievable DAC value for the half-half readout configuration) using Ge-68 for the half-half readout configuration are presented in Fig. 11 (c). The lowest detectable energy of the alternate readout configuration is around 200 keV, while that of the half-half readout configuration is below 50 keV.
The anode channels' FWHM energy resolution summary of the three above-mentioned experimental scenarios is reported
Fig. 8: Results examples of energy resolution with test pulse for half-half readout configuration in (a) cathode channel and (b) anode channel
Fig. 6: (a) Schematics of the intermediate board with alternate readout configuration with dimensions of 9.0 cm \(\times\) 20.9 cm. (b) Schematics of the intermediate board with half-half readout configuration with dimensions of 12.7 cm \(\times\) 44.5 cm. (c) Channel distribution map between RENA ASICs and CZT anode/cathode readout channels for alternate readout configuration. (d) Channel distribution map between RENA ASICs and CZT anode/cathode readout channels for half-half readout configuration.
Fig. 7: Results examples of energy resolution with test pulse for alternate readout configuration in (a) cathode channel and (b) anode channel.
in Fig. 12. All presented results are without tailing correction. From Fig. 12, we can see that the FWHM energy resolution is 9.72 \(\pm\) 0.33 keV units (1.90% \(\pm\) 0.06%) for the alternate readout configuration, 8.45 \(\pm\) 0.21 keV units (1.65% \(\pm\) 0.05%) for the half-half readout configuration, and 8.68 \(\pm\) 0.31 keV units (1.69% \(\pm\) 0.06%) for the half-half readout configuration with lower DAC values. It is noticeable that the half-half readout configurations (with or without lower DAC values) have better energy resolution performance than the alternate readout configuration. Moreover, the energy resolution performance of the center anode channels (anode channel No. 18 to No. 22) for all three tested readout conditions is better than that of the side anode channels, which is further discussed in section IV.
### _Count rate_
According to the method described in section II-E2, we acquired Fig. 13, which shows the count rate for the alternate, half-half, and half-half with lower DAC threshold readout configurations. From Fig. 13, it can be seen that the half-half and the half-half with lower DAC values readout configurations have comparable count rates, which are markedly higher than that of the alternate readout configuration. This point is discussed in section IV. The average count rate values for all 39 anode channels are 14972.56 \(\pm\) 826.29 (alternate), 21550.58 \(\pm\) 409.94 (half-half), and 22669.62 \(\pm\) 561.63 (half-half with lower DAC).
The comparison between the previous simulation work [25] and the experimental studies in this paper will also be
Fig. 11: (a) Example of one channel energy spectra plot using Ge-68 point source with half-half readout configuration. (b) Example of one channel energy spectra plot using Ge-68 point source with alternate readout configuration. (c) Example of one channel energy spectra plot using Ge-68 point source with half-half readout configuration and lower DAC threshold. (d) Examples showing the count rate between 450 keV and 600 keV.
Fig. 12: Summary of energy resolution for half-half readout configuration, alternate readout configuration, and half-half readout configuration with lower DAC threshold.
Fig. 10: Energy resolution comparison for cathode channels between alternate readout configuration and half-half readout configuration.
Fig. 9: Energy resolution comparison for 39 anode channels between alternate readout configuration and half-half readout configuration.
discussed in detail in section IV.
## IV Discussion
### _Analysis on electronics noise_
From the internal comparison of Fig. 7 (a) and (b) or Fig. 8 (a) and (b), we can see lower electronics noise from the anode electrodes than from the cathode electrodes. The width of the cathode electrode strips is 8 times the width of the anode electrode strips. For the collection of the mobile holes, larger cathode electrodes are preferable to maintain high photon sensitivity, but at the cost of introducing a higher leakage or dark current of the CZT detector. The cathode electrode also has a higher sensitivity to random fluctuations in the current amplitude due to electronic noise. Therefore, a larger electronic noise is expected in the cathode electrode.
All presented data (Fig. 9 and Fig. 10) were acquired under exactly the same experimental conditions, except for the different readout configurations and the dimensions of the intermediate boards. Regarding the intermediate board, since the board with the half-half readout configuration has larger dimensions than the board with the alternate readout configuration, a higher electronic noise level would be expected for the half-half readout configuration. However, from Fig. 9 and Fig. 10, we can see a higher electronics noise level (larger FWHM value) in the alternate readout configuration and a lower electronics noise level in the half-half readout configuration, for both the anode electrodes and the cathode electrodes. Therefore, we can conclude a much lower electronics noise level for the half-half readout configuration than for the alternate readout configuration.
### _Analysis on energy resolution_
As shown in Fig. 11, the DAC value setting in the half-half readout configuration (55 ADC units in Fig. 11 (c)) is lower than that of the alternate readout configuration (65 ADC units in Fig. 11 (a)). The DAC value refers to the trigger threshold level of each ASIC's digital-to-analog converter. The DAC values are found empirically and should be set just above the electronic noise level; specifically, valid trigger data can be taken at the chosen trigger threshold level while all electronic noise is avoided. Therefore, the noise level of each channel is the decisive factor of its DAC value. From section III-A, the half-half readout configuration has a lower noise level and thus a lower DAC value.
From Fig. 12, a better energy resolution performance is shown for the half-half readout configuration when using the same DAC values as the alternate readout configuration, which is mainly due to the low electronics noise level of the half-half readout configuration compared to the alternate readout configuration. By comparing the results of half-half/half-half with lower DAC in Fig. 12, we can see a smaller variance in the results of the half-half readout configuration with lower DAC. ASIC readout channels maintain a higher detection sensitivity and higher count rate with a lower DAC value setting; therefore, the variance of the results could be relatively reduced. From Fig. 12, we can also observe that the best energy resolution performance for each readout configuration is always from the No. 18 to No. 22 anode channels. The location of the point isotope source, which is placed at the detector center, corresponding to the anode channels around No. 20, contributes to this phenomenon.
### _Analysis on count rate_
Two observations can be made from Fig. 13. Firstly, it is clear that the count rate in the half-half configuration is higher than that of the alternate configuration. The half-half configurations have similar count rates for different DAC values. Secondly, the alternate configuration shows a serrated shape and a large variance between neighboring channels. The count rate performance in the half-half configuration is more consistent. These two observations echo the results of our previous simulation paper [25].
Compared to the count rate of the alternate configuration, the count rate of the half-half configuration improved by 43.9% and 51.4% (with low DAC values), respectively. Charge sharing and scattering are the two main factors contributing to the different count rates. In the CZT detector, due to the carrier density gradient and electrostatic repulsion, the induced charge cloud will expand during the drift to the collection electrode. The largest possible root-mean-square radius of the charge cloud can be 100 \(\mu\)m [27]. In our designed CZT detector, the distance between neighboring anodes was 1 mm, thus charge sharing contributes to the count rate and needs to be considered. In contrast to the alternate configuration, in the half-half configuration the only charge-sharing situation between ASICs happens at the two anodes in the middle, where two neighboring anodes are monitored by two different RENA-3 ASICs. Therefore, the half-half configuration is much less susceptible to charge sharing than the alternate configuration. For the scattering, since the average scattering length of a 511 keV photon is about 7 mm, it is difficult for a scattered photon to escape from one half to the other half (20 mm length for each half of the detector). As a result, the alternate configuration is more vulnerable to scattering.
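As a quick arithmetic check, these percentages follow directly from the average count rates quoted in section III-C:

```python
# Reproducing the quoted improvements from the averages given for Fig. 13.
alternate = 14972.56
half_half = 21550.58
half_half_low_dac = 22669.62

print(f"{100 * (half_half / alternate - 1):.1f}%")          # 43.9%
print(f"{100 * (half_half_low_dac / alternate - 1):.1f}%")  # 51.4%
```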
Fig. 13: Summary of count rate for half-half readout configuration, alternate readout configuration and half-half readout configuration with lower DAC threshold.
## V Conclusion
In this work, the influence of the channel configuration between two ASICs on the electronics noise, FWHM energy resolution, and count rate of a CZT detector with a cross-strip pattern was studied. A lower electronics noise and better FWHM energy resolution were observed in the detector with the half-half readout configuration. Due to charge sharing and scattering, the half-half configuration of the anode channels showed a higher count rate than the alternate configuration. With the half-half anode configuration, the count rate was 43.9% higher than that of the alternate anode configuration.
|
2308.11245 | Fully consistent rotating black holes in the cubic Galileon theory | Configurations of rotating black holes in the cubic Galileon theory are
computed by means of spectral methods. The equations are written in the 3+1
formalism and the coordinates are based on the maximal slicing condition and
the spatial harmonic gauge. The black holes are described as apparent horizons
in equilibrium. It enables the first fully consistent computation of rotating
black holes in this theory. Several quantities are extracted from the
solutions. In particular, the vanishing of the mass is confirmed. A link is
made between that and the fact that the solutions do not obey the zeroth-law of
black hole thermodynamics. | Philippe Grandclément | 2023-08-22T07:41:09Z | http://arxiv.org/abs/2308.11245v3 | # Fully consistent rotating black holes in the cubic Galileon theory
###### Abstract
Configurations of rotating black holes in the cubic Galileon theory are computed by means of spectral methods. The equations are written in the 3+1 formalism and the coordinates are based on the maximal slicing condition and the spatial harmonic gauge. The black holes are described as apparent horizons in equilibrium. It enables the first fully consistent computation of rotating black holes in this theory. Several quantities are extracted from the solutions. In particular, the vanishing of the mass is confirmed. A link is made between that and the fact that the solutions do not obey the zeroth-law of black hole thermodynamics.
_Keywords:_ modified gravity, cubic Galileon, hairy black hole, rotating black hole, spectral methods
## 1 Introduction
In the last decade, observations have provided strong evidence that black holes are true astronomical objects. One cannot help mentioning the first observation of the gravitational waves resulting from the coalescence of two black holes in 2015 [1] by the LIGO-Virgo collaboration [2, 3]. Since that first breakthrough, several tens of such events have been detected. Further strong evidence comes from the high angular resolution observations of the environment of the supermassive objects located at the centers of galaxies. Those observations are made either in radio by the EHT collaboration [4] or in the infrared with the Gravity instrument [5].
While all the current observations are consistent with the compact objects being black holes described by General Relativity, it is believed that the next generation of gravitational wave detectors, LISA [6, 7] or the Einstein Telescope [8], will put more stringent constraints on their nature. Deviations from General Relativity could then be detected where black holes differ from the Kerr solution [9]. In order to prepare for the future observations and maximize the scientific payback, many research projects aim at computing black hole solutions in various alternative theories of gravity.
In a previous work [10], black holes in the cubic Galileon theory were constructed. This theory of gravity belongs to the class of scalar-tensor theories known as Horndeski theories [11], theories which lead to second-order field equations. It has been shown that black holes constructed in the cubic Galileon theory could differ from those obtained in General Relativity [12]. Using quasi-isotropic coordinates, solutions of rotating black holes in this context were first obtained in [10]. However, it turned out that the choice of quasi-isotropic coordinates was inconsistent because of a violation of the circularity condition (see Sec. 3.5 of [10] for a detailed discussion). It followed that the rotating solutions obtained were only approximate.
In order to cure the limitations of [10] one must move away from the quasi-isotropic coordinates. So, in this paper, one relies on numerical coordinates based on the maximal slicing condition and the spatial harmonic gauge. This choice has proven to enable the computation of various models of black holes, once appropriate boundary conditions are enforced on the fields at the horizon [13]. Here, the formalism of [13] is applied to the rotating black holes in the cubic Galileon theory and leads, for the first time, to exact (up to the numerical precision) solutions.
The paper is organized as follows. Section 2 presents the theory and the various equations in the 3+1 framework. The formalism described in [13] is then briefly recalled. A detailed presentation of the resolution of the equation for the scalar-field is given, as it turned out to be the most difficult part in obtaining the solutions. Section 3 presents various aspects of the computed black holes. After explaining the numerical setup, the achieved precision is assessed by showing the behavior of error indicators when increasing resolution. Last, some mathematical aspects of the configurations are discussed, in particular the fact that the black holes do not obey the zeroth-law of thermodynamics.
Throughout this paper Greek indices are four-dimensional ones, ranging from 0 to 3 whereas Latin indices are spatial ones, ranging from 1 to 3. Units such that \(G=c=1\) are used.
## 2 Equations
### Model
Gravity in the cubic Galileon model involves a metric field \(\mathbf{g}\) and a scalar-field \(\phi\). The action contains an Einstein-Hilbert term, a kinetic one for the scalar-field and a non-standard contribution of higher order in \(\phi\):
\[S\left(\mathbf{g},\phi\right)=\int\left[\xi\left(R-2\Lambda\right)-\eta\left( \partial\phi\right)^{2}+\gamma\left(\partial\phi\right)^{2}\Box\phi\right] \sqrt{-g}\mathrm{d}x^{4}. \tag{1}\]
\(g\) denotes the four-dimensional metric and \(\nabla\) the covariant derivative associated to it. \(\Lambda\) is the cosmological constant and \(\xi\), \(\eta\) and \(\gamma\) some coupling constants. The kinetic term is \(\left(\partial\phi\right)^{2}=\nabla_{\mu}\phi\nabla^{\mu}\phi\) and \(\Box\phi=\nabla_{\mu}\nabla^{\mu}\phi\).
Following [10], the results presented in this paper are restricted to the case with \(\Lambda=0\) and \(\eta=0\). The action then only depends on one coupling constant \(\gamma\) and reduces to
\[S\left(\mathbf{g},\phi\right)=\int\left[R+\gamma\left(\partial\phi\right)^{2} \square\phi\right]\sqrt{-g}\mathrm{d}x^{4}. \tag{2}\]
The \(\gamma\) appearing in Eq. (2) relates to the one in Eq. (1) by a simple rescaling by \(\xi\) and the same notation is used for simplicity. The stress energy-tensor then reads
\[T_{\mu\nu}=\gamma\left[\partial_{(\mu}\phi\partial_{\nu)}\partial\phi^{2}-\square\phi\partial_{\mu}\phi\partial_{\nu}\phi-\frac{1}{2}g_{\mu\nu}\partial^{\rho}\phi\partial_{\rho}\partial\phi^{2}\right]. \tag{3}\]
The variation of the action with respect to the scalar-field leads to an equation that can be cast into the form of a current conservation \(\nabla_{\mu}J^{\mu}=0\) with
\[J_{\mu}=\gamma\left[\partial_{\mu}\phi-\frac{1}{2}\partial_{\mu}\left(\partial \phi\right)^{2}\right]. \tag{4}\]
Black holes that differ from the Kerr solution can be obtained by demanding that the scalar-field depends on time, in a linear manner
\[\phi=qt+\Psi, \tag{5}\]
where \(q\) is a constant and \(\Psi\) is a time-independent field. The stress-energy tensor (3) contains only derivative of \(\phi\) so that the form (5) leads to stationary solutions for which \(q\) is a parameter.
### 3+1 decomposition
Throughout this work, a 3+1 decomposition of spacetime is used (see, for instance, [14] for a detailed presentation of the formalism). Spacetime is foliated by spatial hypersurfaces of constant time \(\Sigma_{t}\). On each slice, spatial coordinates \(x^{i}\) are defined. In this context, the geometry of the full spacetime can be given in terms of several spatial quantities: a scalar (the lapse \(N\)), a vector (the shift \(B^{i}\)) and a three-dimensional metric \(\gamma_{ij}\). The four-dimensional line-element then reads
\[\mathrm{d}s^{2}=(-N^{2}+B_{i}B^{i})\mathrm{d}t^{2}+2B_{i}\mathrm{d}x^{i} \mathrm{d}t+\gamma_{ij}\mathrm{d}x^{i}\mathrm{d}x^{j}. \tag{6}\]
The normal to each hypersurface is given by \(n_{\mu}=(-N,0,0,0)\) and \(n^{\mu}=(1/N,-B^{i}/N)\). All spatial indices (Latin) are manipulated by means of \(\gamma\). The second fundamental form, the extrinsic curvature tensor, is given by
\[K_{ij}=\frac{1}{2N}\left(D_{i}B_{j}+D_{j}B_{i}-\partial_{t}\gamma_{ij}\right), \tag{7}\]
where \(D\) denotes the covariant derivative with respect to \(\gamma_{ij}\).
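As a sanity check of these relations, the following sympy sketch (an illustration, not part of the paper's numerical code) builds the four-metric of Eq. (6) from \(N\), \(B^{i}\) and a spatial metric taken diagonal for simplicity, and verifies that raising the index of \(n_{\mu}=(-N,0,0,0)\) indeed gives \(n^{\mu}=(1/N,-B^{i}/N)\) with \(n_{\mu}n^{\mu}=-1\).

```python
# Symbolic check of the 3+1 normal-vector relations, using a diagonal
# spatial metric gamma_ij = diag(g1, g2, g3) for simplicity.
import sympy as sp

N = sp.symbols('N', positive=True)
B1, B2, B3 = sp.symbols('B1 B2 B3')
g1, g2, g3 = sp.symbols('g1 g2 g3', positive=True)

gamma = sp.diag(g1, g2, g3)
B_up = sp.Matrix([B1, B2, B3])          # shift with index up, B^i
B_dn = gamma * B_up                     # B_i = gamma_ij B^j

g = sp.zeros(4, 4)                      # four-metric of Eq. (6)
g[0, 0] = -N**2 + (B_dn.T * B_up)[0]    # g_00 = -N^2 + B_i B^i
for i in range(3):
    g[0, i + 1] = g[i + 1, 0] = B_dn[i]   # g_0i = B_i
    for j in range(3):
        g[i + 1, j + 1] = gamma[i, j]     # g_ij = gamma_ij

g_inv = sp.simplify(g.inv())
n_dn = sp.Matrix([-N, 0, 0, 0])         # n_mu
n_up = sp.simplify(g_inv * n_dn)        # n^mu = g^{mu nu} n_nu

assert sp.simplify(n_up[0] - 1 / N) == 0
assert all(sp.simplify(n_up[i + 1] + B_up[i] / N) == 0 for i in range(3))
assert sp.simplify((n_dn.T * n_up)[0] + 1) == 0   # n_mu n^mu = -1
print("3+1 normal relations verified")
```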
The various parts of the stress-energy tensor can be expressed in terms of \(\Psi\) and the 3+1 quantities.
\[\square\phi=\frac{1}{\sqrt{-g}}\partial_{\mu}\left(\sqrt{-g}T^{\mu}\right)= \frac{1}{N}D_{i}\left(NT^{i}\right), \tag{8}\]
with
\[T^{i}=g^{i\alpha}\partial_{\alpha}\phi=\frac{B^{i}}{N^{2}}q+\left(\gamma^{ij}- \frac{B^{i}B^{j}}{N^{2}}\right)D_{j}\Psi, \tag{9}\]
so that
\[\Box\phi=\frac{1}{N}D_{i}\ \left[\frac{B^{i}}{N}q+N\left(\gamma^{ij}-\frac{B^{i}B ^{j}}{N^{2}}\right)D_{j}\Psi\right]. \tag{10}\]
One also has
\[\partial\phi^{2}=g^{\mu\nu}\partial_{\mu}\phi\partial_{\nu}\phi =-\frac{q^{2}}{N^{2}}+2\frac{B^{i}}{N^{2}}qD_{i}\Psi+\left(\gamma^{ij}- \frac{B^{i}B^{j}}{N^{2}}\right)D_{i}\Psi D_{j}\Psi. \tag{11}\]
The last term appearing in Eq. (3) is
\[\partial^{\rho}\phi\partial_{\rho}\partial\phi^{2}=\frac{B^{i}}{N^{2}}qD_{i} \left(\partial\phi^{2}\right)+\left(\gamma^{ij}-\frac{B^{i}B^{j}}{N^{2}} \right)D_{i}\Psi D_{j}\left(\partial\phi^{2}\right). \tag{12}\]
The 3+1 projections of the stress-energy tensor can be written in terms of \(\Psi\) also. The computation of \(E=n^{\mu}n^{\nu}T_{\mu\nu}\) involves the following terms
\[n^{\mu}n^{\nu}\partial_{(\mu}\phi\partial_{\nu)}\left(\partial\phi^{2}\right) = -q\frac{B^{i}}{N^{2}}D_{i}\left(\partial\phi^{2}\right)+\frac{B^{i}B^{j}}{N^{2}}D_{i}\Psi D_{j}\left(\partial\phi^{2}\right) \tag{13}\] \[n^{\mu}n^{\nu}\partial_{\mu}\phi\partial_{\nu}\phi = \left(\frac{q}{N}-\frac{B^{i}}{N}D_{i}\Psi\right)^{2} \tag{14}\] \[n^{\mu}n^{\nu}g_{\mu\nu} = -1. \tag{15}\]
It leads to
\[E=\gamma\left[-q\frac{B^{i}}{N^{2}}D_{i}\left(\partial\phi^{2}\right)+\frac{B ^{i}B^{j}}{N^{2}}D_{i}\Psi D_{j}\left(\partial\phi^{2}\right)-\Box\phi\left( \frac{q}{N}-\frac{B^{i}}{N}D_{i}\Psi\right)^{2}+\frac{1}{2}\partial^{\rho} \phi\partial_{\rho}\partial\phi^{2}\right]. \tag{16}\]
The computation of \(P_{i}=-n^{\mu}\gamma_{i}^{\nu}T_{\mu\nu}\) involves the following terms
\[n^{\mu}\gamma_{i}^{\nu}\partial_{(\mu}\phi\partial_{\nu)}\partial\phi^{2} = \frac{q}{2N}D_{i}\left(\partial\phi^{2}\right)-\frac{B^{j}}{2N}\left(D_{i}\Psi D_{j}\left(\partial\phi^{2}\right)+D_{j}\Psi D_{i}\left(\partial\phi^{2}\right)\right) \tag{17}\] \[n^{\mu}\gamma_{i}^{\nu}\partial_{\mu}\phi\partial_{\nu}\phi = \left(\frac{q}{N}-\frac{B^{j}}{N}D_{j}\Psi\right)D_{i}\Psi \tag{18}\] \[n^{\mu}\gamma_{i}^{\nu}g_{\mu\nu} = 0. \tag{19}\]
One then finds
\[P_{i}=-\gamma\left[\frac{q}{2N}D_{i}\left(\partial\phi^{2}\right)-\frac{B^{j}}{2N}\left(D_{i}\Psi D_{j}\left(\partial\phi^{2}\right)+D_{j}\Psi D_{i}\left(\partial\phi^{2}\right)\right)-\Box\phi\left(\frac{q}{N}-\frac{B^{j}}{N}D_{j}\Psi\right)D_{i}\Psi\right]. \tag{20}\]
The last projection is \(S_{ij}=\gamma_{i}^{\mu}\gamma_{j}^{\nu}T_{\mu\nu}\) with terms
\[\gamma_{i}^{\mu}\gamma_{j}^{\nu}\partial_{(\mu}\phi\,\partial_{\nu)}\left(\partial\phi^{2}\right) = \frac{1}{2}\left(D_{i}\Psi D_{j}\left(\partial\phi^{2}\right)+D_{j}\Psi D_{i}\left(\partial\phi^{2}\right)\right) \tag{21}\] \[\gamma_{i}^{\mu}\gamma_{j}^{\nu}\partial_{\mu}\phi\partial_{\nu}\phi = D_{i}\Psi D_{j}\Psi \tag{22}\] \[\gamma_{i}^{\mu}\gamma_{j}^{\nu}g_{\mu\nu} = \gamma_{ij}, \tag{23}\]
so that
\[S_{ij}=\gamma\left[\frac{1}{2}\left(D_{i}\Psi D_{j}\left(\partial\phi^{2} \right)+D_{j}\ \Psi D_{i}\left(\partial\phi^{2}\right)\right)-\Box\phi D_{i}\Psi D_{j}\Psi- \frac{1}{2}\gamma_{ij}\partial^{\rho}\phi\partial_{\rho}\partial\phi^{2}\right]. \tag{24}\]
The equation for the scalar-field can also be expressed in terms of the 3+1 quantities. The components of the conserved current (4) read
\[J_{0} = \gamma q\Box\phi \tag{25}\] \[J_{i} = \gamma\left[D_{i}\Psi\Box\phi-\ \frac{1}{2}D_{i}\left(\partial \phi^{2}\right)\right], \tag{26}\]
so that
\[J^{i}=g^{i\mu}J_{\mu}=\gamma\left[\frac{B^{i}}{N^{2}}q\Box\phi+\left(\gamma^{ ij}-\frac{B^{i}B^{j}}{N^{2}}\right)\left(D_{j}\Psi\Box\phi-\ \frac{1}{2}D_{j}\left(\partial\phi^{2}\right)\right)\right]. \tag{27}\]
Beware that \(\mathbf{J}\) is not a three-dimensional vector, so that \(J^{i}\) does not simply relate to \(J_{i}\) by a contraction with the spatial metric. The conservation law for the current is then
\[D_{i}\left(NJ^{i}\right)=0. \tag{28}\]
### Gravitational sector
In [10], the field equations were solved by making use of quasi-isotropic coordinates. The unknowns of the numerical code were then the non-vanishing components of the various tensors (i.e. \(g_{rr}=g_{\theta\theta}\), \(g_{\varphi\varphi}\) and \(B^{\varphi}\)). This is to be contrasted with what is used here, which is an application of the method presented in [13]. The unknowns are the tensors \(N\), \(B^{i}\) and \(\gamma_{ij}\) themselves and not their individual components. Maximal slicing and the spatial harmonic gauge are used. This choice of coordinates is enforced by modifying the original system of 3+1 equations. Maximal slicing is a condition on the choice of time coordinate. It amounts to maximizing the volume of the foliation hypersurfaces (see Sec. 9.2.2 of [14]). Mathematically, it translates into the vanishing of the trace of the extrinsic curvature tensor: \(\gamma^{ij}K_{ij}\equiv K=0\). This condition is enforced by removing all the occurrences of \(K\) in the equations.
The spatial harmonic gauge defines the choice of spatial coordinates. It translates into the condition \(V^{i}\equiv\gamma^{kl}\left(\Gamma_{kl}^{i}-\bar{\Gamma}_{kl}^{i}\right)=0\), where \(\Gamma_{kl}^{i}\) denotes the Christoffel symbols of \(\gamma_{ij}\) and \(\bar{\Gamma}_{kl}^{i}\) those of a background metric. In this paper, the background metric is chosen to be the flat one, \(f_{ij}\) (see Sec. II.A of [13] for more details). In the equations, the occurrences of the Ricci tensor are replaced by \(R_{ij}-\frac{1}{2}\left(D_{i}V_{j}+D_{j}V_{i}\right)\).
This modification ensures that the second order derivatives of the metric appear as a Laplacian-like operator \(\gamma^{kl}\partial_{k}\partial_{l}\gamma_{ij}\). The resulting system of equations is
\[R-D_{k}V^{k}-K_{ij}K^{ij} = 16\pi E \tag{29}\] \[D^{j}K_{ij} = 8\pi P_{i} \tag{30}\] \[\mathcal{L}_{\vec{B}}K_{ij}-D_{i}D_{j}N+N\left(R_{ij}-\frac{1}{2}\left(D_{i}V_{j}+D_{j}V_{i}\right)-2K_{ik}K_{j}^{k}\right) = 4\pi N\left(2S_{ij}-\left(\gamma^{kl}S_{kl}-E\right)\gamma_{ij}\right), \tag{31}\]
where \(K_{ij}=\frac{1}{2N}\left(D_{i}B_{j}+D_{j}B_{i}\right)\) (no time derivative in that case) and \(\mathcal{L}\) denotes the Lie derivative. Equation (29) is the Hamiltonian constraint, Eq. (30) the momentum one and Eq. (31) the evolution equation for \(K_{ij}\). The modified system Eqs. (29-31) is expected to be well posed. However, for the solution to be a true solution of Einstein's equations, one must check, a posteriori, that the quantities \(K\) and \(V^{i}\) are indeed zero. This criterion has proven, in the past, to be a very strong test to assess the validity of solutions (see the tests performed in [13]).
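The gauge vector \(V^{i}\) entering this a-posteriori check is straightforward to evaluate for a given spatial metric. The fragment below is a toy illustration of that computation for a made-up conformally flat metric in Cartesian-like coordinates, where the background Christoffel symbols vanish; it is not tied to the actual solver:

```python
import sympy as sp

# Evaluate V^i = gamma^{kl} (Gamma^i_{kl} - flat Gamma^i_{kl}) for a
# sample conformally flat 3-metric. In Cartesian coordinates the flat
# background Christoffels vanish, so V^i reduces to the trace below.
x, y, z = sp.symbols('x y z')
coords = (x, y, z)
psi = 1 + 1 / sp.sqrt(x**2 + y**2 + z**2)       # illustrative conformal factor
gamma = psi**4 * sp.eye(3)                      # gamma_ij
ginv = gamma.inv()

def christoffel(i, k, l):
    # Gamma^i_{kl} = (1/2) gamma^{im} (d_l gamma_mk + d_k gamma_ml - d_m gamma_kl)
    return sp.Rational(1, 2) * sum(
        ginv[i, m] * (sp.diff(gamma[m, k], coords[l])
                      + sp.diff(gamma[m, l], coords[k])
                      - sp.diff(gamma[k, l], coords[m]))
        for m in range(3))

V = [sp.simplify(sum(ginv[k, l] * christoffel(i, k, l)
                     for k in range(3) for l in range(3)))
     for i in range(3)]
print(V)  # nonzero: these coordinates are not spatially harmonic
```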
The presence of the black hole is imposed by enforcing appropriate boundary conditions on an inner sphere of radius \(r_{H}\) and by solving Eqs. (29-31) outside that sphere. The boundary conditions used are those proposed in [13]. Only the main features of those conditions are recalled here, and the reader should refer to [13] for a detailed presentation of the method. The boundary conditions encode the fact that the inner sphere is an apparent horizon, in equilibrium, rotating at a velocity \(\Omega\), a parameter that enters the boundary condition on the shift. Some quantities are freely specifiable on the horizon, owing to the fact that the coordinates used are defined by differential conditions (i.e. the equations \(K=0\) and \(V^{i}=0\) lead to differential equations on the fields). In this work, those quantities are chosen, at the inner boundary, as follows: \(N=0.5\) and \(\gamma_{r\theta}=\gamma_{r\varphi}=0\). The spherically symmetric part of \(\gamma_{rr}\) (i.e. its \(Y_{0}^{0}\) component) is also free and chosen to be \(8\). Those values have proven in the past (see [13]) to facilitate the convergence of the numerical code. Changing them would only lead to different choices of coordinates and not to new configurations.
The boundary condition on the shift radial component is \(B^{r}=N\tilde{s}^{r}\), where \(\tilde{s}^{i}\) denotes the unit normal to the horizon (with respect to the metric \(\gamma_{ij}\)). This condition comes from the fact that the horizon is not expanding. It has an important implication because it makes some components of Eq. (31) degenerate. This means that the factor in front of the highest-order radial derivative term \(\partial_{r}^{2}\gamma_{ij}\) vanishes on the horizon, implying that the equation becomes first order there and so does not require any boundary condition to be solved. This is in particular the case for the \((\theta,\theta)\), \((\theta,\varphi)\) and \((\varphi,\varphi)\) components. This point is also relevant for the scalar-field equation, as discussed in Sec. 2.4. Once again, a detailed analysis can be found in [13].
At spatial infinity, asymptotic flatness is imposed. From the numerical point of view, the use of a compactification in \(1/r\) allows for those conditions to be enforced at exact spatial infinity (see Sec. 3.1).
### Field equation
The equations presented in Sec. 2.2 involve only derivatives of the field \(\Psi\). This is true for the various projections of the stress-energy tensor and also for the scalar-field equation (28). In all those cases, the only relevant quantity is \(D_{i}\Psi\). For the axially symmetric solutions considered here, one has \(D_{\varphi}\Psi=0\). So, from the numerical point of view, one can consider the two quantities \(\Psi_{r}=\partial_{r}\Psi\) and \(\Psi_{\theta}=\frac{1}{r}\partial_{\theta}\Psi\) as being the unknowns of the problem (a similar technique is used in [10]). The factor \(1/r\) appearing in \(\Psi_{\theta}\) comes from the fact that an orthonormal spherical tensorial base is used; it ensures that \(D_{i}\Psi=(\Psi_{r},\Psi_{\theta},0)\).
Let us first consider the non-rotating case, for which the solution is spherically symmetric. It follows that \(\Psi_{\theta}=0\) and only one unknown remains: \(\Psi_{r}\). One needs to assess the order of Eq. (28) in terms of the derivatives of \(\Psi_{r}\). Equation (10) contains first-order derivatives of \(\Psi_{r}\). However, the prefactor in front of \(D_{r}\Psi_{r}\) is \(\left(g^{rr}-\frac{B^{r}B^{r}}{N^{2}}\right)\), which vanishes on the horizon, due to the boundary conditions used (see Sec. 2.3 and [13]). So \(\Box\phi\) is a first-order equation in terms of \(\Psi_{r}\) but only zeroth order near the horizon. The same is true for Eq. (11). It is then easy to verify that the conservation equation (28) contains second-order derivatives of \(\Psi_{r}\) but is degenerate on the horizon, where only first-order derivatives are present. It follows that only one boundary condition must be prescribed when solving Eq. (28). A naive choice would be to relax any condition on the horizon and simply demand that \(\Psi_{r}=0\) at infinity. However, this leads to solutions that do not verify \(K=0\) and \(V^{i}=0\) and so are not solutions of Einstein's equations. A suitable choice consists in relaxing the condition at infinity and enforcing, on the horizon, the condition \(V^{r}=0\). It is not trivial that this condition is sufficient to ensure that \(K\) and \(V^{i}\) vanish everywhere, but it turns out to be the case. Moreover, the obtained solutions do indeed fulfill \(\Psi_{r}=0\) at infinity. This situation concerning the boundary conditions is to be contrasted with what happens for the hairy black holes constructed in Sec. V of [13], where only a vanishing boundary condition at infinity is needed. One can conjecture that this difference comes from the fact that the equations are of different order in terms of the scalar field: second order for the hairy black holes of [13] and third order in the cubic Galileon case.
The above conclusions still hold in the rotating case, where both \(\Psi_{r}\) and \(\Psi_{\theta}\) must be taken into account. The conservation equation (28) is solved using a single boundary condition on the horizon: \(V^{r}=0\). An additional equation is provided by the symmetry of second derivatives: \(\partial_{r}\left(r\Psi_{\theta}\right)=\partial_{\theta}\Psi_{r}\). From a technical point of view, this can be seen as a first-order differential equation on \(\Psi_{\theta}\), which is solved by demanding that \(\Psi_{\theta}=0\) at infinity. It turns out that this is sufficient to lead to valid solutions. In particular, there is no need to impose anything for \(V^{\theta}\) on the horizon. All this discussion about the scalar-field equation may seem rather technical, but it is to be noted that solving it
properly has been the main difficulty in getting the solutions presented here.
## 3 Results
### Numerical method
The equations presented in Sec. 2 are solved by means of the Kadath library [15, 16]. The setting is very similar to the one used in Sec. V of [13]. Space is divided into several spherical shells, the last one extending up to infinity thanks to a compactification in \(1/r\). This setting is very standard and has proven to lead to good numerical results, in terms of both convergence and precision, in many different physical situations. Spherical coordinates \((r,\theta,\varphi)\) are used for the points, and the tensors are given on the associated spherical orthonormal tensorial base. In each domain, a spectral decomposition of the fields is used, where the angles \((\theta,\varphi)\) are expanded onto trigonometric functions and the radial coordinate is described by means of Chebyshev polynomials. The numbers of points in each dimension are labelled \(N_{r}\), \(N_{\theta}\) and \(N_{\varphi}\). This work is concerned with axisymmetric configurations only, so that the resolution in \(\varphi\) is maintained fixed to \(N_{\varphi}=4\) (owing to technicalities of the Kadath library, it is not possible to use fewer points).
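To give a feel for the spectral representation, the short fragment below (independent of Kadath; the test function is arbitrary) expands a smooth function on Chebyshev collocation points and exhibits the rapid decay of the last coefficient with resolution that characterizes spectral convergence:

```python
import numpy as np

# Expand a smooth function on Chebyshev-Gauss-Lobatto points and watch
# the highest coefficient decay exponentially with resolution.
for Nr in (8, 16, 32):
    k = np.arange(Nr + 1)
    xk = np.cos(np.pi * k / Nr)                 # collocation points in [-1, 1]
    f = np.exp(xk) * np.sin(2 * xk)             # arbitrary smooth test function
    c = np.polynomial.chebyshev.chebfit(xk, f, Nr)   # Chebyshev coefficients
    print(Nr, abs(c[-1]))                       # last coefficient -> 0 rapidly
```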
The Kadath library transforms the system of equations into a discretized system on the spectral coefficients by means of a weighted residual method, essentially a version of the tau method. Some regularities (i.e. on the axis of the spherical coordinates) are enforced by means of Galerkin bases. The non-linear discretized system is solved by means of a Newton-Raphson iteration. The code is parallelized using MPI, and a typical job runs on 200 cores.
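The solve itself is a standard Newton-Raphson iteration on the discretized residuals. The sketch below illustrates the principle on a small stand-in system (the residual function is a toy, not the actual discretization of Eqs. (28-31)):

```python
import numpy as np

# Bare-bones Newton-Raphson: solve F(u) = 0 for a small nonlinear system.
def F(u):
    return np.array([u[0]**2 + u[1] - 2.0, u[0] - u[1]**3])

def J(u):  # Jacobian of F
    return np.array([[2 * u[0], 1.0], [1.0, -3 * u[1]**2]])

u = np.array([1.5, 0.5])                        # initial guess (cf. test-field)
for it in range(20):
    r = F(u)
    if np.linalg.norm(r) < 1e-12:
        break
    u -= np.linalg.solve(J(u), r)
print(it, u)                                    # converges to (1, 1)
```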
As in [10], as a first step, the test-field solution is obtained. It corresponds to the limit \(\gamma\to 0\), where the back-reaction of the scalar field onto the metric sector is neglected. The metric fields are thus fixed to those of a Schwarzschild black hole, obtained numerically in the maximal slicing and spatial harmonic gauge. This corresponds to the configurations obtained in Sec. III of [13], with \(\Omega=0\). Once the metric fields are known, the scalar field is obtained by solving the equation \(J_{r}=0\), which, in the non-rotating case, is equivalent to solving Eq. (28). The test-field solution is known to have a different asymptotic behavior at spatial infinity than the full solutions. Indeed, as can be seen, for instance, in Eq. (41) of [10], \(\Psi_{r}\) behaves like \(1/\sqrt{r}\) when \(r\to\infty\). This square-root behavior is inconsistent with the compactification used by the Kadath library, so that \(J_{r}=0\) is solved only up to a finite radius \(r_{\rm out}\). At that outer radius, the boundary condition \(\partial_{r}\Psi_{r}+\frac{2}{r}\Psi_{r}=0\) is enforced. This ensures that the true test-field solution is recovered when \(r_{\rm out}\to\infty\). This technicality is only used for the test-field case; standard compactification and exact boundary conditions at infinity are used otherwise.
The test-field solution is used as a first initial guess to get solutions of the full Einstein-Klein-Gordon system. In the context of this paper, the coordinate radius of the black hole is maintained fixed to \(r_{H}=1\). The parameter \(q\) is also fixed to
\(q=1\), so the configurations depend on two remaining parameters: the angular velocity \(\Omega\) and the coupling constant \(\gamma\). Sequences are constructed by slowly varying those parameters. Let us mention that the various quantities are not scaled in the same way as in [10]. Indeed, in that previous paper, the quantities were scaled by means of the radius of the horizon, which is not a coordinate-independent quantity. In this paper, the circumferential radius \(r_{\rm circ}\) of the black hole, that is, the proper length of the horizon in the orbital plane divided by \(2\pi\), is used instead. This change in scaling makes a precise comparison between the two papers somewhat difficult. However, this drawback is overcome by the advantage of dealing with coordinate-independent quantities only. Note that the computation of \(r_{\rm circ}\) involves the value of \(\gamma_{\varphi\varphi}\) on the horizon, so that it is not constant for all configurations, even if \(r_{H}\) is fixed.
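In practice, \(r_{\rm circ}\) reduces to a one-dimensional quadrature along the equator of the horizon. The fragment below illustrates the computation with a placeholder value of \(\gamma_{\varphi\varphi}\) (the actual value comes from the numerical solution):

```python
import numpy as np

# r_circ = (1/2pi) * \oint sqrt(gamma_phiphi) dphi at theta = pi/2.
# For the axisymmetric solutions of this paper the integrand is
# phi-independent, so this reduces to sqrt(gamma_phiphi); the quadrature
# is written generically anyway. The value 8.0 is a placeholder.
phi = np.linspace(0.0, 2.0 * np.pi, 401)
integrand = np.sqrt(8.0 * np.ones_like(phi))    # sqrt(gamma_phiphi) on the equator
circumference = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(phi))
print(circumference / (2.0 * np.pi))            # -> sqrt(8) ~ 2.828
```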
### Exact solutions with rotation
In order to assess the validity of the computed configurations, the various sources of numerical error are monitored as a function of the resolution. More precisely, they consist of every equation that must be verified by a true solution but that is not solved explicitly by the code. This is the case for the gauge conditions \(K=0\) and \(V^{i}=0\) and for the \(Y_{0}^{0}\) part of the expansion condition on the horizon, \(\Theta=0\) (see [13] and Sec. 2.3 for more details). An additional source of error comes from the method used to solve the partial differential equations (i.e. the tau method). In order to enforce appropriate boundary and matching conditions, the last coefficients of the residual of the equations are not forced to be zero. Nevertheless, their values must go to zero with increasing resolution for well-posed problems. The maximal values of all those errors are shown in Fig. 1, for various resolutions. The curves are labelled by the number of points in \(r\) and \(\theta\), in the form \(N_{r}\times N_{\theta}\). The number of points in the radial direction is always higher than in the angular one, a typical feature when using the Kadath library that is known to facilitate convergence.
The first panel of Fig. 1 shows the errors for non-rotating configurations, as a function of the coupling constant \(\gamma\). The errors exhibit a spectral convergence with an order of magnitude improvement between the various resolutions. Moreover, the code can reach higher values of the coupling constant with higher resolution. The second panel of Fig. 1 shows the errors for configurations with rotation. The coupling constant is maintained fixed to a moderate but not negligible value. The errors are plotted as a function of the angular velocity \(\Omega\). As for the non-rotating case, the curves exhibit a clear spectral convergence. This is to be contrasted with the second panel of Fig. 1 of [10], where the errors were independent of the resolution. This shows that the coordinates used in this work are consistent with the true rotating solutions, contrary to the quasi-isotropic ones (see discussion in Sec. 3.5 of [10]). The configurations presented in this work are the first exact numerical solutions of rotating black holes in the cubic Galileon theory.
The scalar-field \(\Psi\) can be constructed from \(\Psi_{r}\) by a numerical integration of the
equation \(\partial_{r}\Psi=\Psi_{r}\). This is easily done using Kadath. One can check that the solution is, as expected, regular everywhere. In particular, no divergences appear on the horizon and the field vanishes at spatial infinity. As an illustration, some radial profiles of \(\Psi\) are shown in Fig. 2. One can notice that the angular dependence is relatively small. This is to be expected, as the amplitude of \(\Psi_{\theta}\) is, in that case, almost two orders of magnitude smaller than the one of \(\Psi_{r}\). Nevertheless, some effect of the coordinate \(\theta\) can be seen, in particular in the region close to \(r\approx 3r_{\rm circ}\), where the amplitude of \(\Psi_{\theta}\) is the biggest. The value of the field on the horizon also exhibits some \(\theta\) dependence. By carefully exploring the parameter space, it should be possible to find configurations for which the effect of \(\theta\) is bigger, but this is beyond the scope of this paper.
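A minimal version of this reconstruction step, assuming \(\Psi_{r}\) is available on a radial grid (the profile below is an arbitrary stand-in with a decaying tail) and fixing the integration constant by \(\Psi\to 0\) at the outer edge of the grid, reads:

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

# Recover Psi from Psi_r = dPsi/dr, then shift so that Psi -> 0 at the
# outer edge of the grid. Psi_r here is a fabricated profile.
r = np.linspace(1.0, 200.0, 4000)
Psi_r = -1.0 / r**2                             # illustrative radial derivative
Psi = cumulative_trapezoid(Psi_r, r, initial=0.0)
Psi -= Psi[-1]                                  # enforce Psi ~ 0 at large r
print(Psi[0], Psi[-1])                          # field on horizon, ~0 outside
```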
Figure 1: Errors on the full set of equations, for various resolutions. The first panel shows the errors for the non-rotating solutions, as a function of the coupling constant \(\gamma\), and the second one the errors for rotating configurations, as a function of the angular velocity \(\Omega\). Spectral convergence is observed in both cases.

Figure 2: Radial profiles of the scalar field \(\Psi\), for \(\gamma=0.01\) and \(r_{\rm circ}\Omega\approx 0.505481\). Three different values of the coordinate \(\theta\) are shown in each case. The curves are shown as a function of \(r/r_{\rm circ}\) and the three panels represent different radial regions.

Figure 3 shows, as an example, two global quantities for a sequence of rotating solutions: the ADM (Arnowitt-Deser-Misner) mass and the angular momentum. Both quantities are computed by surface integrals at infinity (see chapter 8 of [14] for instance). The first panel shows that the ADM mass vanishes for all configurations, as its value converges to zero when the resolution is increased (the behavior of the blue curve comes from a change of sign in the computed ADM mass). This is an effect already observed in [10], where it was shown that the Komar mass of the configurations must be zero as soon as the coupling constant \(\gamma\) is different from zero. The configurations being stationary, the Komar and ADM masses must coincide. It implies that the ADM mass must also vanish, as is confirmed by the first panel of Fig. 3. The situation is different for the angular momentum shown in the second panel of Fig. 3. The curves do not converge to zero, and the difference between the two highest resolutions gives a measure of the overall precision on the value of \(J\).
To further illustrate the differences between the cubic Galileon black holes and the classical ones, one can study the surface gravity. At each point of the horizon, it can be computed by (see Eq. (10.9) in [17]):
\[\kappa=\tilde{s}^{i}D_{i}N-NK_{ij}\tilde{s}^{i}\tilde{s}^{j}. \tag{32}\]
Figure 3: The ADM mass \(M_{\rm ADM}\) (first panel) and angular momentum \(J\) (second panel), for a sequence of rotating solutions and three different resolutions. The coupling constant is fixed to \(\gamma=0.01\). The ADM mass converges to zero whereas the angular momentum is non-zero.

The first panel of Fig. 4 shows the orbital velocity \(\Omega\) as a function of the average value of \(\kappa\) on the horizon. The black curve corresponds to the Kerr black hole case and the other ones to two different values of the coupling constant. The average is taken over the collocation points that lie on the horizon; it is therefore not the true angular average, but converges rapidly to it as the resolution increases. The plot clearly indicates that the cubic Galileon field has a strong impact, at least on the average value of the surface gravity, even in the non-rotating case. Moreover, the effect of the cubic Galileon does not limit itself to the average of the surface gravity. It also makes the solutions deviate from the zeroth law of black hole thermodynamics, which states that the surface gravity must be constant on the horizon. This can be measured by computing the mean of the absolute value of the relative deviation on the horizon, defined as \(\Delta_{\kappa}=\langle\frac{\left|\kappa-\langle\kappa\rangle\right|}{\kappa}\rangle\), a quantity that vanishes if and only if \(\kappa\) is constant. This deviation is shown on the second panel of Fig. 4 for rotating sequences at various resolutions. The value of \(\Delta_{\kappa}\) is clearly independent of the resolution and, in particular, it does not converge to zero: black holes in the cubic Galileon theory do not obey the zeroth law.
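Computing this diagnostic from the collocation values of \(\kappa\) is straightforward; the fragment below uses fabricated values of \(\kappa(\theta)\) merely to illustrate the definition of \(\Delta_{\kappa}\):

```python
import numpy as np

# Zeroth-law diagnostic: average of kappa on the horizon grid and the
# mean relative deviation Delta_kappa (zero iff kappa is constant).
theta = np.cos(np.pi * np.arange(1, 17) / 17)   # stand-in collocation points
kappa = 0.25 * (1 + 0.03 * theta**2)            # fabricated kappa(theta) > 0
kappa_avg = kappa.mean()
delta_kappa = np.mean(np.abs(kappa - kappa_avg) / kappa)
print(kappa_avg, delta_kappa)
```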
It was shown in [18] that the zeroth law holds for stationary black holes under the dominant energy condition. For the configurations obtained in this paper, it turns out that the value of the energy density given by Eq. (16) is negative everywhere. This violates the weak energy condition, which states that \(T_{\mu\nu}X^{\mu}X^{\nu}\geq 0\) for every time-like vector \(X^{\mu}\), meaning that every physical observer measures a positive energy density. The weak energy condition being implied by the dominant one, the latter is also violated; there is thus no reason the zeroth law should hold. This can be linked to the positive energy theorem [19, 20, 21], which essentially states that any spacetime with zero ADM mass that obeys the dominant energy condition must be Minkowski spacetime. It follows that the zero-ADM-mass configurations constructed in this paper cannot obey the dominant energy condition and so can violate the zeroth law, as is observed.
## 4 Conclusion
In this paper, for the first time, exact configurations (up to numerical precision) of rotating black holes in the cubic Galileon theory are constructed. This is achieved by moving away from the quasi-isotropic coordinates used before. Instead, a set of differential gauges within the 3+1 formalism is used: maximal slicing for the time coordinate and the spatial harmonic gauge for the spatial coordinates. In this context, the presence of the black hole is enforced by demanding that the inner boundary is an apparent horizon in equilibrium. The full set of boundary conditions that ensues from those choices has been presented in detail in [13] and used with success in several cases. The fact that the equation for the scalar field contains third-order derivatives (instead of second order for more usual cases) leads to additional complications in terms of what boundary conditions must be used for the scalar field. Nevertheless, an appropriate choice has been found and enabled the successful computation of rotating black holes in the cubic Galileon theory.

Figure 4: The first panel shows the angular velocity \(\Omega\), as a function of the average value of the surface gravity on the horizon. The second panel shows the average relative variation of the surface gravity on the horizon, \(\Delta_{\kappa}\), as a function of \(\Omega\). Quantities are scaled by \(r_{\rm circ}\).
After carefully assessing the validity of the numerical results by monitoring various error indicators, especially those linked to the gauge choice, several properties of the solutions have been discussed. In particular, as in [10], it is shown that the mass of the black hole vanishes, contrary to the angular momentum. A link is made between that property and the fact that the configurations do not obey the zeroth law of thermodynamics. Both features arise from the fact that the stress-energy tensor coming from the scalar field does not verify the dominant energy condition.
Even if the black holes in the cubic Galileon theory do not seem to be a valid alternative to the astrophysical ones, especially with a vanishing mass, they are still worth studying. First, there was a need to cure the main limitation of [10], a limitation coming from the use of quasi-isotropic coordinates. Second, this paper is another successful application of the formalism presented in [13], a valuable tool to compute black hole solutions in various contexts. For the future, there are plans to apply it to cases that are still allowed by the observations. Once computed, various physical observables could be extracted from the numerical solutions. One could study the orbits of massive or massless particles around those objects, model accretion disks in their vicinity, compute the frequencies of the quasi-normal modes, or extract the gravitational waveform emitted by a binary system. In the long run, this will help unveil the nature of the most compact objects in the Universe and put constraints on the theory of gravity.
The author acknowledges the support of the French Agence Nationale de la Recherche (ANR), under grant ANR-22-CE31-0015 (project StronG). This work was granted access to the HPC resources of MesoPSL financed by the Region Ile de France and the project Equip@Meso (Reference No. ANR-10-EQPX-29-01) of the programme Investissements d'Avenir supervised by the Agence Nationale pour la Recherche.
|
2301.12247 | SEGA: Instructing Text-to-Image Models using Semantic Guidance | Text-to-image diffusion models have recently received a lot of interest for
their astonishing ability to produce high-fidelity images from text only.
However, achieving one-shot generation that aligns with the user's intent is
nearly impossible, yet small changes to the input prompt often result in very
different images. This leaves the user with little semantic control. To put the
user in control, we show how to interact with the diffusion process to flexibly
steer it along semantic directions. This semantic guidance (SEGA) generalizes
to any generative architecture using classifier-free guidance. More
importantly, it allows for subtle and extensive edits, changes in composition
and style, as well as optimizing the overall artistic conception. We
demonstrate SEGA's effectiveness on both latent and pixel-based diffusion
models such as Stable Diffusion, Paella, and DeepFloyd-IF using a variety of
tasks, thus providing strong evidence for its versatility, flexibility, and
improvements over existing methods. | Manuel Brack, Felix Friedrich, Dominik Hintersdorf, Lukas Struppek, Patrick Schramowski, Kristian Kersting | 2023-01-28T16:43:07Z | http://arxiv.org/abs/2301.12247v2 | # SEGA: Instructing Diffusion using Semantic Dimensions
###### Abstract
Text-to-image diffusion models have recently received a lot of interest for their astonishing ability to produce high-fidelity images from text only. However, achieving one-shot generation that aligns with the user's intent is nearly impossible, yet small changes to the input prompt often result in very different images. This leaves the user with little semantic control. To put the user in control, we show how to interact with the diffusion process to flexibly steer it along semantic directions. This semantic guidance (Sega) allows for subtle and extensive edits, changes in composition and style, as well as optimizing the overall artistic conception. We demonstrate Sega's effectiveness on a variety of tasks and provide evidence for its versatility and flexibility.
## 1 Introduction
The recent popularity of text-to-image diffusion models (Saharia et al., 2022; Ramesh et al., 2022; Rombach et al., 2022) can largely be attributed to their versatility, expressiveness, and--most importantly--the intuitive interface they provide to users. The generation's intent can easily be expressed in natural language, with the model producing faithful interpretations of a text prompt. Despite the impressive capabilities of these models, the initially generated images are rarely of high quality. Accordingly, a human user will likely be unsatisfied with certain aspects of the initial image, which they will attempt to improve over multiple iterations. Unfortunately, the diffusion process is rather fragile as small changes to the input prompt lead to entirely different images. Consequently, fine-grained semantic control over the generation process is necessary, which should be as easy and versatile to use as the initial generation.
Previous attempts to influence dedicated concepts during the generation process require additional segmentation masks, extensions to the architecture, model fine-tuning, or embedding optimization (Avrahami et al., 2022; Hertz et al., 2022; Kawar et al., 2022; Wu et al., 2022). While these techniques produce satisfactory results, they disrupt the fast, exploratory workflow that is the strong suit of diffusion models in the first place. We propose Semantic Guidance (Sega) to uncover and interact with semantic directions inherent to the model. Sega requires no additional training, no extensions to the architecture, and no external guidance, and is calculated within a single forward pass. We demonstrate that this semantic control can be inferred from simple textual descriptions using the model's noise estimate alone. With this, we also refute previous research claiming these estimates to be unsuitable for semantic control (Kwon et al.,
2022). The guidance vectors uncovered with Sega are robust, scale monotonically, and are largely isolated. This enables simultaneous applications of subtle edits to images, changes in composition and style, as well as optimizing the artistic conception. Furthermore, Sega allows for probing the latent space of diffusion models to gain insights into how abstract concepts are represented by the model and how their interpretation reflects on the generated image.
In this paper, we establish the methodical benefits of Sega and demonstrate that this intuitive, lightweight approach offers sophisticated semantic control over image generations. Specifically, we contribute by (i) devising a formal definition of Semantic Guidance and discussing the numerical intuition of the corresponding semantic space, (ii) demonstrating the robustness, uniqueness, monotonicity, and isolation of semantic vectors, and (iii) providing an exhaustive empirical evaluation of Sega's semantic control.
## 2 Background
**Semantic Dimensions.** Research on expressive semantic vectors that allow for meaningful interpolation and arithmetic pre-dates generative diffusion models. Addition and subtraction on text embeddings such as word2vec (Mikolov et al., 2013a;b) have been shown to reflect semantic and linguistic relationships in natural language (Mikolov et al., 2013; 2016). One of the most prominent examples is that the vector representation of 'King - male + female' is very close to 'Queen'. Sega enables similar arithmetic for image generation with diffusion (cf. Fig. 2b). Similarly, StyleGANs (Karras et al., 2019; 2020) also contain semantic dimensions that can be utilized during generation. For example, Patashnik et al. (2021) combined these models with CLIP (Radford et al., 2021) to offer limited textual control over generated attributes. However, training StyleGANs at scale with subsequent fine-tuning is notoriously fragile due to the challenging balance between reconstruction and adversarial loss. Yet, large-scale pre-training is the basis of flexible and capable generative models (Petroni et al., 2019).
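This arithmetic is easy to reproduce on toy vectors; the fragment below uses fabricated 2-D embeddings (not trained word2vec vectors) purely to illustrate the 'King - male + female' pattern via cosine similarity:

```python
import numpy as np

# Fabricated 2-D "embeddings": one axis loosely encodes gender, the
# other royalty. king - man + woman lands closest to queen.
vecs = {"king": np.array([0.9, 0.8]), "queen": np.array([0.1, 0.8]),
        "man": np.array([0.9, 0.1]), "woman": np.array([0.1, 0.1])}
target = vecs["king"] - vecs["man"] + vecs["woman"]
best = max(vecs, key=lambda w: target @ vecs[w] /
           (np.linalg.norm(target) * np.linalg.norm(vecs[w])))
print(best)  # -> 'queen'
```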
**Image Diffusion.** Recently, large-scale, text-guided diffusion models have enabled a more versatile approach for image generation (Saharia et al., 2022; Ramesh et al., 2022; Balaji et al., 2022). Especially latent diffusion models (Rombach et al., 2022) have been gaining much attention. These models perform the diffusion process on a compressed space perceptually equivalent to the image space. For one, this approach reduces computational requirements. Additionally, the latent representations can be utilized for other downstream applications (Frans et al., 2021; Liu et al., 2015).
**Image Editing.** While these models produce astonishing, high-quality images, fine-grained control over this process remains challenging. Minor changes to the text prompt often lead to entirely different images. One approach to tackle this issue is inpainting, where the user additionally provides semantic masks to restrict changes to certain areas of the image (Avrahami et al., 2022; Nichol et al., 2022). Other methods involve computationally expensive fine-tuning of the model to condition it on the source image before applying edits (Kawar et al., 2022; Valevski et al., 2022). In contrast, Sega performs edits on the relevant image regions through text descriptions alone and requires no tuning.
**Semantic Control.** Other works have explored more semantically grounded approaches for interacting with image generation. Prompt-to-Prompt utilizes the semantic information of the model's cross-attention layers that attribute pixels to tokens from the text prompt (Hertz et al., 2022). Dedicated operations on the cross-attention maps enable various changes to the generated image. On the other hand, Sega does not require token-based conditioning and allows for combinations of multiple semantic changes. Wu et al. (2022) studied the disentanglement of concepts for diffusion models using linear combinations of text embeddings. However, for each text prompt and target concept, a dedicated combination must be inferred through optimization. Moreover, the approach only works for more substantial changes to an image and fails for small edits. Sega, in contrast, is capable of performing such edits without optimization.
**Noise-Estimate Manipulation.** Our work is closely related to previous research working directly on the noise estimates of diffusion models. Liu et al. (2022) combine multiple estimates to facilitate changes in image composition. However, more subtle semantic changes to an image remain unfeasible with this method. In fact, Kwon et al. (2022) argue that the noise-estimate space of diffusion models is unsuited for semantic manipulation of the image. Instead, they use a learned mapping function on changes to the bottleneck of the underlying U-Net. This approach enables various manipulations that preserve the original image quality. However, it does not allow for arbitrary spontaneous edits of the image, as each editing concept requires minutes of training. Sega, in comparison, requires no extension to the architecture and produces semantic vectors ad-hoc for any textual prompt. Lastly, Safe Latent Diffusion (SLD) uses targeted manipulation of the noise estimate to suppress inappropriate content during image generation (Schramowski et al., 2022). Instead of arbitrary changes to an image, SLD prevents one dedicated concept from being generated. Additionally, SLD is complex, and the hyperparameter formulation can be improved through a deeper understanding of the numerical properties of diffusion models' noise estimate space.
## 3 Semantic Guidance
Let us now devise Semantic Guidance for diffusion models.
### Guided Diffusion
The first step towards Sega is guided diffusion. Specifically, diffusion models (DM) iteratively denoise a Gaussian distributed variable to produce samples of a learned data distribution. For text-to-image generation, the model is conditioned on a text prompt \(p\) and guided toward an image faithful to that prompt. The training objective of a diffusion model \(\hat{x}_{\theta}\) can be written as
\[\mathbb{E}_{\mathbf{x},\mathbf{c}_{p},\epsilon,t}\left[w_{t}||\mathbf{\hat{x}}_ {\theta}(\alpha_{t}\mathbf{x}+\omega_{t}\epsilon,\mathbf{c}_{p})-\mathbf{x}|| _{2}^{2}\right] \tag{1}\]
where \((\mathbf{x},\mathbf{c}_{p})\) is a data sample conditioned on text prompt \(p\), \(t\) is drawn from a uniform distribution \(t\sim\mathcal{U}([0,1])\), \(\epsilon\) is sampled from a Gaussian \(\epsilon\sim\mathcal{N}(0,\mathbf{I})\), and \(w_{t},\omega_{t},\alpha_{t}\) influence the image fidelity depending on \(t\). Consequently, the DM is trained to denoise \(\mathbf{z}_{t}:=\mathbf{x}+\epsilon\), yielding \(\mathbf{x}\), with the squared error loss. At inference, the DM is sampled using the model's prediction \(\mathbf{x}=(\mathbf{z}_{t}-\tilde{\epsilon}_{\theta})\), with \(\tilde{\epsilon}_{\theta}\) as described below.
Classifier-free guidance (Ho & Salimans, 2022) is a conditioning method using a purely generative diffusion model, eliminating the need for an additional pre-trained classifier. During training, the text conditioning \(\mathbf{c}_{p}\) drops randomly with a fixed probability, resulting in a joint model for unconditional and conditional objectives. During inference, the score estimates for the \(\mathbf{x}\)-prediction are adjusted so that:
\[\tilde{\epsilon}_{\theta}(\mathbf{z}_{t},\mathbf{c}_{p}):=\epsilon_{\theta}( \mathbf{z}_{t})+s_{g}(\epsilon_{\theta}(\mathbf{z}_{t},\mathbf{c}_{p})-\epsilon _{\theta}(\mathbf{z}_{t})) \tag{2}\]
with guidance scale \(s_{g}\) and \(\epsilon_{\theta}\) defining the noise estimate with parameters \(\theta\). Intuitively, the unconditioned \(\epsilon\)-prediction is pushed in the direction of the conditioned one, with \(s_{g}\) determining the extent of the adjustment.
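For concreteness, the combination of estimates in Eq. (2) amounts to a single line of tensor arithmetic; the sketch below uses random tensors as stand-ins for the two U-Net evaluations (shapes and the value of \(s_{g}\) are illustrative only):

```python
import torch

# Classifier-free guidance (Eq. 2): push the unconditioned estimate
# towards the prompt-conditioned one, scaled by s_g.
s_g = 7.5
eps_uncond = torch.randn(1, 4, 64, 64)          # eps_theta(z_t)
eps_cond = torch.randn(1, 4, 64, 64)            # eps_theta(z_t, c_p)
eps_guided = eps_uncond + s_g * (eps_cond - eps_uncond)
```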
### Semantic Guidance on Concepts
We introduce Sega to influence the diffusion process along several directions. To this end, we substantially extend the principles introduced in classifier-free guidance by solely interacting with the concepts already present in the model's latent space. Therefore, Sega requires no additional training, no extensions to the architecture, and no external guidance. Instead, it is calculated during the existing diffusion iteration. More specifically, Sega uses multiple textual descriptions \(e_{i}\), representing the given target concepts of the generated image, in addition to the text prompt \(p\).
**Intuition.** The overall idea of Sega is best explained using a 2D abstraction of the high-dimensional \(\epsilon\)-space, as shown in Fig. 2. Intuitively, we can understand the space as a composition of arbitrary sub-spaces representing semantic concepts. Let us consider the example of generating an image of a king. The unconditioned noise estimate (black dot) starts at some random point in the \(\epsilon\)-space without semantic grounding. The guidance corresponding to the prompt "a portrait of a king" represents a vector (blue vector) moving us into a portion of \(\epsilon\)-space where the concepts 'male' and 'royal' overlap, resulting in an image of a king. We can now further manipulate the generation process using Sega. From the unconditioned starting point, we get the directions of 'male' and 'female' (orange/green lines) using estimates conditioned on the respective prompts. If we subtract this inferred 'male' direction from our prompt guidance and add the 'female' one, we now reach a point in the \(\epsilon\)-space at the intersection of the 'royal' and 'female' sub-spaces, i.e., a queen. This vector represents the final direction (red vector) resulting from semantic guidance.
**Isolating Semantics in Diffusion.** Next, we investigate the actual noise-estimate space of Stable Diffusion (SD). This enables extracting semantic concepts from within that space and applying them during image generation.
Figure 2: Semantic guidance (SEGA) applied to the image ‘a portrait of a king’ using ‘king’\(-\) ‘male’\(+\) ‘female’. (Best viewed in color)

Numerical values of \(\epsilon\)-estimates are generally Gaussian distributed. While the value in each dimension of the latent vector can differ significantly between seeds, text prompts, and diffusion steps, the overall distribution always remains similar to a Gaussian distribution (cf. App. B). Using the arithmetic principles of classifier-free guidance, we can now identify those dimensions of a latent vector encoding an arbitrary semantic concept. To that end, we calculate the noise estimate \(\epsilon_{\theta}(\mathbf{z}_{t},\mathbf{c}_{e})\), which is conditioned on a concept description \(e\). We then take the difference between \(\epsilon_{\theta}(\mathbf{z}_{t},\mathbf{c}_{e})\) and the unconditioned estimate \(\epsilon_{\theta}(\mathbf{z}_{t})\) and scale it. Again, the numerical values of the resulting latent vector are Gaussian distributed, as shown in Fig. 3. We will demonstrate that those latent dimensions falling into the upper and lower tail of the distribution alone encode the target concept. We empirically determined that using only 1-5% of the \(\epsilon\)-estimate's dimensions is sufficient to apply the desired changes to an image. Consequently, the resulting concept vectors are largely isolated; thus, multiple ones can be applied simultaneously without interference (cf. Sec. 4). We subsequently refer to the space of these sparse noise-estimate vectors as _semantic space_.
**One Direction.** Let us formally define the previous intuition for Sega by starting with a single direction, i.e., editing prompt. Again, we use three \(\epsilon\)-predictions to move the unconditioned score estimate \(\epsilon_{\theta}(\mathbf{z}_{t})\) towards the prompt conditioned estimate \(\epsilon_{\theta}(\mathbf{z}_{t},\mathbf{c}_{p})\) and simultaneously away/towards the concept conditioned estimate \(\epsilon_{\theta}(\mathbf{z}_{t},\mathbf{c}_{e})\), depending on the editing direction. Formally, we compute
\[\epsilon_{\theta}(\mathbf{z}_{t})+s_{g}\big{(}\epsilon_{\theta}(\mathbf{z}_{ t},\mathbf{c}_{p})-\epsilon_{\theta}(\mathbf{z}_{t})\big{)}+\gamma(\mathbf{z}_{t}, \mathbf{c}_{e}) \tag{3}\]
with the semantic guidance term \(\gamma\)
\[\gamma(\mathbf{z}_{t},\mathbf{c}_{e})=\mu(\psi;s_{e},\lambda)\psi(\mathbf{z}_ {t},\mathbf{c}_{e}) \tag{4}\]
where \(\mu\) applies an editing guidance scale \(s_{e}\) element-wise, and \(\psi\) depends on the editing direction:
\[\psi(\mathbf{z}_{t}, \mathbf{c}_{p},\mathbf{c}_{e})= \tag{5}\] \[\begin{cases}\epsilon_{\theta}(\mathbf{z}_{t},\mathbf{c}_{e})- \epsilon_{\theta}(\mathbf{z}_{t})&\text{if pos. guidance}\\ -\big{(}\epsilon_{\theta}(\mathbf{z}_{t},\mathbf{c}_{e})-\epsilon_{\theta}( \mathbf{z}_{t})\big{)}&\text{if neg. guidance}\end{cases}\]
Consequently, changing the guidance direction is reflected by the direction of the vector between \(\epsilon_{\theta}(\mathbf{z}_{t},\mathbf{c}_{e})\) and \(\epsilon_{\theta}(\mathbf{z}_{t})\).
The term \(\mu\) (Eq. 4) considers those dimensions of the prompt conditioned estimate relevant to the defined editing prompt \(e\). To this end, \(\mu\) takes the largest absolute values of the difference between the unconditioned and concept-conditioned estimates. This corresponds to the upper and lower tail of the numerical distribution as defined by percentile threshold \(\lambda\). All values in the tails are scaled by an edit scaling factor \(s_{e}\), with everything else being set to 0, such that
\[\mu(\psi;s_{e},\lambda)=\begin{cases}s_{e}&\text{where }|\psi|\geq\eta_{ \lambda}(|\psi|)\\ 0&\text{otherwise}\end{cases} \tag{6}\]
where \(\eta_{\lambda}(\psi)\) is the \(\lambda\)-th percentile of \(\psi\). Consequently, larger values of \(s_{e}\) increase the effect of semantic guidance.
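Equations (4)-(6) boil down to a percentile threshold on the magnitude of the concept direction. A minimal sketch, assuming \(\psi\) has already been formed from the two noise estimates, could look as follows (the function name and defaults are ours, not part of the released implementation):

```python
import torch

# Eqs. (4) and (6): keep only the tails of the concept direction psi,
# scaled by s_e; lambda_ = 0.95 keeps the top 5% of |psi| entries.
def sega_term(psi: torch.Tensor, s_e: float = 5.0,
              lambda_: float = 0.95) -> torch.Tensor:
    thresh = torch.quantile(psi.abs().flatten(), lambda_)
    return torch.where(psi.abs() >= thresh, s_e * psi, torch.zeros_like(psi))
```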
To offer even more control over the diffusion process, we make two adjustments to the methodology presented above. We add a warm-up parameter \(\delta\) that applies the guidance \(\gamma\) only after an initial warm-up period in the diffusion process, i.e., \(\gamma(\mathbf{z}_{t},\mathbf{c}_{e}):=\mathbf{0}\) if \(t<\delta\). Naturally, higher values of \(\delta\) lead to less significant adjustments of the generated image. If we aim to keep the overall composition of the image unchanged, selecting a sufficiently high \(\delta\) ensures that only fine-grained details of the output are altered.
Furthermore, we add a momentum term \(\nu_{t}\) to the semantic guidance \(\gamma\) in order to accelerate guidance over time steps for dimensions that are continuously guided in the same direction. Hence, \(\gamma_{t}\) is defined as:
\[\gamma(\mathbf{z}_{t},\mathbf{c}_{e})=\mu(\psi;s_{e},\lambda)\psi(\mathbf{z}_ {t},\mathbf{c}_{e})+s_{m}\nu_{t} \tag{7}\]
with momentum scale \(s_{m}\in[0,1]\) and \(\nu\) being updated as
\[\nu_{t+1}=\beta_{m}\nu_{t}+(1-\beta_{m})\gamma_{t} \tag{8}\]
where \(\nu_{0}=\mathbf{0}\) and \(\beta_{m}\in[0,1)\). Thus, larger \(\beta_{m}\) lead to less volatile changes in momentum. Momentum is already built up during the warm-up period, even though \(\gamma_{t}\) is not applied during these steps.
**Beyond One Direction.** Now, we are ready to move beyond using just one direction towards multiple concepts \(e_{i}\) and, in turn, combining multiple calculations of \(\gamma_{t}\).
For all \(e_{i}\), we calculate \(\gamma_{t}^{i}\) as described above with each defining their own hyperparameter values \(\lambda^{i}\), \(s_{e}^{i}\). The weighted sum of all \(\gamma_{t}^{i}\) results in
\[\hat{\gamma}_{t}(\mathbf{z}_{t},\mathbf{c}_{p};\mathbf{e})=\sum\nolimits_{i\in I }g_{i}\gamma_{t}^{i}(\mathbf{z}_{t},\mathbf{c}_{p},\mathbf{c}_{e_{i}}) \tag{9}\]
Figure 3: Numerical intuition of semantic guidance. The difference between the concept-conditioned and unconditioned estimates is first scaled. Subsequently, the values in the upper and lower tail are used as the dimensions representing the specified concept. Distribution plots calculated using kernel-density estimates with Gaussian smoothing.

In order to account for different warm-up periods, \(g_{i}\) is defined as \(g_{i}=0\) if \(t<\delta_{i}\). However, momentum is built up using all editing prompts and applied once all warm-up periods are completed, i.e., \(\forall\delta_{i}:\delta_{i}\geq t\). We provide a pseudo-code implementation of Sega in App. A.
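Complementing the pseudo-code in App. A, the condensed sketch below strings Eqs. (3)-(9) together into a single guidance step. It assumes the unconditioned, prompt-conditioned, and concept-conditioned estimates are precomputed; the helper name, defaults, and the toy usage at the bottom are illustrative rather than taken from our released code.

```python
import torch

def sega_step(eps_u, eps_p, eps_concepts, t, nu, s_g=7.5, s_e=5.0,
              lambda_=0.95, s_m=0.5, beta_m=0.6, deltas=None, signs=None,
              weights=None):
    """One guided noise estimate: Eq. (3) with the sum of Eq. (9)."""
    gamma_hat = torch.zeros_like(eps_u)
    for i, eps_e in enumerate(eps_concepts):
        sign = 1.0 if signs is None else signs[i]          # Eq. (5)
        psi = sign * (eps_e - eps_u)
        thresh = torch.quantile(psi.abs().flatten(), lambda_)
        gamma_i = torch.where(psi.abs() >= thresh,         # Eqs. (4), (6)
                              s_e * psi, torch.zeros_like(psi))
        gamma_i = gamma_i + s_m * nu[i]                    # Eq. (7)
        nu[i] = beta_m * nu[i] + (1 - beta_m) * gamma_i    # Eq. (8)
        warmed_up = deltas is None or t >= deltas[i]       # warm-up gating
        g_i = (1.0 if weights is None else weights[i]) if warmed_up else 0.0
        gamma_hat = gamma_hat + g_i * gamma_i              # Eq. (9)
    return eps_u + s_g * (eps_p - eps_u) + gamma_hat       # Eq. (3)

# Toy usage with random stand-ins for the three U-Net outputs:
shape = (1, 4, 64, 64)
eps_u, eps_p = torch.randn(shape), torch.randn(shape)
eps_concepts = [torch.randn(shape), torch.randn(shape)]
nu = [torch.zeros(shape) for _ in eps_concepts]
eps = sega_step(eps_u, eps_p, eps_concepts, t=10, nu=nu)
```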
Sega's underlying methodology is architecture-agnostic and applicable to any model employing classifier-free guidance. For subsequent experiments, we base our implementation on SD v1.5\({}^{1}\) and make our code available online. We note that Sega can easily be applied to real images using reconstruction techniques for diffusion models. However, this is out of scope for this work, wherefore we limit the depicted examples and evaluation to generated images.
Footnote 1: [https://huggingface.co/runwayml/stable-diffusion-v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5)
## 4 Properties of Semantic Space
With the fundamentals of semantic guidance established, we next investigate the properties of Sega's semantic space. In addition to the following discussion, we present further examples in the Appendix.
**Robustness.** Sega behaves robustly when incorporating arbitrary concepts into the original image. In Fig. 4a, we applied guidance for the concept _'glasses'_ to images from different domains. Notably, this prompt does not provide any context on how to incorporate the glasses into the given image and thus leaves room for interpretation. The depicted examples showcase how Sega extracts a best-effort integration of the target concept into the original image that is semantically grounded. This makes Sega easy to use and provides the same exploratory nature as the initial image generation.
**Uniqueness.** Guidance vectors \(\gamma\) of one concept are unique and can thus be calculated once and subsequently applied to other images. Fig. 4b shows an example for which we computed the semantic guidance for _'glasses'_ on the left-most image and simply added the vector in the diffusion process of the other prompts. All faces are generated wearing glasses, without a respective \(\epsilon\)-estimate being required. This even covers significant domain shifts, as seen in the switch from photo-realism to drawings.
However, the transfer is limited to the same initial seed, as \(\epsilon\)-estimates change significantly with diverging initial noise latents. Furthermore, more extensive changes to the image composition, such as the one from human faces to animals or inanimate objects, require a separate calculation of the guidance vector. Nonetheless, Sega introduces no visible artifacts to the resulting images.
**Monotonicity.** The magnitude of a semantic concept in an image scales monotonically with the strength of the semantic guidance vector. In Fig. 1, we can observe the effect of increasing the strength of semantic guidance \(s_{e}\). Both for positive and negative guidance, the change in scale correlates with the strength of the smile or frown. Consequently, any changes to a generated image can be steered intuitively using only the semantic guidance scale \(s_{e}\) and the warm-up period \(\delta\). This level of control over the generation process is also applicable to multiple concepts with arbitrary combinations of the desired strength of the edit per concept.

Figure 4: Showcasing robustness and uniqueness of the semantic guidance vectors inferred with Sega. The top row depicts the unchanged images, while the bottom row depicts the ones guided towards ‘glasses’. (Best viewed in color)

Figure 5: Successive combination of concepts. From top left to bottom right, an additional concept is added in each image. The concepts do not interfere with each other and only change the relevant portion of the image. (Best viewed in color)
**Isolation.** Different concepts are largely isolated because each concept vector requires only a fraction of the total noise estimate, meaning that different vectors do not interfere with each other. Thus, multiple concepts can be applied to the same image simultaneously, as shown in Fig. 5. We can see, for example, that the glasses, which were added first, remain unchanged by subsequently added edits. We can utilize this behavior to perform more complex changes that are best expressed using multiple concepts. One example is the change of gender by simultaneously removing the 'male' concept and adding the 'female' one (cf. Figs. 1 and 6).
## 5 Experimental Evaluation
Next, we present an exhaustive evaluation of semantic guidance on an empirical benchmark, as well as on a variety of qualitative tasks. The intention with Sega is not to outperform existing methods, as the performance and fidelity of outputs always depend on the underlying capabilities of the model. Our main focus is the inherent semantic capability of diffusion models, which we examine through the level of control available when interacting with the model. Refuting claims of previous research, we demonstrate the suitability of noise estimates for semantic control (Kwon et al., 2022) and their capability for small, subtle changes (Wu et al., 2022).
### Empirical
We performed an extensive empirical evaluation on human faces and respective attributes. This setting is inspired by the CelebA dataset (Liu et al., 2015) and marks a well-established benchmark for semantic changes in image generation. We generated 250 images with unique seeds using the prompt 'an image of the face of a random person' and manipulated ten facial attributes. These attributes are a subset of the CelebA labels. All attributes and respective examples of the corresponding manipulation using Sega are depicted in Fig. 6. In addition to additive image edits, we evaluated negative guidance using three of these attributes as well as two combinations of four simultaneous edits, as shown in Fig. 5. Therefore, our empirical evaluation spans 15 attribute changes and combinations in total.
We evaluated the generated images with a user study and provide more details on its implementation in App. D. The results are shown in Tab. 1. For positive guidance, Sega faithfully adds the target concept to the image on average in 95% of the cases, with the majority of attributes exceeding 97%. We further manually investigated the two outliers, '_Bald_' and '_Bangs_'. We assume that many of the non-native English-speaking annotators were not familiar with the term 'bangs' itself. This assumption is based on correspondence with some of the workers and the conspicuously low rate of annotator consensus. Consequently, the numbers for '_bangs_' should be taken with a grain of salt. For baldness, we found that long hair often makes up a large portion of a portrait and thus requires more substantial changes to the image. Consequently, such edits require stronger hyperparameters than those chosen for this study. We observe that negative guidance removes existing attributes from an image similarly well. It is worth pointing out that the guidance away from '_beard_' usually resulted in a substantial reduction of facial hair but failed to remove it entirely for \(\sim\)10% of the images. Again, this suggests that the hyperparameters were probably not strong enough.
Lastly, we look into the simultaneous guidance of multiple concepts at once. The results in Tab. 2 empirically demonstrate the isolation of semantic guidance vectors. The per-attribute success rate remains similar for four distinct edit concepts instead of one, suggesting no interference between guidance vectors. Consequently, the success of multiple edits only depends on the joint probability of the individual concepts. In comparison, if only two of the four applied concepts interfered with each other to the point of being mutually exclusive, the success rate of such a combination would always be 0%. Contrary to that, we successfully apply concurrent concepts in up to 91% of generated images.
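As a quick plausibility check of this independence argument, multiplying the per-attribute success rates of the positive combination in Tab. 2 approximates the observed joint rate:

```python
# If the four edits succeed independently, the joint success rate is
# roughly the product of the per-attribute rates (positive combination
# in Tab. 2: Glasses, Smile, Curls, Beard).
rates = [1.000, 0.964, 0.982, 1.000]
joint = 1.0
for r in rates:
    joint *= r
print(joint)  # ~0.947, in the vicinity of the observed 90.7%
```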
Figure 6: Examples from our empirical evaluation benchmark. Showcases the 10 attributes edited with Sega. Original and edited images are evaluated through a user study on whether they exhibit a certain feature. (Best viewed in color)
### Qualitative
In addition to the empirical evaluation, we present qualitative examples from other domains and tasks; we show further examples in higher resolution in the Appendix. Overall, this highlights the versatility of Sega, since it allows interaction with any of the abundant number of concepts diffusion models are capable of generating in the first place. In Fig. 7a, we can see various edits being performed on an image of a car. This showcases the range of potential changes feasible with Sega, which include color changes and altering the vehicle type or the surrounding scenery. In each scenario, the guidance vector inferred using Sega accurately targets the respective image region and performs changes faithful to the editing prompt, all while making next to no changes to the irrelevant image portions.
Furthermore, we performed a diverse set of style transfers, as shown in Fig. 7b. Sega faithfully applies the styles of famous artists, as well as artistic epochs and drawing techniques. In this case, the entirety of the image has to be changed while keeping the image composition the same. Consequently, we observed that alterations to the entire output, as in style transfer, require a slightly lower threshold of \(\lambda\approx 0.9\). Nonetheless, this still means that 10% of the \(\epsilon\)-space is sufficient to change the entire style of an image. Fig. 7b also includes a comparison between outputs produced by Sega and those from simple extensions of the text prompt. Changing the prompt significantly alters the image composition. These results further highlight the advantages of semantic control, which allows versatile and yet robust changes.
An additional benefit of semantic guidance directly on noise estimates is its independence from the modality of the concept description. In the case of SD, the modality happens to be text in natural language, but Sega can be applied to any conditional diffusion model. We explore this idea further with the goal of optimizing the overall artistic conception of generated images. Instead of defining the target concept with one text prompt, we directly use an abstract embedding representing a particular type of imagery. To that end, we collected prompts known to produce high-quality results\({}^{2}\) for five different types of images: _portrait photography, animation, concept art, character design_, and _modern architecture_. The conditioning embedding for one style is calculated as the average over the embeddings of all collected prompts. Exemplary outputs are depicted in Fig. 7c. The results are of high quality and stay close to the original image, while accurately reflecting the targeted artistic direction beyond a single text prompt. Since concept vectors are isolated (cf. Sec. 4), we can also apply various types of changes (e.g. style transfer + composition) to the image simultaneously.
Footnote 2: Prompts taken from [https://mpost.io/best-100-prompts](https://mpost.io/best-100-prompts)
## 6 Broader Impact on Society
Recent developments in text-to-image models (Ramesh et al., 2022; Nichol et al., 2022; Saharia et al., 2022) have the potential for a far-reaching impact on society, both positive and negative, when deployed in applications such as image generation, image editing, or search engines. Previous research (Bianchi et al., 2022; Schramowski et al., 2022) described many potential negative societal implications that may arise due to the careless use of such large-scale generative models.
\begin{table}
\begin{tabular}{l l c c c} \hline \hline & **Attribute** & **Samples** & **Consensus (\%)** & **Success (\%)** \\ \hline \multirow{8}{*}{**Sega**} & Gender & 241 & 99.2 & 100.0 \\ & Glasses & 243 & 99.6 & 100.0 \\ & Smile & 146 & 100.0 & 99.3 \\ & Bald & 220 & 91.2 & 82.1 \\ & Beard & 135 & 97.8 & 97.0 \\ & Hat & 210 & 99.0 & 99.0 \\ & Curls & 173 & 95.4 & 97.0 \\ & Makeup & 197 & 99.5 & 99.0 \\ & Gray hair & 165 & 97.6 & 91.2 \\ & Bangs & 192 & 86.2 & 82.7 \\ \hline \multirow{8}{*}{**Sega**} & **Overall** & **1922** & **96.5** & **95.0** \\ \cline{2-5} & No Glasses & 6 & 98.9 & 100.0 \\ \cline{1-1} & No Smile & 93 & 100.0 & 94.4 \\ \cline{1-1} & No Beard & 111 & 100.0 & 89.9 \\ \cline{1-1} \cline{2-5} & **Overall** & **210** & **99.5** & **92.1** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Empirical results of our user study conducted on face attributes. Sample sizes result from the portion of the 250 original images that did not contain the target attribute. Annotator consensus refers to the percentage of images for which the majority of annotators agreed on a label. Success rate is reported on those images with annotator consensus.
\begin{table}
\begin{tabular}{l l c c c} \hline \hline & **Attribute** & **Samples** & **Consensus (\%)** & **Success (\%)** \\ \hline \multirow{8}{*}{**Sega**} & \(\geq\) 1 Attr. & & & 100.0 \\ & \(\geq\) 2 Attr. & & & 100.0 \\ & \(\geq\) 3 Attr. & & & 98.2 \\ & **All 4 Attr.** & & & **90.7** \\ \cline{1-1} \cline{2-5} & Glasses & & 96.4 & 100.0 \\ & Smile & 55 & 100.0 & 96.4 \\ & Curls & & 100.0 & 98.2 \\ & Beard & & 100.0 & 100.0 \\ \hline \multirow{8}{*}{**Sega**} & \(\geq\) 1 Attr. & & & 100.0 \\ & \(\geq\) 2 Attr. & & & 100.0 \\ & \(\geq\) 3 Attr. & & & 100.0 \\ \cline{1-1} & **All 4 Attr.** & & & **75.6** \\ \cline{1-1} \cline{2-5} & No Smile & & 100.0 & 97.8 \\ \cline{1-1} & Makeup & & 100.0 & 97.6 \\ \cline{1-1} & Hat & & 100.0 & 88.6 \\ \cline{1-1} & Female & & 100.0 & 100.0 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Results of the user study on simultaneous combinations of face attributes. Sample sizes result from the portion of the 250 original images that did not contain any target attribute. Annotator consensus refers to the percentage of images for which the majority of annotators agreed on a label. Success rates are reported for combinations of at least \(x\) attributes, with per-attribute scores reflecting the isolated success of that edit. All scores are computed on images with all 4 edit concepts applied simultaneously.
Many of these problems can be attributed to the noisy, large-scale datasets these models rely on. Since recent text-to-image models, such as SD, are trained on web-crawled data containing inappropriate content (Schuhmann et al., 2022), they are no exception to this issue. Specifically, current versions of SD show signs of inappropriate degeneration (Schramowski et al., 2022). While Schramowski et al. (2022) utilize the model's notion of inappropriateness to steer the model away from generating related content, it is noteworthy that we introduce an approach that could also be used to guide image generation toward inappropriate material. However, on the positive side, Sega has the potential to mitigate bias. As demonstrated by Nichol et al. (2022), removing data from the training set has adverse effects, e.g., on a model's generalization ability. In contrast, Sega works at inference time, promoting fairness in the outcome. Therefore, we advocate for further research in this direction.
Another frequently voiced point of criticism is the notion that generative models like SD are replacing human artists and illustrators. At first glance, the impressive results produced by these models might warrant this impression, which is further fueled by unbalanced media coverage. However, the generative process as a whole still requires a substantial amount of iterative human feedback and creative thinking. Sega further promotes creative artwork by providing intuitive means of interaction that support an exploratory, creative process.
## 7 Conclusions
We introduced semantic guidance (Sega) for diffusion models. Sega facilitates interaction with arbitrary concepts during image generation. The approach requires no additional training, no extensions to the architecture, no external guidance, and is calculated during the existing generation process. The concept vectors identified with Sega are robust, isolated, can be combined arbitrarily, and scale monotonically. We evaluated Sega on a variety of tasks and domains, highlighting--among others--sophisticated image composition and editing capabilities.
Our findings are highly relevant to the debate on disentangling models' latent spaces. So far, disentanglement as a property has been actively pursued (Karras et al., 2020). However, it is usually not a necessary quality in itself but a means to an end to easily interact with semantic concepts. We demonstrated that this level of control is feasible without disentanglement and motivate research in this direction.
Figure 7: Qualitative Examples of Semantic Guidance on various tasks and domains. (Best viewed in color)
Additionally, we see several other exciting avenues for future work. For one, it is interesting to investigate further how concepts are represented in the latent space of DMs and how to quantify them. More importantly, automatically detecting concepts could provide novel insights and toolsets to mitigate biases, as well as to address privacy concerns of real people memorized by the model.
AcknowledgmentsThis research has benefited from the Hessian Ministry of Higher Education, Research, Science and the Arts (HMWK) cluster projects "The Third Wave of AI" and hessian.AI, from the German Center for Artificial Intelligence (DFKI) project "SAINT", the Federal Ministry of Education and Research (BMBF) project KISTRA (reference no. 13N15343), as well as from the joint ATHENE project of the HMWK and the BMBF "AVSV".
|
2302.13409 | A charged Coulomb Bose gas with dipole-dipole interactions | We systematically study the properties of a charged Coulomb Bose gas with
dipole-dipole interactions in the weak coupling limit at both zero and finite
temperatures using the Hartree-Fock-Bogoliubov approach. We numerically analyze
the collective excitations, the condensate fraction, the depletion, the
chemical potential, and the static structure factor. Moreover, we compare our
new findings with those of nondipolar charged Coulomb Bose gas. Our results
reveal that the complex interplay of Coulomb and dipole-dipole interactions may
modify the stability, the thermodynamics and the coherence of the system. | Abdelaali Boudjemaa | 2023-02-26T21:36:54Z | http://arxiv.org/abs/2302.13409v1 | # A charged Coulomb Bose gas with dipole-dipole interactions
###### Abstract
We systematically study the properties of a charged Coulomb Bose gas with dipole-dipole interactions in the weak coupling limit at both zero and finite temperatures using the Hartree-Fock-Bogoliubov approach. We numerically analyze the collective excitations, the condensate fraction, the depletion, the chemical potential, and the static structure factor. Moreover, we compare our new findings with those of nondipolar charged Coulomb Bose gas. Our results reveal that the complex interplay of Coulomb and dipole-dipole interactions may modify the stability, the thermodynamics and the coherence of the system.
The last decades have witnessed a remarkable surge of interest in charged bosons. Interest in a charged Bose gas (CBG), in which particles interact via Coulomb forces, was stimulated by potential applications in statistical mechanics [1], the physics of high-temperature superconductivity [2], the Meissner-Ochsenfeld effect [3; 4], collective excitations [5; 6], nuclear reactions in dense plasmas and astrophysics [7; 8; 9; 10; 11], Wigner crystallization [12; 13; 14; 15] and so on.
The properties of the CBG at zero temperature have been extensively studied using different approaches. In 1961, Foldy [16] calculated the ground-state energy and the elementary excitation spectrum of the CBG employing the Bogoliubov theory [17], valid only at very low temperatures and in the high-density (weak coupling) regime, \(r_{s}\ll 1\) [16]. The coupling strength is characterized by the dimensionless gas parameter \(r_{s}=r_{0}/a_{B}\), where \(a_{B}\) is the Bohr radius, and \(r_{0}=(3/4\pi n)^{1/3}\) is the interparticle separation with \(n\) being the mean density. Higher-order corrections to the ground-state energy were obtained in [18; 19; 20] going beyond the Bogoliubov approximation. Subsequent studies dealt with the random phase approximation dielectric response function [21; 22; 23], quantum-to-classical mappings [24; 25], and the collective modes and screening properties of the CBG [26; 27]. Further investigations have been performed at finite temperature, focusing on the critical temperature, elementary excitations, and the normal and anomalous momentum distributions [28; 29; 30; 31; 32; 33]. Rigorous results for various ground-state properties of the CBG have been obtained in the frame of Quantum Monte Carlo methods (see e.g. [34; 35; 36; 13] and references therein). Ground-state energies of the two-component CBG have been computed by Lieb _et al._ [37] using Dyson's method. The authors of Refs. [38; 39] have analyzed the magnetic properties of the CBG within the mean-field theory. Other aspects of the CBG have been studied in [40] and references therein.
In this Letter we investigate the ground-state properties of a CBG with short-range (contact) and dipole-dipole interactions (DDI) using the full HFB theory. The Meissner effect in a charged Bose gas with short-range repulsion has been addressed in [41], where the collective excitations due to the repulsive interaction were found to complicate the situation. Ultracold quantum gases with DDI have recently attracted tremendous interest due to the long-range character and the anisotropy of the DDI (see for review [42; 43; 44; 45] and references therein), in contrast to short-range interactions. Dipolar Bose-Einstein condensates (BECs) consist of atoms with sizeable magnetic dipole moments and have been experimentally realized with \({}^{52}\)Cr [46], \({}^{164}\)Dy [47], and \({}^{168}\)Er [48]. Most recently, an Er-Dy mixture has been experimentally achieved in a two-species magneto-optical trap [49]. The DDI may strongly affect the excitations, the dynamics and the thermodynamic properties of the BEC [42; 43; 44; 45]. In addition, they lead to the emergence of novel quantum phases such as supersolid and droplet states (see for review [50; 51; 52] and references therein). Therefore, it is instructive to discuss the role played by the competition between the Coulomb, contact and dipolar interactions in the CBG.
Within the Hartree-Fock-Bogoliubov (HFB) theory we write down the generalized nonlocal Gross-Pitaevskii equation and calculate the Bogoliubov excitation energy. We show that the latter presents a plasmon gap in the long-wavelength (low-momentum) regime due to the Coulomb interactions. The presence of the DDI leads to a shift of the frequency of this gap. We provide useful analytic expressions for the normal and anomalous fluctuations, the equation of state (EoS), and the static structure factor. The obtained expressions (notably those of the anomalous density and the EoS) suffer from both infrared and ultraviolet divergences. The former originates from the Coulomb interaction while the latter is caused by the use of a contact interaction. In the absence of contact interactions and DDI, exact cancellation of infrared-divergent terms in the HFB shift of the single-particle excitation energy has been demonstrated in Ref. [33].
Numerical results of the obtained equations are presented in terms of the temperature, the DDI, and the gas parameter in the weak-coupling regime. We show that the normal and anomalous fractions increase with \(r_{s}\). In comparison with a conventional dipolar BEC, this study reveals that the DDI can decrease both the non-condensed and the anomalous fractions, except in the regime of very weak coupling where the depletion rises with the DDI. Our results indicate also that the condensed
fraction decreases with \(r_{s}\) for any value of temperature. Crucially, we point out that the correction to the EoS arising from the quantum fluctuations exhibits an unconventional behavior with temperature, DDI and the gas parameter. Furthermore, it is shown that the static structure factor overshoots unity, displaying a sharp peak at lower temperatures and for relatively large DDI. It is found that the interplay of the Coulomb interaction and the DDI may shift the height and the position of such peaks. To the best of our knowledge, this is the first work unveiling these spectacular properties of the CBG.
We consider a gas of \(N\) identical charged bosons with charge \(e\) and mass \(m\) in a box of volume \(V\), with both contact and dipolar interactions, moving in a static uniform neutralizing background. We assume that the dipoles are strictly aligned along the \(z\)-axis; in this case the interaction potential has a contact component related to the \(s\)-wave scattering length \(a\) and a dipolar component (see below). In the frame of the HFB theory, uniform charged bosons with DDI are described by the following nonlocal generalized Gross-Pitaevskii equation [53; 54; 32; 55]:
\[i\hbar\dot{\Phi}(\mathbf{r},t)=\bigg{(}-\frac{\hbar^{2}\nabla^{2}}{2m}-\mu \bigg{)}\Phi(\mathbf{r},t)+\int d\mathbf{r}^{\prime}V(\mathbf{r}-\mathbf{r}^{ \prime})\bigg{[}n(\mathbf{r}^{\prime},t)\Phi(\mathbf{r},t)+\tilde{n}(\mathbf{ r},\mathbf{r}^{\prime},t)\Phi(\mathbf{r}^{\prime},t)+\tilde{m}(\mathbf{r}, \mathbf{r}^{\prime},t)\Phi^{*}(\mathbf{r}^{\prime},t)\bigg{]}, \tag{1}\]
where \(\Phi(\mathbf{r})=\langle\hat{\psi}(\mathbf{r})\rangle\) is the condensate wavefunction, with \(\hat{\psi}(\mathbf{r})\) being the boson field operator, \(\mu\) is the chemical potential, and \(n_{c}(\mathbf{r})=|\Phi(\mathbf{r})|^{2}\), \(\tilde{n}(\mathbf{r})=\langle\hat{\bar{\psi}}^{\dagger}(\mathbf{r})\hat{\bar{\psi}}(\mathbf{r})\rangle\) and \(\tilde{m}(\mathbf{r})=\langle\hat{\bar{\psi}}(\mathbf{r})\hat{\bar{\psi}}(\mathbf{r})\rangle\) are, respectively, the condensed, non-condensed and anomalous densities, where \(\hat{\bar{\psi}}(\mathbf{r})=\hat{\psi}(\mathbf{r})-\Phi(\mathbf{r})\) is the noncondensed part of the field operator. The total density is given by \(n(\mathbf{r})=n_{c}(\mathbf{r})+\tilde{n}(\mathbf{r})\). The terms \(\tilde{n}(\mathbf{r},\mathbf{r}^{\prime})\) and \(\tilde{m}(\mathbf{r},\mathbf{r}^{\prime})\) are, respectively, the normal and the anomalous one-body density matrices, which account for the exchange interaction between the condensed and non-condensed atoms. The two-body interaction potential is
\[V(\mathbf{r}-\mathbf{r}^{\prime}) =V_{g}(\mathbf{r}-\mathbf{r}^{\prime})+V_{\mathrm{dd}}(\mathbf{r }-\mathbf{r}^{\prime})+V_{c}(\mathbf{r}-\mathbf{r}^{\prime}) \tag{2}\] \[=g\delta(\mathbf{r}-\mathbf{r}^{\prime})+\frac{C_{dd}}{4\pi}\frac {1-3\cos^{2}\theta}{|\mathbf{r}-\mathbf{r}^{\prime}|^{3}}+\frac{C_{c}}{| \mathbf{r}-\mathbf{r}^{\prime}|},\]
where \(g=4\pi\hbar^{2}a/m>0\) is the coupling constant corresponding to the contact interaction, with \(a\) being the \(s\)-wave scattering length; \(C_{dd}=d^{2}/\epsilon_{0}\) is the electric DDI strength, with \(\epsilon_{0}\) being the permittivity of vacuum; \(\theta\) is the angle between the polarization direction and \(\mathbf{r}\); and \(C_{c}\) describes the strength of the Coulomb interactions and is related to the Bohr radius via \(C_{c}=4\pi\hbar^{2}/(ma_{B})\) [16; 26; 33]. It is clear that for \(C_{c}=C_{\mathrm{dd}}=0\), Eq.(1) reduces to the standard local Gross-Pitaevskii equation.
Now we calculate the elementary excitations and fluctuations of a CBG. In the high-density limit and when the temperature is close to zero, we can linearize Eq.(1) using \(\Phi=\sqrt{n_{c}}+\delta\Phi\), where \(\delta\Phi(\mathbf{r},t)=u_{k}e^{i(\mathbf{k}\cdot\mathbf{r}-\varepsilon_{k}t/\hbar)}+v_{k}e^{-i(\mathbf{k}\cdot\mathbf{r}-\varepsilon_{k}t/\hbar)}\ll\sqrt{n_{c}}\), with \(u_{k}\) and \(v_{k}\) being the Bogoliubov amplitudes. In Fourier space, the wavefunctions are real-valued (\(\Phi_{0}=\Phi_{0}^{*}=\sqrt{n_{c}}\)), and the interaction potential (2) is written as:
\[\tilde{V}(\mathbf{k}) =\tilde{V}_{g}(\mathbf{k})+\tilde{V}_{\mathrm{dd}}(\mathbf{k})+ \tilde{V}_{c}(\mathbf{k}) \tag{3}\] \[=g\big{[}1+\epsilon_{\mathrm{dd}}(3\cos^{2}\theta_{k}-1)+ \epsilon_{c}/k^{2}\big{]},\]
where \(\epsilon_{\mathrm{dd}}=C_{\mathrm{dd}}/3g\) is the relative strength which describes the interplay of contact interaction and the DDI, \(\theta_{k}\) is the angle between the vector \(\mathbf{k}\) and the polarization direction, and \(\epsilon_{c}=C_{c}/g\) which has dimension (length)\({}^{-2}\) is the relative coupling strength which describes the interplay of contact interaction and Coulomb interaction. For the most widely utilized species of cold atoms (such as Rb, Cr, Er, Dy), \(\epsilon_{c}a_{B}^{2}\) ranges from \(0.005\) to \(0.01\). Due to the electroneutrality, one can set \(\tilde{V}_{c}(\mathbf{k}\equiv 0)=0\), which is a consequence of the compensation of the boson-boson repulsion by the attraction due to a spatially homogeneous charged background [27].
The chemical potential is given according to Eq.(1) by [56; 57]
\[\mu=\tilde{V}(|\mathbf{k}|=0)n+\frac{1}{V}\sum_{\mathbf{k}\neq\mathbf{0}} \tilde{V}(\mathbf{k})\big{(}\tilde{n}_{k}+\tilde{m}_{k}\big{)}, \tag{4}\]
where \(\tilde{n}_{k}\) and \(\tilde{m}_{k}\) stand for the normal and anomalous distributions which can be defined in the spirit of the HFB approximation as [56; 32; 57]:
\[\tilde{n}_{k}=[v_{k}^{2}+(u_{k}^{2}+v_{k}^{2})N_{k}], \tag{5}\]
and
\[\tilde{m}_{k}=u_{k}v_{k}(2N_{k}+1), \tag{6}\]
where \(N_{k}=\langle\hat{b}_{k}^{\dagger}\hat{b}_{k}\rangle=[\exp(\varepsilon_{k}/T)- 1]^{-1}\) are occupation numbers for the excitations. The solution of the resulting Bogoliubov-de-Gennes (BdG) equations gives for the Bogoliubov quasiparticle amplitudes
\[u_{k}^{2}=\frac{\omega_{k}+\varepsilon_{k}}{2\varepsilon_{k}},\qquad v_{k}^{2}= \frac{\omega_{k}-\varepsilon_{k}}{2\varepsilon_{k}},\]
and for the Bogoliubov excitations energy [56; 57]
\[\varepsilon_{k}=\sqrt{\omega_{k}^{2}-\Delta_{k}^{2}}, \tag{7}\]
where
\[\omega_{k}\equiv E_{k}+n\tilde{V}(|\mathbf{k}|=0)+n_{c}\tilde{V}(\mathbf{k})+ \frac{1}{V}\sum_{p\neq 0}\tilde{n}_{p}\tilde{V}(\mathbf{k}+\mathbf{p})-\mu_{1},\]
and
\[\Delta_{k}\equiv n_{c}\tilde{V}({\bf k})+\frac{1}{V}\sum_{p\neq 0}\tilde{m}_{p} \tilde{V}({\bf k}+{\bf p}),\]
where \(E_{k}=\hbar^{2}k^{2}/2m\) is the free particle energy.
Evidently, the HFB spectrum (7) has an unphysical gap in the limit of long wavelengths due to the inclusion of the anomalous correlations. To circumvent this problem, we define the chemical potential \(\mu_{1}\) as [56; 58; 59]:
\[\mu_{1}=n\tilde{V}(|{\bf k}|=0)+\frac{1}{V}\sum_{k\neq 0}\tilde{V}({\bf k})( \tilde{n}_{k}-\tilde{m}_{k}). \tag{8}\]
with the condition of charge neutrality which cancels the \(k=0\) term associated with Coulomb interaction in the sum [27; 32]. It is obvious that the chemical potential \(\mu_{1}\) of Eq.(8) renders the spectrum (7) gapless in agreement with the Hugenholtz-Pines theorem [60]. Importantly, the excitation spectrum (7) has a roton-maxon structure originating from the anisotropy of the Hartree-Fock corrections.
Setting \(g=C_{\rm dd}=0\) in Eq.(7), the HFB excitation energy for a charged Bose gas is well reproduced [32]. In the long-wavelength limit \(k\to 0\) where \(\lim\limits_{k\to 0}(\tilde{n}_{k}-\tilde{m}_{k})\simeq-1/2\)[32], the zero-temperature Bogoliubov excitations energy (7) coincides with the plasma energy \(\varepsilon_{g}/\hbar=\sqrt{nC_{c}/m}\) (i.e. plasmon gap). At finite temperatures, one can expect that the spectrum energy (7) increases with \(T\) and vanishes at the transition. In the high momenta limit, \(k\to\infty\), the excitations spectrum (7) reduces to the free particle law (\(\varepsilon_{k}=E_{k}\)).
For the sake of simplicity, we shall assume from now on that \(\tilde{m}/n_{c}\ll 1\) and \(\tilde{n}/n_{c}\ll 1\) (i.e. we neglect higher-order terms) and set \(\theta_{k}=\pi/2\). In such a case the Bogoliubov excitations energy (7) reduces to the following dimensionless form:
\[\varepsilon_{q}=\varepsilon_{g}\sqrt{\frac{q^{4}}{12r_{s}}+\frac{q^{2}}{3r_{s} }\frac{r_{0}^{2}}{\xi^{2}}(1-\epsilon_{dd})+1}, \tag{9}\]
where \(q=kr_{0}\), and \(\xi=\hbar/\sqrt{mng}\) is the standard healing length of the condensate.
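As a quick numerical check of Eq. (9), the sketch below evaluates \(\varepsilon_{q}/\varepsilon_{g}\). The only extra input is \((r_{0}/\xi)^{2}\), which we express through the gas parameter as \((r_{0}/\xi)^{2}=3a/r_{0}\) with \(a/r_{0}=(4\pi na^{3}/3)^{1/3}\) — an assumption that follows from \(g=4\pi\hbar^{2}a/m\) and \(n=3/(4\pi r_{0}^{3})\).

```python
# Sketch: evaluate the dimensionless Bogoliubov spectrum of Eq. (9).
# (r0/xi)^2 is expressed through the gas parameter na^3 via
# (r0/xi)^2 = 3*a/r0 and a/r0 = (4*pi*na^3/3)^(1/3) -- our assumption,
# following from g = 4*pi*hbar^2*a/m and n = 3/(4*pi*r0^3).
import numpy as np

def eps_q(q, r_s, eps_dd, na3=0.0011):
    """Excitation energy in units of the plasma energy eps_g, Eq. (9)."""
    r0_over_xi_sq = 3.0 * (4.0 * np.pi * na3 / 3.0) ** (1.0 / 3.0)
    return np.sqrt(q**4 / (12.0 * r_s)
                   + q**2 * r0_over_xi_sq * (1.0 - eps_dd) / (3.0 * r_s)
                   + 1.0)

q = np.linspace(0.0, 3.0, 7)
print(eps_q(q, r_s=0.1, eps_dd=0.45))  # eps_q(0) = 1: the plasmon gap eps_g
```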
In Figure 1 we show the behavior of the Bogoliubov spectrum for different values of the coupling parameter \(r_{s}\). It is clearly seen that the spectrum \(\varepsilon_{q}\) increases monotonically with \(q\) whatever the values of \(r_{s}\) and \(\epsilon_{dd}\), and that it increases as both \(r_{s}\) and \(\epsilon_{dd}\) decrease.
In a homogeneous Bose gas, the depletion and the anomalous density are defined as [32; 56; 57]: \(\tilde{n}=V^{-1}\sum\limits_{{\bf k}\neq 0}n_{k}\), and \(\tilde{m}=-V^{-1}\sum\limits_{{\bf k}\neq 0}\tilde{m}_{k}\). In the thermodynamic limit, the sum over \(k\) can be replaced by an integral according to the prescription \(\sum\limits_{{\bf k}}\to V\int_{0}^{\infty}d{\bf k}/(2\pi)^{3}\). Thereafter, inserting the identity \(2N(x)+1=\coth(x/2)\) into Eqs.(5) and (6), we obtain for the noncondensed and anomalous densities [57; 61]:
\[\tilde{n}=\frac{1}{2}\int\frac{d{\bf k}}{(2\pi)^{3}}\left[\frac{E_{k}+n\tilde {V}({\bf k})}{\varepsilon_{k}}{\coth}\left(\varepsilon_{k}/2T\right)-1\right], \tag{10}\]
and
\[\tilde{m}=-\frac{1}{2}\int\frac{d{\bf k}}{(2\pi)^{3}}\frac{n\tilde{V}({\bf k} )}{\varepsilon_{k}}{\coth}\left(\varepsilon_{k}/2T\right). \tag{11}\]
The validity of the present HFB approach requires the inequality \(\tilde{n}\ll n\). This implies that, to have a dilute CBG, the conditions \(r_{s}\ll 1\) and \(na^{3}\ll 1\) must be fulfilled.
The condensed fraction can be evaluated through: \(n_{c}/n=1-\tilde{n}/n\). For \(g=C_{dd}=0\), the condensed fraction reduces to \(n_{c}/n\simeq 1-0.2\,r_{s}^{3/4}\)[16].
Correction to the chemical potential due to the Lee-Huang-Yang (LHY) quantum fluctuations can be given from Eq.(4) as [57; 61]: \(\mu_{\rm LHY}=V^{-1}\sum\limits_{{\bf k}\neq 0}\tilde{V}({\bf k})\big{(} \tilde{n}_{k}+\tilde{m}_{k}\big{)}\). Then using the definitions \(\tilde{n}_{k}\) and \(\tilde{m}_{k}\) from Eqs.(5) and (6), we obtain:
\[\mu_{\rm LHY}=\frac{1}{2}\int\frac{d{\bf k}}{(2\pi)^{3}}\tilde{V}({\bf k}) \left[\frac{E_{k}}{\varepsilon_{k}}{\coth}\left(\varepsilon_{k}/2T\right)-1 \right]. \tag{12}\]
This equation permits us to calculate the LHY corrections to all thermodynamic quantities. In the absence of the short-range and dipolar interactions the LHY-corrected EoS reads \(\mu_{\rm LHY}\simeq-0.23\,\varepsilon_{g}r_{s}^{3/4}\)[16].
It is worth stressing that exact analytical solutions of the integrals (10), (11) and (12) are not available except in some limiting cases. Therefore, we solve them numerically.
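As an illustration of such a numerical evaluation, the sketch below computes the zero-temperature depletion of Eq. (10) (\(\coth\to 1\)) using the simplified spectrum of Eq. (9), i.e. with \(\theta_{k}=\pi/2\) applied to all modes as an isotropic simplification and with the lowest-order (Bogoliubov) identity \(\varepsilon_{k}^{2}=E_{k}^{2}+2E_{k}n\tilde{V}({\bf k})\), rather than the full self-consistent HFB spectrum (7). All energies are in units of \(\varepsilon_{g}\) and momenta in units of \(1/r_{0}\).

```python
# Sketch: zero-temperature depletion from Eq. (10), with the simplified
# spectrum of Eq. (9) (theta_k = pi/2 for all modes -- an isotropic
# simplification) instead of the full self-consistent HFB spectrum (7).
import numpy as np
from scipy.integrate import quad

def depletion(r_s, eps_dd, na3=0.0011):
    a_over_r0 = (4.0 * np.pi * na3 / 3.0) ** (1.0 / 3.0)
    A = 3.0 * a_over_r0                        # (r0/xi)^2 = 3*a/r0
    def integrand(q):
        Ek = q**2 / np.sqrt(12.0 * r_s)        # E_k in units of eps_g
        eps = np.sqrt(q**4 / (12.0 * r_s)
                      + q**2 * A * (1.0 - eps_dd) / (3.0 * r_s) + 1.0)
        nV = (eps**2 - Ek**2) / (2.0 * Ek)     # n*V(k), from eps^2 = Ek^2 + 2*Ek*nV
        return q**2 * ((Ek + nV) / eps - 1.0)
    val, _ = quad(integrand, 1e-8, np.inf, limit=200)
    return val / (3.0 * np.pi)                 # prefactor from n = 3/(4*pi*r0^3)

for r_s in (0.02, 0.1, 0.5):
    print(r_s, depletion(r_s, eps_dd=0.45))
```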
The results for the noncondensed and the anomalous fractions for different values of \(r_{s}\) and \(\epsilon_{dd}\) are shown
graphically in Fig.2. We see that the noncondensed and the anomalous fractions increase with the coupling strength \(r_{s}\) even in the absence of the DDI (\(\epsilon_{dd}=0\)). In the very weak coupling regime \(r_{s}\lesssim 0.15\), \(\tilde{n}/n\) rises with \(\epsilon_{dd}\) (see the inset of Fig.2.a). On the contrary, for \(r_{s}>0.15\), the depletion decreases with the DDI even for relatively large \(\epsilon_{dd}\). This can be attributed to the effect of the Coulomb interaction, which dominates both the short-range and the dipolar interactions. The anomalous fraction decreases with the DDI over the whole range of \(r_{s}\), as seen in Fig.2.b. For very small \(r_{s}\), one has \(\tilde{n}/n\ll 1\) and \(\tilde{m}/n\ll 1\), in good agreement with the results of the Bogoliubov approach [32] and with Monte Carlo data [34]. For larger \(r_{s}\), both the depletion and the anomalous fraction increase continuously whatever the value of \(\epsilon_{dd}\). For example, for \(r_{s}=0.5\) and \(\epsilon_{dd}=0.45\), one has \(\tilde{n}/n>10\%\) and \(\tilde{m}/n\gtrsim 25\%\), indicating that the present HFB theory is no longer applicable in the regime \(r_{s}\gtrsim 0.5\). Another important remark is that the anomalous fraction is larger than the normal fraction, similarly to neutral atomic dipolar and nondipolar BECs [56; 57; 58; 59; 61].
Figure 3.a reports the condensate fraction \(n_{c}/n=1-\tilde{n}/n\) as a function of temperature \(T/\varepsilon_{g}\) for different values of the coupling strength \(r_{s}\) and of the relative interaction strength \(\epsilon_{dd}\). It is clearly visible that the condensate fraction strongly decreases with \(r_{s}\) for any values of temperature. This indicates that the dipolar CBG becomes strongly depleted for large \(r_{s}\) and \(\epsilon_{dd}\) (see red lines). We see also that at fixed temperature, \(n_{c}/n\) lowers with increasing \(\epsilon_{dd}\) except for large values of \(r_{s}\), where it slightly decreases with decreasing \(\epsilon_{dd}\) due to the interplay of the Coulomb and dipolar interactions.
Figure 3.b shows that at low temperatures \(T\lesssim\varepsilon_{g}\), the LHY-corrected chemical potential decreases with \(r_{s}\) and \(\epsilon_{dd}\). Remarkably, it remains negative in this regime, reducing the total EoS. The negative value of \(\mu_{\rm LHY}\) is a product of the oppositely charged background [26].
Figure 2: (a) Depletion \(\tilde{n}/n\) at \(T=0\) as a function of the coupling parameter \(r_{s}\) for different values of \(\epsilon_{dd}\). (b) Anomalous fraction \(\tilde{m}/n\) at \(T=0\) as a function of \(r_{s}\) for different values of \(\epsilon_{dd}\). Here we set \(na^{3}=0.0011\).
At \(T\gtrsim\varepsilon_{g}\), \(\mu_{\rm LHY}/\varepsilon_{g}\) increases linearly with temperature, especially for large \(r_{s}\), regardless of the value of \(\epsilon_{dd}\). This can be attributed to the competition between the contact, dipole-dipole and Coulomb interactions.
Information on the coherence and on the fluctuations of the CBG is contained in the static structure factor, defined as the Fourier transform of the density-density correlation function \(S({\bf k})=\langle\delta\hat{n}({\bf k})\delta\hat{n}(-{\bf k})\rangle/n\) [62], where \(\delta\hat{n}({\bf k})=\int d{\bf r}\,\delta\hat{n}({\bf r})e^{-i{\bf k}.{\bf r}}\) and \(\delta\hat{n}({\bf r})=\sqrt{n({\bf r})}\sum_{k}\left\{[u_{k}({\bf r})-v_{k}({\bf r})]\exp(-i\varepsilon_{k}t/\hbar)\,\hat{b}_{k}+H.c\right\}\). In terms of the elementary excitation energy, \(S({\bf k})\) reads [62]:
\[S({\bf k})=\frac{E_{k}}{\varepsilon_{k}}\coth\left(\frac{\varepsilon_{k}}{2T} \right), \tag{13}\]
At zero temperature, Eq.(13) reduces to \(S({\bf k})=E_{k}/\varepsilon_{k}\). At low temperatures and in the phonon regime (\(k\to 0\)) one has \(S({\bf k})\simeq T/\varepsilon_{g}\). In the opposite situation, at higher \(T\), \(S({\bf k})\) simplifies to its zero-temperature value except in the limit \(k\to 0\), where the structure factor approaches its asymptotic value [62]. A non-correlated gas has a structureless spectrum \(S(k)=1\).
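The sketch below evaluates Eq. (13) with temperatures in units of \(\varepsilon_{g}\), again using the simplified spectrum of Eq. (9) (\(\theta_{k}=\pi/2\)) as an assumption. It should be read only as an illustration of the temperature dependence: the low-temperature peaks of Fig. 4 rely on the full HFB spectrum (7), including the Hartree-Fock corrections.

```python
# Sketch: static structure factor, Eq. (13), with energies in units of
# eps_g and the simplified spectrum of Eq. (9) (theta_k = pi/2).
import numpy as np

def structure_factor(q, T, r_s, eps_dd, na3=0.0011):
    """S(k) = (E_k/eps_k) * coth(eps_k / 2T); T in units of eps_g."""
    r0_over_xi_sq = 3.0 * (4.0 * np.pi * na3 / 3.0) ** (1.0 / 3.0)
    Ek = q**2 / np.sqrt(12.0 * r_s)
    eps = np.sqrt(q**4 / (12.0 * r_s)
                  + q**2 * r0_over_xi_sq * (1.0 - eps_dd) / (3.0 * r_s) + 1.0)
    return (Ek / eps) / np.tanh(eps / (2.0 * T))

q = np.linspace(0.05, 4.0, 80)
for T in (0.5, 1.0, 2.0):
    S = structure_factor(q, T, r_s=0.5, eps_dd=0.95)
    print(f"T = {T} eps_g: S spans {S.min():.2f} .. {S.max():.2f}")
# Note: with this simplified gapped spectrum the low-T curves grow
# monotonically toward 1; the peaks of Fig. 4 require the full spectrum (7).
```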
The numerical solution of Eq.(13) is presented in Fig.4. As one can see from the figure, at low temperatures \(T<\varepsilon_{g}\), where the main contribution comes from low momenta, the static structure factor has a strong dependence on the temperature, the Coulomb interaction and the DDI. We observe also that \(S({\bf k})\) increases significantly and develops a pronounced peak around \(q=q_{0}\). For instance, for \(r_{s}=0.5\), \(q_{0}\simeq 1.6\), or equivalently \(k\simeq 1.6/r_{0}\) (see Fig.4.a). The position of such a peak relies on \(r_{s}\). Remarkably, \(S({\bf k})\) develops a peak even at relatively high temperatures (\(T\simeq\varepsilon_{g}\)) for all \(r_{s}\) due to the interplay of Coulomb and dipolar interactions. Upon further increasing the temperature (\(T\geq 2\varepsilon_{g}\)), thermal effects no longer favor such localization of the particles, and the static structure factor becomes almost monotonic (see blue lines in Fig.4).
However, in the case of a nondipolar CBG (\(\epsilon_{dd}=0\)), the structure factor becomes less significant and exhibits a peak only at low temperatures and for \(r_{s}>0.02\), in contrast to the dipolar CBG (see Fig.4.b). This can be understood from the fact that, in the absence of DDI and for \(r_{s}>0.02\), the Coulomb interaction dominates the system, leading to strong thermal fluctuations even at \(T\lesssim\varepsilon_{g}/2\) and thereby destroying the coherence of the system.
In conclusion, we studied the ground-state properties of a weakly interacting CBG with DDI at both zero and finite temperatures using the self-consistent HFB theory. We derived the generalized nonlocal Gross-Pitaevskii equation that describes the dynamics and the equilibrium of such a system. By solving the BdG equations we analyzed the Bogoliubov excitation spectrum, which exhibits a plasmon gap giving rise to infrared divergences arising from the Coulomb interaction. Furthermore, the normal and anomalous fractions, the EoS, and the static structure factor have been computed numerically. We show that the intriguing interplay of the Coulomb interactions and the DDI affects these quantities and thus plays a pivotal role in the physics of the system.
Even though the experimental realization of identical charged bosons with DDI is a challenging problem, they constitute a promising area for applications. In the limit of strong (or even intermediate) coupling, one can expect that the presence of the DDI may lead to the appearance of a very sharp peak in the static structure factor, driving the system to a transition to a Wigner crystal phase. A quantitative study of this crystallization necessitates sophisticated tools such as Quantum Monte Carlo simulations. An important extension of our work would be the study of the Meissner-Ochsenfeld effect in a dipolar CBG. The competition between repulsive short-range, Coulomb and dipolar interactions may enhance the coherence of the system, implying a singularity in the susceptibility prior to the BEC phase [41]. This could be a signature of the emergence of the Meissner-Ochsenfeld effect.
Figure 4: Static structure factor from Eq.(13) as a function of dimensionless variable \(q=kr_{0}\) for different values of \(T/\varepsilon_{g}\) and \(r_{s}\). Parameters are: \(na^{3}=0.0011\), \(\epsilon_{dd}=0.95\) (a) and \(\epsilon_{dd}=0\) (b). Solid lines: \(r_{s}=0.02\). Dashed lines: \(r_{s}=0.1\). Dotted lines: \(r_{s}=0.5\). |
2305.08930 | On a discontinuity at the base of the transition layer located between
the Keplerian accretion disk and the compact object | We study the geometry of the transition layer (TL) between the classical
Keplerian accretion disk (the TL outer boundary) and the compact object at the
TL inner boundary. Our goal is to demonstrate using the hydrodynamical
formalism that the TL is created along with a shock due to a discontinuity and
to an adjustment of the Keplerian disk motion to a central object. We apply
hydrodynamical equations to describe a plasma motion near a central object in
the TL. We point out that before matter accretes to a central object the TL
cloud is formed between an adjustment radius and the TL inner boundary which is
probably a site where the emergent Compton spectrum comes from. Using a
generalization of the Rankine-Hugoniot relation and a solution of the azimuthal
force balance equation we have reproduced the geometric characteristics of TL. | Lev Titarchuk, Ilia Kalashnikov | 2023-05-15T18:07:04Z | http://arxiv.org/abs/2305.08930v1 | On a discontinuity at the base of the transition layer located between the Keplerian accretion disk and the compact object
###### Abstract
Context:We study the geometry of the transition layer (TL) between the classical Keplerian accretion disk (the TL outer boundary) and the compact object at the TL inner boundary.
Aims:Our goal is to demonstrate using the hydrodynamical formalism that the TL is created along with a shock due to a discontinuity and to an adjustment of the Keplerian disk motion to a central object.
Methods:We apply hydrodynamical equations to describe a plasma motion near a central object in the TL.
Results:We point out that before matter accretes to a central object, the TL cloud is formed between an adjustment radius and the TL inner boundary, which is probably a site where the emergent Compton spectrum comes from. Using a generalization of the Rankine-Hugoniot relation and a solution of the azimuthal force balance equation, we have reproduced the geometric characteristics of the TL.
## 1 Introduction
Accretion onto compact objects such as white dwarf (WD), neutron star (NS), and black hole (BH) binaries is very similar. Accreting matter with relatively large angular momentum can form a disk or go to outflow (see Titarchuk et al. (2007) and Shaposhnikov & Titarchuk (2009), hereafter TSA07, ST09, respectively). On the other hand, plasma with low angular momentum proceeds towards the compact object almost in a free-fall manner until the centrifugal force becomes strong enough to halt the flow (see, e.g., Chakrabarti & Titarchuk (1995), hereafter CT95). Thus, we can identify three distinct regions: a (Keplerian) disk, the shock, and a transition layer (TL) between the last stable orbit near the WD, NS or BH and the shock, which is probably the place where the emergent X-ray spectrum is formed. In this paper, we study the characteristics of this adjustment of the Keplerian disk flow to a WD, an NS or a BH.
The disk starts to deviate from Keplerian motion at a certain radius to adapt itself to the boundary conditions at the WD or NS surface (or at the last stable orbit, at \(r_{1}=3r_{\rm S}\), where \(r_{\rm S}\) is the Schwarzschild radius, in the case of a BH). Titarchuk et al. (1998) (hereafter TLM98) explained the millisecond variability detected by the Rossi X-Ray Timing Explorer (RXTE) in the X-ray emission from a number of low-mass X-ray binary systems (XRBs). Later, Seifina & Titarchuk (2010), Farinelli & Titarchuk (2011), and others analyzed X-ray data from Sco X-1, 4U 1728\(-\)34, 4U 1608\(-\)522, 4U 1636\(-\)536, 4U 0614\(-\)091, 4U 1735\(-\)44, 4U 1820\(-\)30, and GX 5\(-\)1 in terms of the dynamics of the centrifugal barrier in a hot boundary region (the TL) surrounding a neutron star (NS). They demonstrated that this region may experience relaxation oscillations and that the displacements of a gas element in both the radial and vertical directions occur at the same main frequency, of the order of the local Keplerian frequency.
Observations of black hole X-ray binaries and active galactic nuclei indicate that the accretion flows around black holes are composed of hot and cold gas, which have been theoretically described in terms of a hot corona next to an optically thick, relatively cold disk. Liu & Qiao (2022) (hereafter LQ22) review the accretion flows around black holes, with an emphasis on the physics that determines the configuration of hot and cold accreting gas, and how the configuration varies with the accretion rate and thereby produces various luminosities and spectra. They provide references to famous solutions of the standard disk model such as, for example, Shakura & Sunyaev (1973) (hereafter SS73). The most successful applications of this model are the steady-state and time-dependent nature of thin disks in dwarf novae and soft X-ray transients. The application of the four accretion models to black holes presented by LQ22 is constrained both by the theoretical assumptions of the specific models and by the observed luminosity and spectrum.
The soft photons from the innermost part of the disk are scattered off the hot electrons thus forming the Comptonized X-ray spectrum [see Sunyaev & Titarchuk (1980), hereafter ST80]. The electron temperature is regulated by the supply of soft photons from a disk, which depends on the ratio of the energy release (accretion rate) in a disk and the energy release in the TL. For example, the electron temperature is higher for lower accretion rates (see, e.g., CT95), while for a high accretion rate (of an order of the Eddington one) the TL is cooled down very efficiently due to the soft photon flux and the Comptonization. This is a possible mechanism for the low-frequency QPOs (quasi-periodic oscillations), discovered by RXTE in a number of low-mass X-ray binary systems (LMXBs; see Strohmayer et al.
(1996); van der Klis et al. (1996), hereafter S96 and VK96a, respectively; Zhang et al. (1996), TLM98, Titarchuk et al. (2007) hereafter TSA07). These observations reveal a wealth of high and low-frequency X-ray variabilities that are believed to be due to the processes occurring in the very vicinity of an accreting NS and BH.
Lancova et al. (2019) discovered a new class of black hole accretion disk solutions through 3D radiative magnetohydrodynamic simulations applying the general relativity formalism. These solutions combine features of thin, slim, and thick disk models. They provide a realistic description of black hole disks and have a thermally stable, optically thick, Keplerian region supported by magnetic fields. Mishra et al. (2020) have conducted a detailed analysis of two-dimensional viscous, radiation hydrodynamic numerical simulations of Shakura-Sunyaev thin disks around a stellar-mass black hole. They found multiple robust, coherent oscillations occurring in the disk, including a trapped fundamental g-mode and standing-wave p-modes. The study suggests that these findings could be of astrophysical importance for the observed twin-peak, high-frequency QPOs. Large-scale 3D magnetohydrodynamic simulations of accretion onto magnetized stars with tilted magnetic and rotational axes were performed by Romanova et al. (2021). It was shown that the inner parts of the disc become warped, tilted, and precess due to the magnetic interaction between the magnetosphere and the disc. According to the results of the numerical simulations by Lagovskii & Chechetkin (2012), large-scale instability in accretion disks reveals changes in the disk flow structure because of the formation of large vortices. Over time, this leads to the development of asymmetric spiral structures and results in angular-momentum redistribution within the disk.
Sukova et al. (2017) have conducted simulations of accretion flows with low angular momentum, filling the gap between spherically symmetric Bondi accretion and disc-like accretion flows. They identify ranges of parameters for which the shock, after formation, moves towards or away from the central black hole, or for which a long-lasting oscillating shock is observed. The results are scalable with the central black hole mass and can be compared to QPOs of selected microquasars and supermassive black holes in the centres of weakly active galaxies. In order to explain NICER telescope evidence for non-dipolar magnetic field structures in rotation-powered millisecond pulsars, Das et al. (2022) have conducted a suite of general relativistic magnetohydrodynamic simulations of accreting neutron stars for dipole, quadrupole, and quadrudipolar stellar field geometries. The study found that the location and size of the accretion columns resulting in hotspots change significantly depending on the initial stellar field strength and geometry, providing a viable mechanism to explain the radio power in observed neutron star jets.
Despite the increasing role of numerical simulations of accretion, the consideration of relatively simple analytical models allows us to identify the response of the system to changes in a set of parameters, which in the case of numerical simulations becomes a very cumbersome task. In the present paper we, along with other authors (e.g. Abramowicz & Fragile (2013); Ajay et al. (2022)), propose a model of accretion onto the central object. The point is that existing models of shock waves (see e.g. Chakrabarti (1989)) cannot correctly describe the abrupt transition between the accretion disk and the TL. Our goal is to construct a satisfactory model of the shock wave lying at the base of the TL, which can correctly describe observational data for temperature and density (Seifina & Titarchuk 2010). In Sect. 2 we describe our approach to the TL model. In Sect. 3 we formulate equations to determine the vertical dependencies of density and pressure in the TL. In Sects. 4-5 we introduce the generalized Rankine-Hugoniot relation and take into account the rotation effect in the TL, respectively. We discuss our setup and final results in Sect. 6. Finally, we summarize our conclusions in Sect. 7.
## 2 Motivation and the model
Consideration of the two-dimensional structure of accretion disks is associated with obvious mathematical difficulties. Typically, the two-dimensional structure is accounted for only by a non-homogeneous distribution of matter within a layer of constant thickness \(2h\). Consideration of a disk of constant thickness is made possible by ignoring the vertical velocity \(v_{z}\). The more general case \(h=h(r)\) requires solving the equation \(dh/dr=(v_{z}/u)|_{z=h(r)}\), where \(u\) is the radial velocity, to describe the behavior of the height as the radius changes.
According to the observations and their interpretations (TLM98, ST09), the disk height may reach very significant values near the TL, while at larger distances it is quite small. The thermodynamic characteristics of the plasma change considerably: in the TL, its temperature can reach tens of keV and its density drops drastically (see, e.g., ST09). This change is widely believed to be due to the presence of a shock wave (see, e.g., CT95, TLM98 and ST09). Such a region near the central object (CO) is associated with the TL.
In an attempt to get closer to a two-dimensional consideration, we have chosen the following model describing the shock (Fig. 1). The flow velocity is supposed to have only radial and azimuthal components on both sides of the shock, which are independent of the height. It is assumed that the height of the flow changes sharply, by a discontinuity, from \(h_{2}\) (the disk thickness) to \(h_{1}\) (the TL thickness).
We require the angular velocity \(\omega\) of the TL to match the CO at its surface and to coincide with the disk angular velocity at the shock position \(r_{2}\). The angular velocity of the disk is assumed to equal the Keplerian one. Only the \(r\phi\) component of the viscous stress tensor was considered, because it is responsible for carrying off the angular momentum. We considered a stress tensor of the form \(t_{r\phi}=\eta r\omega^{\prime}\), where \(\eta\) is the turbulent viscosity.
The thermodynamic quantities of the flow have vertical dependencies which are determined by the CO gravity and an equation of state. In order to identify them, we did not assume the smallness of the vertical coordinate compared to the radial one.
Figure 1: The schematic representation of the considered model.
## 3 The vertical dependences
Let us start with the hydrostatic balance equation in the vertical direction:
\[\frac{\partial p}{\partial z}=-\rho\frac{\partial\Phi}{\partial z}, \tag{1}\]
where \(p\), \(\rho\) are pressure and density and \(\Phi\) is the gravitational potential creating by the central object. We supposed that the vertical distribution of matter is given by polytropic relation \(p=K\rho^{1+1/n}\), where \(K\) is a constant related to the entropy and \(n\) is the polytropic index. Then the formal solution of (1) may be written as:
\[K(n+1)\rho^{1/n}=\widetilde{f}(r)-\Phi(r,z), \tag{2}\]
where \(\widetilde{f}\) is an arbitrary function. We require the density and pressure to be zero at the flow boundaries \(z=\pm h\) and equal to some functions \(\rho_{e}\), \(p_{e}\), respectively, at the equator \(z=0\). Then we have:
\[\rho=\rho_{e}(r)\left(\frac{\Phi(r,h)-\Phi(r,z)}{\Phi(r,h)-\Phi( r,0)}\right)^{n}, \tag{3}\] \[p=p_{e}(r)\left(\frac{\Phi(r,h)-\Phi(r,z)}{\Phi(r,h)-\Phi(r,0)} \right)^{n+1}. \tag{4}\]
Assuming the height is small compared to the radius, \(|z|\ll r\), we recover the commonly used (Hoshi (1977); Matsumoto et al. (1984)) dependence \(\rho=\rho_{e}(r)(1-z^{2}/h^{2})^{n}\). However, we do not make such an assumption in what follows.
We will need integrals from such vertical dependencies:
\[\zeta_{a}(r,h)=\int_{-h}^{h}\left(\frac{\Phi(r,h)-\Phi(r,z)}{\Phi(r,h)-\Phi(r,0)}\right)^{a}dz. \tag{5}\]
Apparently there is no general expression for such an integral for arbitrary \(n\), but it may be calculated for particular values.
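For particular values of the exponent, \(\zeta_{a}\) is straightforward to evaluate by quadrature. The sketch below does so for the Newtonian potential \(\Phi=-GM/\sqrt{r^{2}+z^{2}}\) adopted in Sect. 6, without assuming \(|z|\ll r\); the chosen radius is an illustrative trial value, not a fitted result.

```python
# Sketch: numerical evaluation of the height integral zeta_a(r, h), Eq. (5),
# for the Newtonian potential Phi = -GM/sqrt(r^2 + z^2) used in Sect. 6,
# without assuming |z| << r. CGS units; M = 1.5 M_sun.
import numpy as np
from scipy.integrate import quad

G, M_sun = 6.674e-8, 1.989e33
GM = G * 1.5 * M_sun

def phi(r, z):
    return -GM / np.sqrt(r**2 + z**2)

def zeta(a, r, h):
    f = lambda z: ((phi(r, h) - phi(r, z)) / (phi(r, h) - phi(r, 0.0))) ** a
    return quad(f, -h, h)[0]

r2, h2 = 100e5, 13.3e5             # trial shock radius 100 km, disk height h2
print(zeta(3, r2, h2) / (2 * h2))  # < 1: stratification thins the column
```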
## 4 Generalised Rankine-Hugoniot relation
Let us denote by index 1 the values in TL and by index 2 the values in the disc. Then the mass continuity condition leads to:
\[\rho_{e1}u_{1}\zeta_{n,1}=\rho_{e2}u_{2}\zeta_{n,2}=j, \tag{6}\]
where, for the sake of brevity, we have redefined (5) as \(\zeta_{k,i}=\zeta_{k}(r_{2},h_{i})\). The momentum continuity:
\[\rho_{e1}u_{1}^{2}\zeta_{n,1}+p_{e1}\zeta_{(n+1),1}=\rho_{e2}u_{2}^{2}\zeta_{ n,2}+p_{e2}\zeta_{(n+1),2} \tag{7}\]
From (6)-(7) we may get the following expression for the mass flux:
\[j^{2}=\frac{p_{e2}\zeta_{(n+1),2}-p_{e1}\zeta_{(n+1),1}}{[\rho_{e1}\zeta_{n,1 }]^{-1}-[\rho_{e2}\zeta_{n,2}]^{-1}}. \tag{8}\]
The energy continuity condition, integrated over the height, has the following form:
\[h_{1}u_{1}^{2}+\zeta_{1,1}w_{e1}=h_{2}u_{2}^{2}+\zeta_{1,2}w_{e2}, \tag{9}\]
where \(w_{e}\) is the enthalpy at the equator. From (9), taking into account (6)-(8), we may derive the generalization of the Rankine-Hugoniot relation:
\[\frac{h_{1}[\rho_{e1}\zeta_{n,1}]^{-2}-h_{2}[\rho_{e2}\zeta_{n,2}]^{-2}}{[ \rho_{e1}\zeta_{n,1}]^{-1}-[\rho_{e2}\zeta_{n,2}]^{-1}}=\frac{\zeta_{1,2}w_{e2 }-\zeta_{1,1}w_{e1}}{p_{e2}\zeta_{(n+1),2}-p_{e1}\zeta_{(n+1),1}}. \tag{10}\]
If we assume a homogeneous vertical distribution, \(\zeta_{k,i}=\zeta_{k}(r_{2},h_{i})=2h_{i}\), and \(h_{1}=h_{2}\), then the usual expression may be obtained:
\[\frac{\rho_{e2}^{-1}+\rho_{e1}^{-1}}{2}=\frac{w_{e2}-w_{e1}}{p_{e2}-p_{e1}}. \tag{11}\]
For a given equation of state and known values of thermodynamic quantities on both sides of the discontinuity, (10) may be considered as an implicit equation relating the heights \(h_{1}\), \(h_{2}\) and the radius \(r_{2}\).
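To make the implicit character of (10) concrete, the sketch below evaluates its residual (left-hand side minus right-hand side) as a function of \(h_{1}\) for the photon-gas equation of state of Sect. 6 (\(n=3\), \(w=4p/\rho\), \(p=aT^{4}/3\)) and the parameter values quoted there; the trial shock radius \(r_{2}\) and TL temperature are illustrative assumptions. A root of this residual, found jointly with condition (18), fixes the TL geometry.

```python
# Sketch: residual of the generalized Rankine-Hugoniot relation (10) as a
# function of the TL height h1, for a photon-gas EoS (n = 3, w = 4p/rho,
# p = a_rad*T^4/3). CGS units; temperatures in keV. Trial values only.
import numpy as np
from scipy.integrate import quad

G, M_sun = 6.674e-8, 1.989e33
a_rad, keV = 7.5657e-15, 1.1605e7           # erg cm^-3 K^-4, K per keV
GM = G * 1.5 * M_sun

def phi(r, z):                               # Newtonian potential of Sect. 6
    return -GM / np.sqrt(r**2 + z**2)

def zeta(a, r, h):                           # height integral, Eq. (5)
    f = lambda z: ((phi(r, h) - phi(r, z)) / (phi(r, h) - phi(r, 0.0))) ** a
    return quad(f, -h, h)[0]

def rh_residual(h1, h2, r2, rho1, T1, rho2, T2, n=3):
    p1, p2 = a_rad * (T1 * keV) ** 4 / 3, a_rad * (T2 * keV) ** 4 / 3
    w1, w2 = 4 * p1 / rho1, 4 * p2 / rho2    # photon-gas enthalpy
    i1, i2 = 1 / (rho1 * zeta(n, r2, h1)), 1 / (rho2 * zeta(n, r2, h2))
    lhs = (h1 * i1**2 - h2 * i2**2) / (i1 - i2)
    rhs = ((zeta(1, r2, h2) * w2 - zeta(1, r2, h1) * w1)
           / (p2 * zeta(n + 1, r2, h2) - p1 * zeta(n + 1, r2, h1)))
    return lhs - rhs

h2 = 13.3e5                                  # disk height, cm
for ratio in (10, 30, 50, 80):               # trial h1/h2 values
    res = rh_residual(ratio * h2, h2, r2=100e5, rho1=2e-7, T1=10.0,
                      rho2=1.2e-5, T2=0.5)
    print(f"h1/h2 = {ratio}: residual = {res:.3e}")
```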
## 5 Rotation
The continuity equation, integrated over the height has the following solution:
\[r\rho_{e}u\zeta_{n}=-q, \tag{12}\]
where \(q=\dot{M}/2\pi>0\) is a constant. The equation of force balance in \(\phi\) direction for the TL is the following:
\[\frac{\partial}{\partial r}(r^{2}\rho_{1}u_{1}r\omega_{1}-\eta_{1}r^{3} \omega_{1}^{\prime})=0. \tag{13}\]
Integrated over the height it has the solution \(\omega_{1}=C_{1}r^{-\gamma_{1}}+C_{2}r^{-2}\), where we denoted
\[\gamma_{1}=\frac{q}{2h_{1}\eta_{1}}=\frac{\dot{M}}{4\pi h_{1}\eta_{1}}, \tag{14}\]
which is nothing more than the Reynolds number. Before defining the unknown constants, let us discuss the boundary conditions.
In our model, we do not take into account the gradual broadening of the flow after the shock wave, nor its gradual narrowing near the CO. Neglecting these effects, we require the angular velocity to equal the disk value at the outer boundary and the CO value at the inner boundary:
\[\omega_{1}(r_{1})=\omega_{s}, \tag{15}\]
Figure 2: The dependence of the ratio of the TL height to that of the disk, \(h_{1}/h_{2}\), on the Reynolds number \(\gamma_{2}=\dot{M}/4\pi h_{2}\eta_{2}\) and the temperature in the TL, \(T_{1}\), for the ratio of turbulent viscosities \(\eta_{2}/\eta_{1}=45\). All other calculations were carried out along the diagonal line, which reflects the transition between the high/soft and low/hard states.
\[\omega_{1}(r_{2})=\omega_{K}(r_{2}). \tag{16}\]
In addition to the conservation of radial momentum at the discontinuity, there must also be conservation of momentum in the \(\phi\)-direction. Given the viscosity, the law of \(\phi\)-momentum continuity, integrated over height, reads as follows:
\[j\,\omega_{1}-2h_{1}\eta_{1}\omega_{1}^{\prime}=j\,\omega_{2}-2h_{2}\eta_{2} \omega_{2}^{\prime}, \tag{17}\]
where \(j\) is the mass flux from (6).
Since we suppose that at the outer boundary \(r=r_{2}\) both angular velocities are equal to the Keplerian one (16), we have the following condition for the derivative of the angular velocity:
\[\omega_{1}^{\prime}(r_{2})=\frac{\eta_{2}}{\eta_{1}}\frac{h_{2}}{h_{1}}\omega _{K}^{\prime}(r_{2}). \tag{18}\]
Thus, for the second-order differential equation (13) we have three boundary conditions (15), (16), (18), i.e. the problem is overdetermined. Using conditions (15) and (16) we may write down the solution:
\[\omega_{1}=\frac{1}{r_{1}^{2}r_{2}^{\gamma_{1}}-r_{1}^{\gamma_{1}}r_{2}^{2}}\bigg{(}\frac{r_{1}^{2}r_{2}^{2}}{r^{2}}(\omega_{K}r_{2}^{\gamma_{1}}-\omega_{s}r_{1}^{\gamma_{1}})-\frac{r_{1}^{\gamma_{1}}r_{2}^{\gamma_{1}}}{r^{\gamma_{1}}}(\omega_{K}r_{2}^{2}-\omega_{s}r_{1}^{2})\bigg{)}, \tag{19}\]
and (18) may be used to obtain another implicit equation for \(h_{1}\), \(h_{2}\) and \(r_{2}\) with given \(\omega_{s}\), \(\omega_{K}(r_{2})\) and \(r_{1}\). Thus, knowing the CO radius \(r_{1}\) and the height of the accretion disk \(h_{2}\), as well as the thermodynamic characteristics of the plasma \(\rho_{e}\), \(p_{e}\), \(w_{e}\) on both sides of the discontinuity, the length and height of the transition layer can be calculated using (10), (18), (19).
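The solution (19) and the leftover condition (18) are easy to check numerically. In the sketch below, the values of \(\gamma_{1}\), \(\eta_{2}/\eta_{1}\) and \(h_{2}/h_{1}\) are illustrative trial inputs (in the paper \(\gamma_{1}\) follows from Eq. (14)); tuning \(h_{1}\) and \(r_{2}\) so that the printed residual of (18) vanishes, together with (10), yields the TL geometry.

```python
# Sketch: angular-velocity profile of Eq. (19) and the residual of the
# extra boundary condition (18). Trial values of gamma_1, eta2/eta1 and
# h2/h1 are illustrative assumptions, not fitted results. CGS units.
import numpy as np

G, M_sun = 6.674e-8, 1.989e33
GM = G * 1.5 * M_sun

def omega_K(r):
    return np.sqrt(GM / r**3)                # Keplerian angular velocity

def omega1(r, r1, r2, g1, w_s):              # Eq. (19)
    wK = omega_K(r2)
    D = r1**2 * r2**g1 - r1**g1 * r2**2
    return ((r1**2 * r2**2 / r**2) * (wK * r2**g1 - w_s * r1**g1)
            - (r1**g1 * r2**g1 / r**g1) * (wK * r2**2 - w_s * r1**2)) / D

def domega1(r, r1, r2, g1, w_s):             # analytic d(omega_1)/dr
    wK = omega_K(r2)
    D = r1**2 * r2**g1 - r1**g1 * r2**2
    return (-2 * (r1**2 * r2**2 / r**3) * (wK * r2**g1 - w_s * r1**g1)
            + g1 * (r1**g1 * r2**g1 / r**(g1 + 1)) * (wK * r2**2 - w_s * r1**2)) / D

r1, r2, g1, w_s = 15e5, 100e5, 2.5, 1e-2     # NS radius, trial shock radius
r = np.linspace(r1, r2, 5)
print(omega1(r, r1, r2, g1, w_s))            # equals w_s at r1, omega_K at r2

eta2_over_eta1, h2_over_h1 = 45.0, 1.0 / 60.0
wK_prime = -1.5 * omega_K(r2) / r2           # d(omega_K)/dr at r2
res = domega1(r2, r1, r2, g1, w_s) - eta2_over_eta1 * h2_over_h1 * wK_prime
print(res)  # condition (18): tune h1 and r2 to drive this residual to zero
```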
## 6 Setup and results
Since we aim to calculate the geometric characteristics \(h_{1}\), \(r_{2}\) of the TL, in (19) we have switched to the Reynolds number for the disk \(\gamma_{2}\), which is independent of \(h_{1}\):
\[\gamma_{2}=\gamma_{1}\left(\frac{\eta_{1}}{\eta_{2}}\right)\left(\frac{h_{1}} {h_{2}}\right)=\frac{\dot{M}}{4\pi h_{2}\eta_{2}}. \tag{20}\]
As the CO we considered an NS with a mass of \(M=1.5M_{\odot}\), a radius of \(r_{1}=15\) km and an angular velocity \(\omega_{s}=10^{-2}\) s\({}^{-1}\). The gravitational potential of the NS is Newtonian, \(\Phi=-GM/(r^{2}+z^{2})^{1/2}\).
Although in deriving the vertical distributions (3)-(4) it was assumed that the gas is polytropic, the equation of state for the quantities at the equator can, generally speaking, be given arbitrarily; we nevertheless considered the same equation of state for the vertical and radial dependences. For both the TL and the accretion disk we took the polytropic index \(n=3\), which corresponds to the photon gas equation of state with \(w=4p/\rho\) and \(p=aT^{4}/3\), where \(a\) is the radiation density constant and \(T\) is the temperature.
As the disk height, we took three Schwarzschild radii, \(h_{2}=3r_{\rm S}=13.3\) km. The maximum (at the equator) density and temperature in the disk were taken as \(\rho_{e2}=1.2\cdot 10^{-5}\) g cm\({}^{-3}\) and \(T_{e2}=0.5\) keV (see, e.g., ST09).
The density of the TL was chosen to correspond to the observed optical depth for Thomson scattering, \(\tau\simeq 2\) (see, e.g., ST09). This value can be achieved by selecting the concentration \(10^{17}\) cm\({}^{-3}\), which gives \(\rho_{e1}=2\cdot 10^{-7}\) g cm\({}^{-3}\).
For neutron stars, the TL temperature is around 25 keV in the low/hard state and evolves to approximately 2 keV in the high/soft state (e.g. Seifina & Titarchuk (2010)). In our analysis, there is no way that we can establish a law of temperature change as a function of the mass accretion rate. Therefore, to begin with, we solved (10), (18), (19) for the entire temperature range. As can be seen from Fig. 2, the height of the TL is almost independent of the temperature in it. The same picture is observed for the TL length. However, the solution of (10) alone, with some fixed \(r_{2}\), shows considerable sensitivity to the temperature. The independence of the TL geometry of its temperature obtained in our model is achieved only by solving the equations for height and length together. Thus, we can conclude that the geometric characteristics of the TL depend mainly on the accretion rate (Reynolds number \(\gamma\)). Hereafter, these geometric characteristics have been calculated assuming a simple linear relation, \(T_{1}(\gamma_{2})\), despite the insignificance of this correction. The relation \(T_{1}(\gamma_{2})\), represented in Fig. 2, reflects the transition described above between the high/soft and low/hard states.
The turbulent viscosity may be estimated in different ways. SS73 proposed its value as \(\eta=\alpha\rho c_{s}l_{\rm turb}\), where \(\alpha\) is a dimensionless constant, \(c_{s}\) is the sound speed, and \(l_{\rm turb}\) is a turbulent length scale, which is equal to the height \(h\) or, alternatively, to the value \((h^{-2}+h_{r}^{-2})^{-1/2}\), where \(h_{r}\) is the radial pressure scale \(h_{r}=|p/p^{\prime}|\) (Popham & Sunyaev (2001), PS01 hereafter). Also, in radiation-pressure dominated conditions it may be estimated (TLM98) as \(\eta=m_{p}n_{\rm ph}cl/3\), where \(m_{p}\) is the proton mass, \(n_{\rm ph}\) is the photon number density, \(c\) is the speed of light, and \(l\) is the
Figure 3: The dependence of the TL length \(L\) (left) and the ratio (right) of the TL height to disk height \(h_{1}/h_{2}\) on the Reynolds number, \(\gamma_{2}=\dot{M}/4\pi h_{2}\eta_{2}\) for different ratios of turbulent viscosities \(\eta_{2}/\eta_{1}\): 60 (solid line), 45 (dashed) and 30 (dotted).
photon mean free path. Thus the turbulent viscosity may depend on the conditions in the flow as well as on its geometry. We avoided further reduction and took several values of the ratio of turbulent viscosities \(\eta_{2}/\eta_{1}\), so that the resulting TL lengths \(L\) are close to the observed ones and \(\gamma_{2}\) corresponds to the \(\alpha^{-1}\) parameter of the SS73 model.
We calculated (10), (18), (19) with the parameters described above for three different values of the ratio \(\eta_{2}/\eta_{1}\): 30, 45, and 60. When this value is increased, there is no significant change, whereas at values well below 30 the length of the TL becomes unreasonably large.
As can be seen from Fig. 3, the TL length decreases with increasing \(\gamma_{2}\). At extremely low accretion rates, the length of the TL considerably exceeds the radius of the CO, and as the accretion rate increases, it can drop to tenths of it. The motion of a gas element is a superposition of rotation with an angular velocity \(\omega\) and infall onto the CO with the velocity \(u\), i.e. it is a spiral motion. A smaller radial gas flow (i.e. smaller \(\dot{M}\) and \(\gamma_{2}\)) increases the distance between the coils of such a spiral, thereby making it larger. Therefore, with a fixed viscosity responsible for removing the angular momentum, increasing the mass accretion rate leads to a decrease of the TL length.
The behavior of the TL height \(h_{1}\) is similar: it is about 80 times the height of the disk at a low accretion rate and drops to about 45 when the accretion rate increases (see Fig. 3). Thus, the height of the TL of an NS can be up to a thousand kilometers in the low/hard state and half that in the high/soft state. In the framework of our model, this result is readily explained. An increase in the mass accretion rate at a fixed density \(\rho_{2}\) implies an increase in the radial velocity. In this way, more matter flows through a given cross-section per unit of time, allowing the flow to pass through a smaller one.
## 7 Conclusion
We have solved the problem of the transition layer geometry near a central object (CO). The CO was supposed to be weakly magnetized; therefore we studied only the hydrodynamical properties of such configurations. We have derived the generalized Rankine-Hugoniot relation taking into account the different heights of the accretion disk and the TL and the inhomogeneous distributions along them. Then we solved the equations of continuity and force balance in the \(\phi\) direction and have shown that the latter must satisfy three boundary conditions. In order to satisfy them, the TL must have certain geometric characteristics.
As the CO we considered an NS, setting the gas parameters on both sides of the shock wave according to observations (see ST09). Then the length and height of the transition layer were calculated. We assume that the turbulent viscosity is constant on each side of the shock, with values chosen such that the Reynolds number \(\gamma_{2}\) corresponds to the \(\alpha^{-1}\) parameter of the SS73 model. As can be seen from Fig. 3, both the TL length and height decrease with increasing \(\gamma_{2}\). Therefore we may conclude that:
1. We clarify the nature of the region where X-ray spectra are formed.
2. When the source evolves to a softer state, the Compton corona region becomes more compact (see Fig. 3 and also ST06, ST09). We should emphasize that the integrated power \(P_{x}\) of the resulting power density spectra rapidly declines toward soft states (TSA07).
3. We should point out that the ratio of the TL height to the disk height \(h_{1}/h_{2}\) is quite large over the whole range of Reynolds numbers considered (see Fig. 3).
4. The calculations were done for an NS, although the same behavior should be expected for other types of COs (a WD and a BH).
5. According to the presented calculations and the observed manifestations (ST06, ST09), the TL height \(h_{1}\) substantially exceeds the disk height \(h_{2}\). Therefore, as also noted in PS01, one-dimensional constant-height models need to be improved to correctly describe the transition layer.
|
2306.11931 | The Noise Within: Signal-to-Noise Enhancement via Coherent Wave
Amplification in the Mammalian Cochlea | The extraordinary sensitivity of the mammalian inner ear has captivated
scientists for decades, largely due to the crucial role played by the outer
hair cells (OHCs) and their unique electromotile properties. Typically arranged
in three rows along the sensory epithelium, the OHCs work in concert via
mechanisms collectively referred to as the "cochlear amplifier" to boost the
cochlear response to faint sounds. While simplistic views attribute this
enhancement solely to the OHC-based increase in cochlear gain, the inevitable
presence of internal noise requires a more rigorous analysis. Achieving a
genuine boost in sensitivity through amplification requires that signals be
amplified more than internal noise, and this requirement presents the cochlea
with an intriguing challenge. Here, we analyze the effects of spatially
distributed cochlear-like amplification on both signals and internal noise. | Alessandro Altoè, Christopher A Shera | 2023-06-20T22:49:49Z | http://arxiv.org/abs/2306.11931v3 | The Noise Within: Signal-to-Noise Enhancement via Coherent Wave Amplification in the Mammalian Cochlea
###### Abstract
The mammalian inner ear's extraordinary sensitivity has captivated scientists for decades, largely due to the crucial role played by outer hair cells (OHCs) and their unique piezoelectric properties. These specialized cells, arranged in three rows along the cochlea's sensory tissue, work in concert to amplify the faintest sounds. Referred to as the "cochlear amplifier," this mechanism poses a fascinating question: How does it effectively enhance ear sensitivity in real-world scenarios? While simplistic views attribute this enhancement solely to increased cochlear gain, the presence of internal noise in practical settings necessitates a more nuanced approach. Achieving a genuine boost in sensitivity through amplification requires that the signals are amplified more than the internal noise, thus presenting an intriguing challenge. In this study, we analyze the effects of coherent amplification on both signals and internal noise, employing a simple yet powerful mathematical framework and a simplified model of cochlear physics. Our findings not only generalize and expand upon previous discoveries concerning the impact of spatially coherent amplification on signal degradation in active gain media, but also unveil the elegant and efficient wave-based strategy employed by the cochlea to boost ear sensitivity. When considering narrowband signals, this strategy boils down to spatially amplifying the signal within a localized region of the cochlea, followed by rapid attenuation. This location-dependent wave amplification and attenuation meets the necessary conditions for amplifying near-characteristic frequency (CF) signals more prominently than internal noise components of the same frequency. In particular, our analysis reveals that the sharp wave cut-off past the CF location greatly reduces noise contamination, leading us to conclude that the distinctive asymmetric shape of the "cochlear filters" underlies an important but previously unrecognized noise reduction mechanism. When broadening our perspective to encompass broadband signals and noise, the spatially constrained amplification of the different signal components substantially enhances the overall signal-to-noise ratio along the entire length of the cochlea, significantly facilitating detection of broadband signals.
## I Introduction
In the 19th century, Bernhard Riemann made the remarkable observation that the sound of a foghorn could be detected from a distance of 5 miles, leading to his conclusion that the human inner ear has the ability to perceive sounds that generate only sub-atomic motions of the eardrum [1]. Over the course of one and a half centuries, Riemann's conjecture has evolved into an empirical fact [2]. The extraordinary sensitivity of the healthy mammalian ear can be attributed to the piezoelectric behavior of outer hair cells (OHCs) [3], a group of cells arranged in three rows along the sensory tissue (the organ of Corti). Through their coordinated spatial cooperation, these cells magnify the vibrations of the sensory tissue in response to faint sounds by more than two orders of magnitude [4]. The prevailing belief in the field posits that OHCs actively amplify sound-induced waves as they propagate along the spiral structure of the cochlea, a collective mechanism referred to as the "cochlear amplifier" [5]. However, the rationale behind amplification as a viable strategy for enhancing ear sensitivity remains elusive, given that the minimum vibration level required for sensory neurons to detect signals is inherently dictated by the level of internal noise [see e.g. 6]: it remains unclear how the cochlear amplifier, while amplifying signals, can avoid amplifying the accompanying internal noise [7]. Indeed, the cochlear internal noise level depends on the same mechanisms that control signal amplification [8].
In this study, we investigate the impact of coherent amplification on signals and internal noise through two distinct models: a mathematical model of spatially distributed amplification and an active cochlear model. We begin by examining the simplest scenario, which involves a highly anisotropic (one-dimensional) medium comprising a series of cascaded "noisy" amplifiers where signals propagate in a single direction. We then move to the more complex and biologically relevant case where the medium is nearly isotropic (i.e., when signals propagate in both directions). By analyzing the spatial variation in signal-to-noise ratio (SNR) along these media and their dependence on medium gain, we observe advantageous effects of amplification. Drawing from these findings, we then investigate signal and noise amplification within a simple but physically realistic model of the cochlea. Through this exploration, we gain understanding of how
the cochlea's spatially coherent amplification enhances SNRs, thus boosting the sensitivity of the ear.
## II Spatially distributed amplification in noisy media
_Propagation of signals and noise in one direction._ We start by considering the simpler scenario of the (discrete) distributed "one-way" noisy amplifier, depicted in Fig. 1A. The model consists of a chain of amplifiers that multiply the input signal (\(S\)) by a factor \(g\), representing the amplifier gain. The medium's noise is represented by noise sources that are summed with the propagating signal after each amplification stage. To remove the ambiguity regarding whether noise should be included before or after the amplification stage, the model includes noise sources both at the input and output of the first and last amplifier, respectively. This model approximates a strongly anisotropic medium, where signals and noise propagate only in one direction (from left to right in Fig. 1A). This scenario accurately represents what occurs in many man-made systems, such as cascaded electronic amplifiers or radio repeaters.
In this model we can turn amplification "off" by imposing \(g=1\)--and hence model signal propagation in a lossless, noisy medium--or turn it "on" by imposing \(g>1\). When \(g<1\), the distributed amplifiers become effectively distributed "brakes" that attenuate propagating signals. By comparing signal and noise when \(g=1\), \(g>1\) and \(g<1\), we quantify the impact of amplification and attenuation on the SNR along the amplifier's chain (i.e., at the nodes \(\text{out}_{1,2\ldots n}\) in Fig. 1A.)
The amplitude of the signal at a given node \(n\) is simply the amplitude of the input signal passed through \(n\) multipliers (\(|S_{n}|=g^{n}|S|\)) and hence turning on the amplifier boosts the signal amplitude by a gain factor
\[G_{\text{signal}}[n]=g^{n}. \tag{1}\]
We focus our analysis on the physically relevant case when the noise sources are uncorrelated, meaning that the noise in the medium is spatially incoherent. The main results regarding the differential effects of amplification on signals and internal noise hold even when the noise sources are spatially coherent, and the corresponding demonstration is straightforward. For simplicity, we assume that the various noise sources are independent versions of the same stochastic process, with a root-mean-square (RMS) amplitude of \(\gamma\). In this case, the RMS amplitude of the noise (\(N_{\text{rms}}\)) at node \(n\) can be calculated by incoherent summation (linear summation of power) of the various amplified noise terms. Specifically, the noise power at node \(n\) can be expressed as a geometric series, where the \(m\)-th term represents the contribution of the \((n-m)\)-th source, amplified (or attenuated) \(m\) times. The expression for \(N_{\text{rms}}[n]\) can be simplified based on different scenarios:
\[N_{\text{rms}}[n]=\sqrt{\sum_{m=0}^{n}g^{2m}}\gamma=\begin{cases}\sqrt{\frac{g ^{2(n+1)}-1}{g^{2}-1}}\gamma&\text{for $g\neq 1$},\\ \sqrt{n+1}\gamma&\text{for $g=1$}.\end{cases} \tag{2}\]
Hence, turning on the amplifier boosts the noise gain by a factor of
\[G_{\text{noise}}[n]=\frac{N_{\text{rms}}[n]|_{g\neq 1}}{N_{\text{rms}}[n]|_{g=1} }=\sqrt{\frac{g^{2(n+1)}-1}{(n+1)(g^{2}-1)}}. \tag{3}\]
The SNR at the node \(n\) is given by \(R_{n}=|S_{n}|/\sqrt{2}N_{\text{rms}}\), where the factor of \(1/\sqrt{2}\) arises from taking the RMS amplitude of the signal. The effect of amplification on the system's sensitivity can be quantified by the SNR enhancement factor [9]
\[\mathbf{R}[n]=R_{n}(\text{on})/R_{n}(\text{off})=G_{\text{signal}}/G_{\text{ noise}}, \tag{4}\]
where \(R(\text{on})\) and \(R(\text{off})\) are the SNR with the amplifier on (\(g\neq 1\)) and off (\(g=1\)), respectively. Figure 1B illustrates the enhancement factor as a function of \(g\) for two values of \(n\). When \(\mathbf{R}>1\) the signal is amplified more than the internal noise, resulting in an increase in the SNR at the considered node. Conversely, when \(\mathbf{R}<1\), the signal is amplified less than the noise, leading to a decrease in the SNR at the considered node. It follows from Eqs. (1,3) that amplification (\(g>1\)) boosts signals more than internal noise, increasing the SNR at all nodes. In particular, the larger the gain, the larger \(\mathbf{R}\), resulting in a greater improvement in the SNR at any node. Additionally, the longer the chain of amplifiers, the larger the benefit of distributed amplification for the SNR and hence for the system's sensitivity. Conversely, when the amplifiers act as attenuators (\(g<1\)), \(\mathbf{R}<1\), meaning that the signal is attenuated more than the internal noise. A relevant measure of signal degradation is the noise factor \(\mathbf{F}_{n}=R_{n}/R_{0}\), which quantifies how the SNR degrades along the transmission line. In our case
\[\mathbf{F}_{n}=\sqrt{\frac{g^{2(n+1)}(1-g^{-2})}{g^{2(n+1)}-1}}, \tag{5}\]
which approaches 1 (no significant SNR degradation along the line) when \(g\gg 1\).
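The closed forms above are easy to evaluate and cross-check numerically; a minimal Python sketch follows, with the chain length and gain chosen as illustrative assumptions.

```python
# One-way noisy amplifier chain: closed-form signal/noise gains [Eqs. (1,3)],
# SNR enhancement R [Eq. (4)] and noise factor F [Eq. (5)], cross-checked by
# a Monte-Carlo simulation. Parameter values (g, N) are illustrative.
import numpy as np

def noise_rms(n, g, gamma=1.0):
    """RMS internal noise at node n [Eq. (2)]: n+1 sources of RMS gamma."""
    if np.isclose(g, 1.0):
        return gamma * np.sqrt(n + 1)
    return gamma * np.sqrt((g**(2 * (n + 1)) - 1) / (g**2 - 1))

def enhancement(n, g):
    """SNR enhancement R[n] = G_signal / G_noise [Eqs. (1), (3), (4)]."""
    return g**n / (noise_rms(n, g) / noise_rms(n, 1.0))

def noise_factor(n, g):
    """Noise factor F_n of Eq. (5), quantifying SNR degradation."""
    return np.sqrt(g**(2 * (n + 1)) * (1 - g**-2) / (g**(2 * (n + 1)) - 1))

g, N = 1.5, 20
print("R[N] =", enhancement(N, g), "  F_N =", noise_factor(N, g))

# Monte-Carlo cross-check of Eq. (2): propagate independent Gaussian noise
# through the chain (one source at the input, one after each amplifier).
rng = np.random.default_rng(0)
out = rng.standard_normal(200_000)
for _ in range(N):
    out = g * out + rng.standard_normal(200_000)
print("simulated N_rms =", out.std(), " closed form =", noise_rms(N, g))
```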
_Signal vs. noise amplification in isotropic media._ To gain a better understanding of the cochlea and similar isotropic media, we now examine the scenario where signals propagate in two directions and the gain can vary along the medium. We simplify the analysis by disregarding potential scattering effects within the medium, and assume that the various noise sources have equal amplitudes (Fig. 1C). In this case, we express the amplification of signals and noise at a given node \(n\) by considering the system as a combination of two "one-way" amplification models (Fig. 1D). The propagation of a source from a node \(n^{\prime}\) to a receiver node \(n\) in the model is encapsulated
by the discrete Green's function \(G[n,n^{\prime}]\). In the simplified model, where each node \(n\) amplifies the signal by a factor \(g_{n}\):
\[G[n,n^{\prime}]=\prod_{m=\min(n,n^{\prime})-1}^{\max(n,n^{\prime})-1}g_{m}+ \delta_{n,n^{\prime}}, \tag{6}\]
where \(\delta_{n,n^{\prime}}\) is the Kronecker delta, and where it can be observed that \(G[n^{\prime},n]=G[n,n^{\prime}]\). In this model, the signal is effectively a source at the node 0 and hence its amplitude at the node \(n\) is
\[|S_{n}|=|S|G[n,0]. \tag{7}\]
The noise response at node \(n\) can be decomposed as the incoherent summation of noise from both the left and right sides of the node:
\[N_{\text{rms}}[n]=\sqrt{\sum_{n^{\prime}=0}^{N}(G[n,n^{\prime}]\gamma)^{2}}=\gamma\sqrt{\sum_{n^{\prime}=0}^{n}(G[n,n^{\prime}])^{2}+\sum_{n^{\prime}=n+1}^{N}(G[n,n^{\prime}])^{2}} \tag{8}\]
which can be included in the simple "one-way model" by adding the noise contribution from sources located to the right of the considered node (Fig. 1D). In this case, amplification is not necessarily beneficial for the SNR as in the simpler model of Fig. 1A. When the goal is to maximize the SNR at a given node \(n\) (for a signal source at node 0), the optimal choice of gain distribution along the transmission line is
\[g_{n^{\prime}}\gg 1\text{ for }n^{\prime}<n \tag{9}\] \[g_{n^{\prime}}\ll 1\text{ for }n^{\prime}\geq n.\]
In this case, the system approaches the performance of the one-way amplification model at the \(n\)-th node. However, unlike in the "one-way" model, it is not possible to boost the SNR at all nodes simultaneously (see Fig. 1E).
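The sketch below reproduces this behavior for the gain profile of Fig. 1E. Since Eq. (6) can be read in more than one way, the code adopts the natural convention that each hop between adjacent nodes contributes one gain factor, with \(G[n,n]=1\); all other choices are illustrative assumptions.

```python
# Bidirectional noisy-amplification model of Fig. 1C-E. Gains follow the
# Fig. 1E example: g_m = 3 for m < 5, g_m = 0.1 for m >= 5.
import numpy as np

def greens(gains):
    """Discrete Green's function G[n, n']: one gain factor per hop."""
    n_nodes = len(gains) + 1
    G = np.ones((n_nodes, n_nodes))
    for n in range(n_nodes):
        for m in range(n_nodes):
            lo, hi = min(n, m), max(n, m)
            G[n, m] = np.prod(gains[lo:hi])  # empty product = 1 when n == m
    return G

def snr(gains):
    """SNR at every node for a unit signal source at node 0 [Eqs. (7,8)]."""
    G = greens(np.asarray(gains, float))
    signal = G[:, 0]
    noise = np.sqrt((G**2).sum(axis=1))      # incoherent sum over all sources
    return signal / noise

g_on = [3.0] * 5 + [0.1] * 5                 # gains chosen to boost node 5
g_off = [1.0] * 10                           # amplifier "off"
R = snr(g_on) / snr(g_off)                   # enhancement factor per node
for node, r in enumerate(R):
    print(f"node {node:2d}: enhancement = {r:8.3f}")
```

Running this shows \(\mathbf{R}>1\) only in the neighbourhood of node 5, with strong SNR degradation near the input, consistent with Fig. 1E.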
## III Signal vs. Noise Amplification in the Mammalian Cochlea
Figure 1: A) Effect of spatially distributed "one-way" amplification on signal and internal noise. The model consists of a chain of linear amplifiers (multipliers) with gain \(g\); the effect of internal noise is simulated by adding noise before and after each amplification stage. B) SNR enhancement (**R**) at the \(N\)-th output of the amplifier chain (shown for \(N\)=10 and \(N\)=20) as a function of the amplifier gain \(g\). C) Bidirectional noisy amplification model. In this model, internal noise propagates (while identically amplified) in both directions. D) Equivalent one-way amplification model used to study the noise and signal response at the \(n\)-th node. E) Example of the enhancement factor at different nodes in a chain of \(N=10\) bidirectional amplifiers. In this example the amplifier gains are chosen to improve the SNR at node 5 (see text), by imposing \(g_{m}=3\) for \(m<5\) and \(g_{m}=0.1\) for \(m\geq 5\).

_The cochlear amplifier._ Figure 2A,B illustrates the general function of the mammalian cochlea. Briefly, sound-induced vibration of the stapes (the "last" bone of the middle ear) displaces inner-ear fluid, launching hydromechanical waves that propagate slowly from the base (i.e., the entrance) to the apex (the end) of the cochlea. Cochlear wave propagation is frequency dependent, so that waves peak on the basilar membrane (BM) at different locations depending on frequency--i.e., the cochlea maps frequencies into locations, with frequency decreasing from base to apex. The presence of the cochlear amplifier _in vivo_ boosts waves as they propagate towards their characteristic frequency (CF) place, producing a stronger and more spatially localized response than for passive wave propagation in a dead cochlea. This amplification process effectively narrows the bandwidth of the sensory tissue and the response of auditory neurons, as there is a well-established symmetry between spatial and frequency tuning [10]. By narrowing the bandwidth of the "cochlear filters," their sensitivity is naturally enhanced through well-understood principles [11]. However, the specific benefits of amplification for increasing the ear's sensitivity remain unclear. It is theoretically possible to narrow the bandwidth of auditory filters through completely passive (non-amplifying) mechanisms [e.g. 12]. Additionally, the nearly isotropic (bidirectional) [13] nature of cochlear amplification further raises questions about its overall impact on global cochlear sensitivity.
Before delving into the effects of amplification in a model, it is worth examining the stereotyped response of the basilar membrane (BM) depicted in Fig. 2B, as it provides an intuitive explanation of how cochlear amplification enhances the ear's sensitivity. At low sound levels, where the remarkable sensitivity of the ear comes into play, the detection of sound relies on direct activation of the most sensitive auditory neurons [14], which primarily respond to the velocity of the basilar membrane (BM) [15]. In essence, signal detection hinges upon the sensitivity of the sensory tissue that is finely tuned to the frequency of the signal. Thus, given the signal frequency, the challenge of cochlear sensitivity lies in maximizing the SNR at a specific location. The cochlea ingeniously solves this problem--akin to the problem of maximizing SNR at a specific node in the discrete amplifier model discussed earlier--through active traveling wave amplification.
_Amplification of external signals vs. internal sources._ In our analysis of cochlear mechanics, we consider a general model that describes the relationship between the velocity of the sensory organ's center of mass (\(V_{\rm CP}\)) and the pressure difference across it (\(P_{0}\)) in the linear regime. This relationship is characterized by a phenomenological admittance \(Y\), such that \(V_{\rm CP}=YP_{0}\) (frequency dependencies are not written explicitly for simplicity). Following Newton's second law and mass conservation we have that [see e.g. 16, and Appendix A]
\[\frac{1}{S}\frac{d}{dx}(S\frac{d\bar{P}}{dx})+\alpha ZY\bar{P}=0. \tag{10}\]
In this equation \(\bar{P}\) is the pressure difference between the "upper" and "lower" fluid chambers (see Fig. 2A) averaged over the chambers' cross-sectional area (\(S\)). The term \(Z=i\omega M\) represents the "longitudinal" impedance accounting for the fluid's effective mass (\(M\)); \(\alpha=P_{0}/\bar{P}\) is a function that relates average and driving pressure [17], depending on wavelength and on the model's geometry. For simplicity, we assume one-dimensional (1D) wave propagation, which allows us to set \(\alpha=1\) and \(\bar{P}=P_{0}\). It is worth noting that the equations for two- or three-dimensional (2D and 3D) models are more complex and can be found in Appendix A. However, the qualitative implications derived from the 1D model are well understood and still hold true in more realistic 2D and 3D models, as we will illustrate through numerical simulations [18].
When we assume nearly ideal "reflectionless" boundary conditions at the apical and basal end, we have that the 1D Green's function is (see Appendix)
\[G(x,x^{\prime}) \approx \tag{11}\] \[\frac{1}{2i}\sqrt{\frac{S(x^{\prime})}{S(x)}\frac{1}{k(x)k(x^{ \prime})}}\exp[-i\int\limits_{\min(x,x^{\prime})}^{\max(x,x^{\prime})}k(\hat{x })d\hat{x}],\]
with \(k\) the complex wavenumber. The pressure response when the cochlea is driven from the stapes is simply [19]
\[P(x)=2ik(0)G(x,0). \tag{12}\]
In the cochlea, the gain per unit length (\(g\)) is primarily determined by the imaginary part of the wavenumber (\(k\)) when the spatial gradients of the cross-sectional area (\(S\)) and of \(k\) are gentle enough. Specifically, the log-gain per unit length can be approximated as \(\frac{d\log|G|}{dx}\sim\Im(k)\). When \(\Im(k)>0\), the gain per unit length is larger than one, indicating amplification. On the other hand, when \(\Im(k)<0\), the gain per unit length is less than one, indicating attenuation. It is important to note that when the cochlear amplifier is inactive, \(\Im(k)<0\) everywhere. When the amplifier is maximally active, the characteristic frequency (CF) place is approximately located at \(\hat{x}\), where \(\Im(k)=0\), with \(\Im(k)>0\) when \(x<\hat{x}\), and \(\Im(k)<0\) when \(x>\hat{x}\)[20]. Importantly, the wave cut-off is dramatic just apical to the CF region (see Fig. 2B), so that \(g\ll 1\) just past \(\hat{x}\). In summary, prior to the CF location waves are amplified (\(g>1\)), while past the CF location they are rapidly attenuated (\(g\ll 1\)). This arrangement fulfills the conditions for boosting the SNR at the CF place according to the analysis of the bidirectional amplifier [Eq. (9) and Fig. 1C].
_Amplification of narrowband signals and noise._ In the context of analyzing the intrinsic effects of spatial amplification on SNR enhancement, we can focus on a narrow frequency range centered around the characteristic frequency (CF). Within an arbitrarily narrow frequency range, the internal noise can be approximated as spatially incoherent sinusoidal sources with randomly distributed amplitude. The mean of the amplitude distribution is denoted as \(\mu\), and the variance as \(\sigma^{2}\). By considering this simplified noise model, we can examine the impact of signal amplification on SNR without the confounding effects of bandwidth reduction induced by amplification. The rms noise pressure at a given location \(x\) can be approximated as
\[\bar{P}_{\rm noise}(x)\approx\gamma\sqrt{\int_{0}^{L}|G(x,x^{\prime})|^{2}dx^ {\prime}}, \tag{13}\]
where \(\gamma^{2}=\mu^{2}+\sigma^{2}\). This expression represents the statistical average of the noise pressure considering the amplitude distribution of the incoherent sinusoidal sources. The integral \(\int_{0}^{L}|G(x,x^{\prime})|^{2}dx^{\prime}\) captures the propagation of noise power from basal and apical noise sources to the location \(x\). Assuming that the wavenumber at the cochlear entrance [\(k(0)\)] is nearly constant, independent of cochlear amplification, we have that [21]
\[R\propto\frac{|G(x,0)|}{\sqrt{\int_{0}^{x}|G(x,x^{\prime})|^{2}dx^{\prime}+\int _{x}^{L}|G(x,x^{\prime})|^{2}dx^{\prime}}}, \tag{14}\]
which represents the ratio between the signal amplification at location \(x\) and the square root of the noise power contributions from basal and apical noise sources. In this equation, \(\int_{0}^{x}|G(x,x^{\prime})|^{2}dx^{\prime}\) and \(\int_{x}^{L}|G(x,x^{\prime})|^{2}dx^{\prime}\) represent noise power propagation to the location \(x\) from basal and apical noise sources, respectively. The values of these functions, calculated in an "overturned" 2D model with the cochlear amplifier active and narrowband noise centered around 10 kHz, are shown in Fig. 2C. The figure shows that at the CF place the contribution of apical noise sources is negligible compared to that of basal noise sources. This result is consistent with the expectation that the sharp wave cut-off past the CF place suppresses the contribution of apical noise, making it insignificant. Thus, from the perspective of the CF place, the problem of signal versus noise amplification in the cochlea reduces to the simpler "one-way" amplification model, where amplification boosts the signal more than the internal noise.
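A toy numerical sketch of Eqs. (11)-(14) makes this concrete. The wavenumber profile below (uniform gain basal to the CF place, sharp linear cut-off past it), the tapering, and all parameter values are illustrative assumptions, not the model used for Fig. 2; the point is only that the basal noise integral dominates the apical one at the CF place.

```python
# Toy sketch of the WKB Green's function [Eq. (11)] and the narrowband SNR
# ratio of Eq. (14). All profiles and parameter values are illustrative.
import numpy as np

L, M = 1.0, 2000                    # cochlear length (arbitrary units), grid
x = np.linspace(0.0, L, M)
dx = x[1] - x[0]
x_cf = 0.5                          # CF place of the probe frequency (assumed)

k_re = 40.0 * np.ones(M)            # slowly varying real part (assumed)
k_im = np.where(x < x_cf, 5.0, -120.0 * (x - x_cf) / L)
k = k_re + 1j * k_im                # Im(k) > 0: gain; Im(k) < 0: attenuation
S = 1.0 - 0.5 * x / L               # tapered cross-sectional area (assumed)

phase = np.cumsum(k) * dx           # running integral of k

def G(n, m):
    """|G(x_n, x_m)| per the WKB expression of Eq. (11)."""
    lo, hi = min(n, m), max(n, m)
    amp = 0.5 * np.sqrt(abs(S[m] / (S[n] * k[n] * k[m])))
    return amp * abs(np.exp(-1j * (phase[hi] - phase[lo])))

n_cf = int(np.argmin(abs(x - x_cf)))
signal = G(n_cf, 0)                                    # numerator of Eq. (14)
basal  = sum(G(n_cf, m) ** 2 for m in range(n_cf + 1)) * dx
apical = sum(G(n_cf, m) ** 2 for m in range(n_cf + 1, M)) * dx
print("basal/apical noise power at the CF place:", basal / apical)
print("SNR at the CF place (up to constants):",
      signal / np.sqrt(basal + apical))
```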
Figure 2: A) Simplified anatomical view of the mammalian cochlea. B) BM magnitude responses in vivo (amplifier on) and postmortem (amplifier off) to 10 kHz and 30 kHz, calculated in a 2D finite-difference model of the mouse cochlea. C) Apical and basal noise-propagation functions for narrowband noise centered around 10 kHz. These functions quantify, at each location, the expected noise power due to basal and apical noise sources (assuming equal-power sources). D) BM response magnitude to a sound signal and to narrowband internal noise at 10 and 30 kHz, for the postmortem and in vivo models. The curves are normalized so that the response magnitudes to signal and noise are the same postmortem at the CF place; in this way the difference between signal and noise responses in vivo visually illustrates that turning on the amplifier boosts the SNR at the CF place. E) Enhancement factor (ratio between the SNR with the amplifier on and off) along the cochlea, calculated for narrowband near-CF signal and noise, and for broadband signal and noise (white in [4, 70] kHz). This figure shows that the near-CF SNR enhancement caused by turning on the amplifier produces a broadband, global SNR enhancement.

Figure 2D depicts the differential effects of amplification on signal and internal noise in the 2D cochlear model, for frequencies of 30 kHz and 10 kHz. As expected from the analysis of the bidirectional amplifier, turning on the cochlear amplifier boosts the signal more than the internal noise near the CF location. This is evident in the plot, where in vivo the signal amplitude is larger than that of the noise near the region where the BM maximally responds [signal and noise levels are normalized so that postmortem they are the same (0 dB) at the CF place]. However, as we move away from the CF location, such as near the cochlear entrance, amplification is more pronounced for the internal noise than for the signal. This differential effect highlights the selective enhancement of the signal relative to the noise at the CF location, where the cochlea achieves optimal sensitivity for sound detection.
_Amplification of broadband signals and noise._ Figure 2E shows the enhancement factor along the cochlea when both signals and noise are broadband--in these simulations signal and noise have a white spectrum in the range [4, 70] kHz, covering the CF range of the cochlear model. Except near the cochlear entrance--where CF waves do not travel far enough to receive substantial amplification, to the point that there is no SNR enhancement even at CF (open symbols in Fig. 2E)--amplification substantially boosts the broadband SNR, by \(\sim\)10 dB at the most sensitive locations. These results show that the spatially restricted enhancement of the various signal components produces a global increase in cochlear sensitivity to broadband signals.
## IV Discussion
Despite the inner ear being a biological system with astounding sensitivity, the fundamental mechanisms underlying that sensitivity have been largely unexplored. Experts in cochlear mechanics typically equate "gain" with "sensitivity," forgetting that the sensitivity of a system depends on its internal noise [7]. The handful of attempts at relating cochlear amplification to (true) cochlear sensitivity [e.g. 22; 23] ignore the contribution of wave propagation and rely on non-equilibrium oscillator models whose relevance to cochlear mechanics is uncertain. Here we have shown that established mechanisms of spatially distributed amplification produce significant signal enhancement (Fig. 2E)--not unlike man-made systems such as lasers and active transmission lines [24; 9]. Indeed, by coherently amplifying signals while projecting frequencies onto locations, the cochlea employs laser-like narrowband signal amplification [5] to improve sensitivity to both narrow- and broad-band signals [Fig. 2D].
The cochlear waveguide structure is essentially a non-homogeneous transmission line whose cut-off frequency changes with location (or, equivalently, with CF) [25]. In this way, waves within the cochlear frequency range are greatly attenuated before reaching the apical end [see also 26], avoiding the noise "build-up" due to scattering from the apical termination that greatly degrades the performance of active transmission lines [24]. Our results highlight the functional importance of the asymmetric shape--characterized by a steep high-frequency flank produced by the wave cut-off past the CF place--of the so-called "cochlear filters" (i.e., the BM frequency response at one location): near-CF waves coming from more basal locations are amplified, while those coming from more apical locations (where there are noise sources but no signal) are squelched. That is, the cochlea's steep wave cut-off underlies a peculiar form of spatial filtering of near-CF components, optimized to reject noise. It is worth noting that the cochlea's ear-horn-like geometry contributes significantly to this "optimized spatial filtering": the tapered geometry facilitates the propagation of waves from the base to the apex, allowing for efficient signal propagation and amplification [see 19, and Appendix A].
The cochlear signal enhancement strategy elucidated here appears rather simple while biologically robust: cochlear waves are first amplified and then rapidly attenuated. In this scenario, the CF location, the spatial amplification, and the location where the SNR is maximally boosted are intrinsically related--there is no need to assume a priori a tight coordination between various frequency- and location-dependent mechanisms; everything follows from the principal mechanism (spatial amplification). Indeed, the fact that the CF location is primarily determined by the location where amplification changes from positive to negative automatically makes the cochlea operate under the conditions for boosting the SNR at the CF place, regardless of the details of the function that describes amplification and of its possible perturbations (such as those due to potential "manufacturing errors").
To conclude, in our model the cochlear amplifier boosts the ear's sensitivity to broadband sounds by 6 dB or more (and significantly more than that for narrowband sounds). As the ear is a pressure sensor, this means that cochlear wave amplification at minimum doubles the distance over which a broadband sound, such as a transient sound caused by a predator's sudden movement, can be heard--and much more than that in the case of narrowband stimuli. Therefore, it is not a great leap of imagination to speculate that the peculiar wave-based frequency analysis performed by the mammalian cochlea might have evolved primarily in response to the selection pressure of extending the range of broadband sound detection.
###### Acknowledgements.
Supported by grants R21 DC019712 (AA) and R01 DC003687 (CAS) from the NIDCD/NIH.
## Appendix A Green's function in 1,2 and 3 dimensions
_Equations of motion._ The average pressure difference between the two scalae (\(\bar{P}\)) and the partition's center-of-mass velocity (\(V_{\rm CP}\)) are related by the well-known transmission line equations
\[\begin{split}\frac{d\bar{P}}{dx}&=-\frac{i\rho\omega}{S}U, \\ \frac{dU}{dx}&=-bV_{\text{CP}},\end{split} \tag{10}\]
with \(U\) volume velocity, \(S\) the duct's effective acoustic area, \(\rho\) fluid's density, \(\omega\) angular frequency and \(b\) the partition's effective width. Partition's velocity can be expressed as the product of the pressure difference across the tissue \(P_{0}\) and a complex admittance \(Y_{\text{CP}}\)
\[V_{\text{CP}}=P_{0}Y_{\text{CP}}=\alpha\bar{P}Y_{\text{CP}} \tag{11}\]
with \(\alpha=P_{0}/\bar{P}\) the short-wave hydrodynamic factor. Combining Eqs. (10)-(11) we find the following expression for \(\bar{P}\)
\[\frac{1}{S}\frac{d}{dx}(S\frac{d\bar{P}}{dx})+k^{2}\bar{P}=0, \tag{12}\]
with \(k^{2}=\alpha ZY\) and \(Z=\frac{i\rho\omega}{S}\) the acoustic impedance of the scalae.
_1D models._ In 1D models, the pressure field is a function of the longitudinal distance from the stapes (\(x\)), so that \(\bar{P}\equiv P_{0}\) and \(\alpha=1\). The Green's function \(G_{\text{1D}}(x,x^{\prime})\) is the response to a unitary point pressure source at \(x^{\prime}\), and hence can be expressed as
\[\frac{1}{S}\frac{d}{dx}(S\frac{dG_{\text{1D}}}{dx})+k^{2}G_{\text{1D}}=-\delta(x-x^{\prime}), \tag{13}\]
with \(k\) the wavenumber and \(\delta\) the Dirac delta function; the subscript in \(G_{\text{1D}}\) distinguishes the one-dimensional Green's function from its 2D and 3D counterparts introduced below. Note that the pressure source has units of pressure over length squared. We assume reflectionless boundary conditions and calculate \(G_{\text{1D}}\) using the Wentzel-Kramers-Brillouin (WKB) approximation. To do so we make a change of variable in Eq. (13). In particular, we define
\[\chi=S(0)\int_{0}^{x}\frac{dx}{S(x)}, \tag{14}\]
and \(\hat{k}=\frac{S}{S(0)}k\). Eq. (13) can then be rewritten as
\[\begin{split}\frac{d^{2}G_{\text{1D}}}{d\chi^{2}}+\hat{k}^{2}G_{ \text{1D}}&=-\frac{S^{2}}{S^{2}(0)}\delta(x-x^{\prime})=\\ &=-\frac{S(x^{\prime})}{S(0)}\delta(\chi-\chi^{\prime}),\end{split} \tag{15}\]
In the \(\chi\) domain the 1D Green's function is [25]
\[G(\chi,\chi^{\prime})=\frac{1}{2i}\sqrt{\frac{1}{\hat{k}(\chi)\hat{k}(\chi^{ \prime})}}\exp[-i\int\limits_{\min(\chi,\chi^{\prime})}^{\max(\chi,\chi^{ \prime})}\hat{k}(\hat{\chi})d\hat{\chi}]. \tag{16}\]
Accounting for the source amplitude in the \(\chi\) domain [Eq. (15)], and converting the solution back to the \(x\) domain, we obtain
\[\begin{split} G_{\text{1D}}&(x,x^{\prime})=\\ &=\frac{1}{2i}\sqrt{\frac{S(x^{\prime})}{S(x)}\frac{1}{k(x)k(x^{ \prime})}}\exp[-i\int\limits_{\min(x,x^{\prime})}^{\max(x,x^{\prime})}k(\hat{x} )d\hat{x}].\end{split} \tag{17}\]
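As a sanity check, one can verify numerically that the WKB expression (17) approximately annihilates the homogeneous operator away from the source; the smooth profiles below are illustrative assumptions chosen so that the WKB validity conditions (gentle gradients) hold.

```python
# Consistency check of the 1D WKB Green's function, Eq. (17): away from the
# source it should approximately satisfy (1/S) d/dx (S dG/dx) + k^2 G = 0.
import numpy as np

M = 4000
x = np.linspace(0.0, 1.0, M)
dx = x[1] - x[0]
k = (30.0 + 10.0 * x) * (1.0 - 0.05j)   # gently varying complex wavenumber
S = np.exp(-x)                          # gently tapered cross-sectional area

xs = 0.25                               # source location (assumed)
ns = int(np.argmin(abs(x - xs)))
phase = np.concatenate(([0.0], np.cumsum(0.5 * (k[1:] + k[:-1]) * dx)))

G = np.empty(M, complex)
for n in range(M):
    lo, hi = min(n, ns), max(n, ns)
    G[n] = (1 / 2j) * np.sqrt(S[ns] / S[n] / (k[n] * k[ns])) \
           * np.exp(-1j * (phase[hi] - phase[lo]))

dG = np.gradient(G, dx)                 # centered finite differences
residual = np.gradient(S * dG, dx) / S + k**2 * G
mask = abs(x - xs) > 0.05               # exclude the source neighbourhood
print("relative residual away from the source:",
      abs(residual[mask]).max() / abs(k**2 * G)[mask].max())
```

The printed ratio should be much smaller than one whenever the gradients of \(k\) and \(S\) are gentle, as assumed by the WKB approximation.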
_2D and simplified 3D models._ A tapered 2D "box" model can be physically interpreted as a model where the cross-sectional area of the duct is a rectangle with constant width and varying height, while the partition spans the entire cochlear width and moves up and down as a piston ("wall-to-wall carpeting", see [25]). The equations for a 2D model are approximately valid for a 3D model where the cross-sectional shapes of the cochlear duct and cochlear partition are circular. When the radius of the partition is sufficiently small, the pressure is approximately a function of the distance from the stapes (\(x\)) and of the distance from the partition's center (\(r\))--i.e., under these assumptions the 3D model is effectively 2D in cylindrical coordinates [27; 28].
Importantly, although the 2D box-model equations are valid for a 3D cylindrical model, the parameters and their spatial gradients differ between the two models [see also 29]. While the partition's admittance can be strategically chosen so that \(k\) is the same in 2D and 3D [30], the spatial gradient of the cross-sectional area \(S\) (which determines an important geometric pressure-gain factor [19]) differs in the two models: in the tapered box model \(S\propto H\), while in the 3D cylindrical model \(S\propto H^{2}\), with \(H\) the scala height (or radius).
Keeping in mind these important caveats, we now proceed to heuristically determine the reduced 2D Green's function \(\bar{G}_{\text{2D}}(x,x^{\prime})\), which describes the average pressure at \(x\) in response to a 2D source placed at the center of the partition, i.e., at \(y=0\). Following [25], we note that the 2D reduced Green's function must obey the following relation
\[\frac{1}{S}\frac{d}{dx}(S\frac{d\bar{G}_{\text{2D}}(x,x^{\prime})}{dx})+k_{x}^ {2}\bar{G}_{\text{2D}}(x,x^{\prime})=\mathcal{F}\delta(x-x^{\prime}), \tag{18}\]
where \(\mathcal{F}\), a function to be determined, accounts for the fact that the source is 2D (differently than in the 1D model). Following the results of [25], obtained in a box model of constant cross-sectional area, we have that \(\mathcal{F}|_{x^{\prime}}\propto\alpha(x^{\prime})\). Because in our tapered model the area changes with location, we further need to determine whether there are systematic differences between 1D, 2D and 3D sources that change with the cross-sectional area. In this regard, we note that a 2D point source is \(s_{\text{2D}}=\delta(x-x^{\prime})\delta(y)\) while a one-dimensional one is \(s_{\text{1D}}=\delta(x-x^{\prime})\); their source strength, averaged over the cross-sectional area of a two-dimensional model, is a factor \(H(x^{\prime})\) larger in 1D than in 2D [31]. Likewise, a 3D source is \(s_{\text{3D}}=\delta(x-x^{\prime})\delta(y)\delta(z)\), whose strength is a factor \(S(x^{\prime})\) smaller than that of a 1D one.
Based on these considerations, and further noting that \(S(x^{\prime})\propto H(x^{\prime})\) in 2D (so that we can write equations that are valid in both 2D and 3D models), we conclude that \(\mathcal{F}\approx\alpha(x^{\prime})/S(x^{\prime})\):
\[\bar{G}_{\rm 2D}(x,x^{\prime})\approx\frac{\alpha(x^{\prime})}{S(x^{\prime})}G_{\rm 1D}(x,x^{\prime}). \tag{10}\]
We now define \(\hat{G}_{\rm 2D}(x,x^{\prime})=G_{\rm 2D}(x,0,x^{\prime},0)\), where \(G_{\rm 2D}(x,y,x^{\prime},y^{\prime})\) is the "true" 2D Green's function, i.e., the function that describes the pressure response at \((x,y)\) to a unit point source at \((x^{\prime},y^{\prime})\). Exploiting the definition of \(\alpha\) [which gives \(\hat{G}_{\rm 2D}(x,x^{\prime})=\alpha(x)\bar{G}_{\rm 2D}(x,x^{\prime})\)] we have that
\[\hat{G}_{\rm 2D}(x,x^{\prime})\approx\] \[\frac{\alpha(x)\alpha(x^{\prime})}{2i}\sqrt{\frac{1}{k(x)k(x^{ \prime})S(x^{\prime})S(x)}}\exp[-i\int\limits_{\min(x,x^{\prime})}^{\max(x,x^{ \prime})}k(\hat{x})d\hat{x}], \tag{11}\]
where it can be appreciated that \(\hat{G}_{\rm 2D}(x,x^{\prime})=\hat{G}_{\rm 2D}(x^{\prime},x)\). The same Green's function holds for the simple 3D model [\(G_{\rm 3D}(x,x^{\prime})\approx\hat{G}_{\rm 2D}(x,x^{\prime})\)], keeping in mind the caveats regarding the cross-sectional areas in the two models.
_Non-ideal boundary conditions and numerical solutions._ The solutions for the Green's function shown above are obtained under the assumption that there is no significant scattering from the basal and apical boundaries. While this is a good approximation for the apical boundary--traveling waves are dramatically attenuated before reaching it [26]--the same is not true for the basal boundary, where any impedance mismatch at the stapes backscatters a significant fraction of the wave power [32]. When the wave frequency is sufficiently lower than the CF near the stapes (so that we can assume long-wave behavior near the stapes), we can easily add back the effect of wave scattering and calculate the WKB approximation for the non-idealized Green's function, denoted \(\hat{G}^{\rm st}_{\rm 2D}\) below:
\[\begin{split}\hat{G}^{\rm st}_{\rm 2D}(x,x^{\prime})=&\hat{G}_{\rm 2D}(x,x^{\prime})+\\ +& R_{\rm st}\hat{G}_{\rm 2D}(0,x^{\prime})\alpha(x)\sqrt{\frac{S(0)k(0)}{S(x)k(x)}}\exp\Big[-i\int\limits_{0}^{x}k(\hat{x})d\hat{x}\Big],\end{split} \tag{12}\]
where \(R_{\rm st}\) is the complex reflectance of the stapes [32]. The second term on the right side of Eq. (12) represents a wave traveling from the base to the apex, generated by the pressure reflected from the stapes [\(R_{\rm st}\hat{G}_{\rm 2D}(0,x^{\prime})\)].
_Numerical and semi-analytical calculations._ We cross-checked the quality of our calculations by comparing the 2D WKB approximation of the Green's function against numerical calculations performed in a 2D finite-difference tapered model [33, 34], some of which are shown in Fig. 3A. Because calculating the WKB approximation of \(\alpha\) requires iterative methods that introduce various inaccuracies, we calculate \(\alpha\) numerically, driving the finite-difference model from the stapes. Figure 3B shows the WKB solution of the Green's function for a 3D model with the same wavenumber (\(k\)) and height (\(H\)) as the 2D model in Fig. 3A. While the agreement between the WKB approximation and the numerical solution is in most cases excellent, the WKB approximation can introduce spurious but egregious errors (due to the non-uniqueness of the WKB solution in the cut-off region [35]), making the calculations noisy, especially at high frequencies. For this reason, in the main text we present results obtained with the 2D finite-difference model--the differences between the 2D and 3D models are relatively minor, although it is worth mentioning that in 3D the enhancement factors are slightly larger thanks to the more dramatic tapering of the cross-sectional area in 3D than in 2D models.
## Appendix B Modeling details
We performed all calculations using an overturned model of the mouse cochlea [28], whose parameters are the same as used in [13]. Unlike classic models, in which the organ of Corti does not deform, in this model the transverse (up-down) velocity of the center of mass is \(V_{\rm CP}=(V_{\rm BM}+V_{\rm top})/2\), where \(V_{\rm BM}\) and \(V_{\rm top}\) indicate the velocities of the bottom (BM) and top side (the reticular lamina and tectorial membrane) of the organ of Corti; their differential velocity is \(V_{\rm int}=V_{\rm top}-V_{\rm BM}\). Postmortem, \(V_{\rm BM}\) and \(V_{\rm top}\) are similar, so that to a first approximation \(V_{\rm int}\approx 0\) in a passive cochlea,
Figure 3: A) Example of Green’s function for a 2D model with reflective basal boundary (\(|R_{\rm st}|\approx 0.14\)), calculated numerically in a finite difference model (solid line) or with the WKB approximation [Eqs. (11,12), dashed lines]. The source locations for the various curves are indicated with vertical arrows; the source frequency is 10 kHz. B) Approximate Green’s function for a simplified 3D model (see text).
while \(V_{\rm int}\neq 0\) in vivo. The organ of Corti velocity can be rewritten in the compact form \(V_{\rm CP}=V_{\rm BM}+V_{\rm int}/2\), where \(V_{\rm int}\) is attributed to the piezo-electric action of OHCs, and is effectively the (velocity) source of wave amplification in the model.
Because the BM stiffness is about one order of magnitude larger than that of the structures surrounding the OHCs, OHC forces produce large displacements of the top side of the organ of Corti while having secondary effects on local BM motion [36; 37]. We therefore assume that internal OHC forces have negligible effects on BM motion, so that the mechanical admittance of the BM (\(Y_{\rm BM}=V_{\rm BM}/P_{0}\)) is constant, independent of whether the cochlear amplifier is turned "on" or "off". For simplicity we assume that \(Y_{\rm BM}\) represents the admittance of a damped harmonic oscillator and, exploiting the relation between \(V_{\rm CP}\), \(V_{\rm BM}\) and \(V_{\rm int}\), we express the admittance of the organ of Corti as \(Y_{\rm CP}=Y_{\rm BM}(1+0.5V_{\rm int}/V_{\rm BM})\). Following previous results we assume that in vivo at low sound levels \(0.5V_{\rm int}/V_{\rm BM}\approx i\beta\tau\), with \(\beta=f/{\rm CF}\) the normalized frequency and \(\tau\) a (real) constant. Following [19] we assume that \(Y_{\rm CP}\) is scaling symmetric (identical when expressed as a function of the normalized frequency \(\beta\)) everywhere in the cochlea [38].
|
2301.13159 | Spectral properties of the Laplacian of temporal networks following a
constant block Jacobi model | We study the behavior of the eigenvectors associated with the smallest
eigenvalues of the Laplacian matrix of temporal networks. We consider the
multilayer representation of temporal networks, i.e. a set of networks linked
through ordinal interconnected layers. We analyze the Laplacian matrix, known
as supra-Laplacian, constructed through the supra-adjacency matrix associated
with the multilayer formulation of temporal networks, using a constant block
Jacobi model which has closed-form solution. To do this, we assume that the
inter-layer weights are perturbations of the Kronecker sum of the separate
adjacency matrices forming the temporal network. Thus we investigate the
properties of the eigenvectors associated with the smallest eigenvalues (close
to zero) of the supra-Laplacian matrix. Using arguments of perturbation theory,
we show that these eigenvectors can be approximated by linear combinations of
the zero eigenvectors of the individual time layers. This finding is crucial in
reconsidering and generalizing the role of the Fiedler vector in
supra-Laplacian matrices. | Zhana Kuncheva, Ognyan Kounchev | 2023-01-12T19:32:46Z | http://arxiv.org/abs/2301.13159v1 | # Spectral properties of the Laplacian of temporal networks following a constant block Jacobi model
###### Abstract
We study the behavior of the eigenvectors associated with the smallest eigenvalues of the Laplacian matrix of temporal networks. We consider the multilayer representation of temporal networks, i.e. a set of networks linked through ordinal interconnected layers. We analyze the Laplacian matrix, known as supra-Laplacian, constructed through the supra-adjacency matrix associated with the multilayer formulation of temporal networks, using a constant block Jacobi model which has closed-form solution. To do this, we assume that the inter-layer weights are perturbations of the Kronecker sum of the separate adjacency matrices forming the temporal network. Thus we investigate the properties of the eigenvectors associated with the smallest eigenvalues (close to zero) of the supra-Laplacian matrix. Using arguments of perturbation theory, we show that these eigenvectors can be approximated by linear combinations of the zero eigenvectors of the individual time layers. This finding is crucial in reconsidering and generalizing the role of the Fiedler vector in supra-Laplacian matrices.
## 1 Introduction
In recent years, one of the major lines of research in complex network analysis concerns the topological changes that occur in a network over time. A sequence of networks with such a time-varying nature can be formalized as a _temporal network_ Holme and Saramaki (2012). The _multilayer formulation_ of temporal networks Kivela et al. (2014) is one way to consider the interconnected topological structure changing over time: _ordinal_ interconnections between layers determine how a given node in one layer and its counterparts in the previous and next time-point layers are linked and influence each other. The network analysis community has strong traditions in using the spectral properties Moreno and Arenas (2013), Sol et al. (2013) of multilayer networks for various purposes, such as centrality measures De Domenico et al. (2016) or investigating diffusion processes Sol et al. (2013).
One challenge associated with understanding the spectral properties of temporal networks is the lack of available tools that respect the fundamental distinction between within-layer and inter-layer edges Kivela et al. (2014), Taylor et al. (2015), De Domenico et al. (2013) when studying the spectral properties of the Laplacian matrix \(\mathcal{L}\) of temporal networks, known as the supra-Laplacian. A number of investigations were undertaken to show that the inter-layer couplings in multilayer networks distort those spectral properties and to explain the effect of different inter-layer weights on the eigenvalues of the supra-Laplacian Moreno and Arenas (2013), Sol et al. (2013). To the best of our knowledge, there is no work related to understanding the information carried by the eigenvectors corresponding to the smallest eigenvalues of the supra-Laplacian.
Spectral analysis of a network is nowadays understood as the study of the spectral properties of the various Laplacian matrices defined on the network. In particular, for the so-called normalized Laplacian, the most interesting objects are usually the smallest eigenvalues and their eigenvectors.
For a Laplacian matrix, the eigenvector corresponding to the smallest eigenvalue, \(\lambda_{1}=0\), is constant, or weighted by the node degrees if the Laplacian is normalized Chung (1996). The eigenvector corresponding to the smallest non-zero eigenvalue, known as the algebraic connectivity, is in practice used for partitioning purposes Luo et al. (2002); Luxburg (2007) and is known as the Fiedler vector. In this article, we consider _slowly-changing temporal networks_, meaning that the adjacency matrices forming the different time layers change relatively slowly Enright and Kao (2018). The main objective of the present paper is to draw maximal profit from this important property, which holds for the majority of temporal networks. In particular, every temporal network exhibits this behavior over a sufficiently small time interval.
Further, we add inter-layer weights to the temporal network, which may be considered _perturbations_ of the Kronecker sum of the separate adjacency matrices forming the different time layers, and we consider the Laplacian of the resulting matrix, usually called the supra-Laplacian Kivela et al. (2014). This point of view on temporal networks allows us to find an approximate closed-form solution for the eigenvectors corresponding to the smallest eigenvalues of the supra-Laplacian. In particular, by applying arguments from perturbation theory, we show that the eigenvectors corresponding to the smallest eigenvalues (of the supra-Laplacian) are well approximated by the space of the perturbed eigenvectors corresponding to all zero eigenvalues of the Laplacian matrices of the networks in the separate time layers.
The paper is organised as follows: in Sec. 2, we present the construction of a temporal network following a constant block Jacobi model. This model appears naturally as a first-order approximation to a slowly-changing temporal network and enjoys a closed-form solution for the eigenvectors of the supra-Laplacian matrix; in Sec. 3 we investigate the spectral properties of the supra-Laplacian and obtain an eigenvector solution of the reduced system; Sec. 4 is devoted to identifying the smallest eigenvectors, obtained by perturbation of the zero eigenvectors of the separate time layers, and to discussing the influence of density and number of layers on these eigenvectors; finally, we state the conclusions.
## 2 Temporal network following constant block Jacobi model: notations and definitions
A _temporal network_ is a set of networks in which edges and nodes vary in time. In this work, we make the assumption that each node \(i\) is present in all layers. We use the notation \(G^{t}\) for a layer in an ordered sequence of \(T\) networks \(\mathcal{T}=\left\{G^{1},G^{2},...,G^{T}\right\}\) with \(G^{t}=\left(V,A^{t}\right)\) where \(t\in\left\{1,2,...,T\right\}\) and the number of nodes is \(N\), i.e. \(N=\left|V\right|.\) Here \(A^{t}\) is a binary undirected and connected adjacency matrix. In order to use the multilayer framework for representing a _temporal network_, we consider the _diagonal ordinal coupling_ of layers Kivela et al. (2014); Bassett et al. (2011); Mucha et al. (2010), to define a new supra-network \(\widetilde{\mathcal{T}}\). We define the coupling edges: we denote by \(\omega_{i}^{t,p}\in\mathbb{R}\) the value of the inter-layer edge weight between node \(i\) in different time layers \(t\) and \(p\). Our main assumption is that only neighbouring layers may be connected, i.e. \(\omega_{i}^{t,p}=0\) for all layers \(G^{t}\) and \(G^{p}\), with \(p\neq t-1\) and \(p\neq t+1\). No other edges between \(G^{t}\) and \(G^{p}\) exist for indices \(t\neq p\).
As a result, the multilayer framework of the temporal network is expressed in an \(NT\)-node single adjacency matrix \(\mathcal{A}\) of size \(NT\times NT\) which is simply the adjacency matrix of the network \(\widetilde{\mathcal{T}}\), referred to as _supra-adjacency matrix_. Clearly, the diagonal blocks of \(\mathcal{A}\) are the adjacency matrices \(A^{t}\), and the off-diagonal blocks are the inter-layer weight matrices \(W^{t,p}=diag(\omega_{1}^{t,p},\omega_{2}^{t,p},...,\omega_{N}^{t,p})\) if \(p=t-1\) or \(p=t+1\).
The usual within-layer degree of node \(i\) in layer \(G^{t}\) is defined as \(d_{i}^{t}:=\sum_{j=1}^{N}A_{ij}^{t}\) while the multilayer node degree of node \(i\) in layer \(G^{t}\) is \(\mathfrak{d}_{i}^{t}:=d_{i}^{t}+\omega_{i}^{t,t-1}+\omega_{i}^{t,t+1}\). Define the degree matrix \(\mathcal{D}\) as \(\mathcal{D}:=\text{diag}\left(\mathfrak{d}_{1}^{1},\mathfrak{d}_{2}^{1},..., \mathfrak{d}_{N}^{1},\mathfrak{d}_{1}^{2},...,\mathfrak{d}_{N}^{2},..., \mathfrak{d}_{N}^{2}\right)\). The _normalized supra-Laplacian_\(\mathcal{L}\) is defined as \(\mathcal{L}:=\mathcal{D}^{-\frac{1}{2}}\left(\mathcal{D}-\mathcal{A}\right) \mathcal{D}^{-\frac{1}{2}}\)Chung (1996).
The supra-adjacency matrix \(\mathcal{A}^{0}\) with \(0\) inter-layer weights and its corresponding Laplacian matrix \(\mathcal{L}^{0}\) are directly expressed as a Kronecker sum:
\[\mathcal{A}^{0}:=\oplus_{t=1}^{T}A^{t}\longrightarrow\mathcal{L}^{0}=\oplus_{ t=1}^{T}L^{t} \tag{1}\]
where \(L^{t}\) is the normalized Laplacian of network \(G^{t}\).
From spectral graph theory Chung (1996), we know that, due to the connectedness of \(A^{t}\), for every time point \(t\) the solution to \(L^{t}v_{1}^{t}=0\) corresponds to the first eigenvalue \(\lambda_{1}^{t}=0\), which has multiplicity one, and the corresponding eigenvector is \(v_{1}^{t}=(D^{t})^{\frac{1}{2}}\mathbf{1}\) (up to normalization), where \(\mathbf{1}\) is the constant one vector and \(D^{t}\) is the degree matrix of the adjacency matrix \(A^{t}\).
Hence, the equation \(\mathcal{L}^{0}v=0\) has a \(T-\)dimensional subspace of solutions and we find its basis explicitly: namely, for every \(t\) we define the column vector \(V^{t}\in\mathbb{R}^{NT}\), as a zero-padded vector with \(v_{1}^{t}\) at the position of the \(t^{th}\) block. Thus, all solutions to \(\mathcal{L}^{0}v=0\) are given by \(v=\sum_{t=1}^{T}\alpha_{t}V^{t}\) for arbitrary constants \(\alpha_{t}\).
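This construction is straightforward to verify numerically; the sketch below builds \(\mathcal{L}^{0}\) for random connected layers (an illustrative choice) and checks that it has exactly \(T\) zero eigenvalues with the zero-padded eigenvectors \(V^{t}\).

```python
# Verifying that the uncoupled supra-Laplacian L0 = L^1 (+) ... (+) L^T has a
# T-dimensional null space spanned by the zero-padded vectors V^t carrying
# (D^t)^(1/2) 1 in the t-th block. The random layers are illustrative.
import numpy as np

rng = np.random.default_rng(1)
N, T = 12, 4

def random_connected_layer(n, p=0.5):
    """Symmetric binary adjacency matrix, resampled until connected."""
    while True:
        A = np.triu((rng.random((n, n)) < p).astype(float), 1)
        A = A + A.T
        if (np.linalg.matrix_power(np.eye(n) + A, n) > 0).all():
            return A

def normalized_laplacian(A):
    d = A.sum(axis=1)
    Dm = np.diag(d ** -0.5)
    return np.eye(len(A)) - Dm @ A @ Dm, d

pairs = [normalized_laplacian(random_connected_layer(N)) for _ in range(T)]

L0 = np.zeros((N * T, N * T))                    # Kronecker sum, zero coupling
for t, (Lt, _) in enumerate(pairs):
    L0[t*N:(t+1)*N, t*N:(t+1)*N] = Lt

eigvals = np.sort(np.linalg.eigvalsh(L0))
print("smallest T+1 eigenvalues:", eigvals[:T + 1])  # T zeros, then a gap

for t, (_, d) in enumerate(pairs):
    v = np.zeros(N * T)
    v[t*N:(t+1)*N] = d ** 0.5                    # V^t, unnormalized
    print(f"||L0 V^{t+1}|| = {np.linalg.norm(L0 @ v):.2e}")
```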
In this paper we consider an ideal case of a temporal network which is slowly-changing in time and hence well approximated by a temporal network following a _constant block Jacobi model_: we consider the case where \(A^{t}=A\) for all \(t\) and \(W^{t,p}=W\) for all \(t,p\). An important step in our construction is to "periodize" the temporal network, which guarantees a nice closed-form solution for the resulting network. This is not a very artificial step, since the slowly-changing assumption means that the network does not vary too much from the initial to the final layer. Namely, we construct a "periodic" supra-adjacency matrix \(\mathcal{A}\) and its corresponding supra-Laplacian matrix \(\mathcal{L}\) by including non-zero diagonal blocks in the upper-right and lower-left corner blocks; in other words, we include inter-layer weights between the first time layer \(A^{1}\) and the last time layer \(A^{T}\). The resulting matrix \(\mathcal{A}\) is a _periodic constant block Jacobi_ matrix, which gives the model its name. In view of the slowly-changing nature of the temporal network \(G^{t}\), the matrix \(\mathcal{A}\) is a perturbation of the matrix \(\mathcal{A}^{0}\) and \(\mathcal{L}\) is a perturbation of the matrix \(\mathcal{L}^{0}\).
Furthermore, the resulting supra-Laplacian matrix \(\mathcal{L}\) is given by the following \(T\times T\) block matrix, which has the structure of a periodic block Jacobi matrix Sahbani [2015]:
\[\mathcal{L}:=\underbrace{\left(\begin{array}{ccccc}\widetilde{L}&\widetilde{L}_{W}&&&\widetilde{L}_{W}\\ \widetilde{L}_{W}&\widetilde{L}&\widetilde{L}_{W}&&\\ &\widetilde{L}_{W}&\widetilde{L}&\ddots&\\ &&\ddots&\ddots&\widetilde{L}_{W}\\ \widetilde{L}_{W}&&&\widetilde{L}_{W}&\widetilde{L}\end{array}\right)}_{T\ \text{blocks}} \tag{2}\]
We note that if the same \(\omega\) is used in all matrices \(W\), then the blocks of the block-diagonal matrix \(\mathcal{D}\) are the matrices \(D^{t}+2\omega I\). Since for every \(t\) we have \(L^{t}=I-D^{-1/2}AD^{-1/2}\), and since the matrix \(D^{-1/2}AD^{-1/2}\) has entries \(d_{i}^{-1/2}d_{j}^{-1/2}a_{ij}\), we see that \(\widetilde{L}\) is a perturbation of \(L\): its off-diagonal entries are \(-\left(d_{i}+2\omega\right)^{-1/2}\left(d_{j}+2\omega\right)^{-1/2}a_{ij}\) instead of \(-d_{i}^{-1/2}d_{j}^{-1/2}a_{ij}\). Hence, written formally, we have the equality
\[\widetilde{L}=I-\left(D+2\omega I\right)^{-1/2}A\left(D+2\omega I\right)^{-1/2}\]
On the other hand, the matrix \(\widetilde{L}_{W}\) appearing in equation (2) is equal to \(-\omega\left(D+2\omega I\right)^{-1}\).
The big advantage of the constant block Jacobi model is that we can find its spectrum "explicitly", as we discuss in the next sections.
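As a consistency check of the definitions above, the following sketch assembles \(\mathcal{L}\) from the blocks \(\widetilde{L}\) and \(\widetilde{L}_{W}\) and compares it with the normalized supra-Laplacian computed directly from the periodic supra-adjacency matrix; the layer and the weight \(\omega\) are illustrative assumptions.

```python
# Assembling the periodic constant block Jacobi supra-Laplacian of Eq. (2)
# from a single layer A and a uniform inter-layer weight omega, and checking
# it against the direct definition D^(-1/2) (D - A_supra) D^(-1/2).
import numpy as np

rng = np.random.default_rng(2)
N, T, omega = 8, 5, 0.3

A = np.triu((rng.random((N, N)) < 0.4).astype(float), 1)
A = np.maximum(A, np.eye(N, k=1))                 # path edges keep A connected
A = A + A.T
d = A.sum(axis=1)
Dm = np.diag((d + 2 * omega) ** -0.5)

L_diag = np.eye(N) - Dm @ A @ Dm                  # block L~
L_coup = -omega * np.diag(1.0 / (d + 2 * omega))  # block L~_W

# block-circulant assembly: L~ on the diagonal, L~_W on the neighbouring
# (and corner) blocks
L = np.zeros((N * T, N * T))
for t in range(T):
    L[t*N:(t+1)*N, t*N:(t+1)*N] = L_diag
    for p in ((t - 1) % T, (t + 1) % T):
        L[t*N:(t+1)*N, p*N:(p+1)*N] = L_coup

# direct construction from the periodic supra-adjacency matrix
A_sup = np.zeros((N * T, N * T))
for t in range(T):
    A_sup[t*N:(t+1)*N, t*N:(t+1)*N] = A
    for p in ((t - 1) % T, (t + 1) % T):
        A_sup[t*N:(t+1)*N, p*N:(p+1)*N] = omega * np.eye(N)
Dm_sup = np.diag(A_sup.sum(axis=1) ** -0.5)
L_direct = np.eye(N * T) - Dm_sup @ A_sup @ Dm_sup
print("max deviation between the two constructions:",
      abs(L - L_direct).max())
```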
## 3 Smallest eigenvalues and paired eigenvectors of the supra-Laplacian \(\mathcal{L}\) of temporal networks following the constant block Jacobi model
As we know from spectral graph theory Chung [1996], the eigenvalues of the Laplacian \(L^{t}\) and of the supra-Laplacian \(\mathcal{L}\) are non-negative, and the minimal eigenvalue is \(0\), as mentioned above. As usual, in applications the small eigenvalues and the corresponding eigenvectors are of particular importance. By perturbation theory, the eigenvalues that are very close to \(0\) are obtained as direct perturbations of the \(0\) eigenvalues of the separate time-layer Laplacian matrices \(L^{t}\), and the same holds for their paired eigenvectors. On the other hand, the eigenvectors paired with the larger eigenvalues are obtained as perturbations not only of the \(0\) eigenvectors of the separate matrices \(L^{t}\) but also of the Fiedler (and higher) eigenvectors of the separate matrices \(L^{t}\).
The eigenvalue problem for the supra-Laplacian \(\mathcal{L}\) in equation (2) reads:
\[\mathcal{L}\psi=\lambda\psi \tag{3}\]
and to solve it we apply a classical technique based on discrete Fourier transforms (DFTs); see e.g. Sahbani [2015]. To do this, we represent each vector \(\psi\in\mathbb{R}^{NT}\) as the sequence of vectors \([\psi_{1},\psi_{2},...,\psi_{T}]\), where each vector \(\psi_{j}\) is the portion of the eigenvector \(\psi\) corresponding to the \(j^{th}\) time block. Then equation (3) splits into the equations
\[\widetilde{L}_{W}\psi_{j-1}+\widetilde{L}\psi_{j}+\widetilde{L}_{W}\psi_{j+1} =\lambda\psi_{j}\qquad\text{for }j=1,2,...,T \tag{4}\]
where for the sake of notation simplicity we have put
\[\psi_{0}=\psi_{T},\qquad\psi_{T+1}=\psi_{1}.\]
For \(k=0,1,2,...,T-1,\) we denote the DFT of the vector \(\psi\) at value \(k\) by \(\widehat{\psi}(k)\in\mathbb{C}^{N},\) and put
\[\widehat{\psi}(k):=\sum_{j=0}^{T-1}e^{-ijk\frac{2\pi}{T}}\psi_{j+1}. \tag{5}\]
It is important that from the set of DFT vectors \(\{\widehat{\psi}\left(k\right)\}_{k=0}^{T-1}\) we may recover the whole vector \(\psi\in\mathbb{R}^{NT}\) using the Fourier inversion formula:
\[\psi_{j+1}=\frac{1}{T}\sum_{k=0}^{T-1}\widehat{\psi}(k)e^{ijk\frac{2\pi}{T}},\qquad j=0,1,...,T-1. \tag{6}\]
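For readers who wish to experiment, the block DFT (5) and the inversion formula (6) take only a few lines; the sketch below (illustrative numpy code, not from the paper) uses 0-based block indexing, so row \(j\) of the array stores the block \(\psi_{j+1}\):

```python
import numpy as np

def block_dft(psi):
    """DFT (5) of a block vector: psi has shape (T, N), with row j holding
    psi_{j+1}; returns psi_hat with psi_hat[k] = sum_j e^{-ijk 2pi/T} psi[j]."""
    T = psi.shape[0]
    jk = np.outer(np.arange(T), np.arange(T))
    return np.exp(-2j * np.pi * jk / T) @ psi

def block_idft(psi_hat):
    """Inversion formula (6): psi[j] = (1/T) sum_k psi_hat[k] e^{ijk 2pi/T}."""
    T = psi_hat.shape[0]
    jk = np.outer(np.arange(T), np.arange(T))
    return np.exp(2j * np.pi * jk / T) @ psi_hat / T

# round-trip check on a random block vector with T = 6 blocks of size N = 4
psi = np.random.default_rng(1).normal(size=(6, 4))
assert np.allclose(block_idft(block_dft(psi)), psi)
```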
Now by applying the DFT (5) to equations (4) (i.e. by multiplying by exponents and summing up the equations), we obtain the fundamental equations satisfied by the DFT of the vector \(\psi\) defined in formula (5):
\[\left[\widetilde{L}+2\cos\left(k\frac{2\pi}{T}\right)\widetilde{L }_{W}\right]\widehat{\psi}\left(k\right)=\lambda\widehat{\psi}\left(k\right) \tag{7}\] \[\text{for}\quad k=0,1,...,T-1.\]
The following theorem justifies the application of the DFTs for solving the system (3):
**Theorem 1**: _The spectrum (with multiplicities) of the supra-Laplacian \(\mathcal{L}\) in equation (2) of a temporal network following a periodic constant block Jacobi model coincides with the union of the spectra of the matrices \(\widetilde{L}+2\cos\left(k\frac{2\pi}{T}\right)\widetilde{L}_{W},\) i.e._
\[spec\left(\mathcal{L}\right)=\cup_{k=0}^{T-1}spec\left(\widetilde{L}+2\cos \left(k\frac{2\pi}{T}\right)\widetilde{L}_{W}\right) \tag{8}\]
**Proof 2**: _First, we prove the inclusion_
\[spec\left(\mathcal{L}\right)\subseteq\cup_{k=0}^{T-1}spec\left(\widetilde{L} +2\cos\left(k\frac{2\pi}{T}\right)\widetilde{L}_{W}\right).\]
_Indeed, by the above arguments, if we have an eigenvalue \(\lambda\) with eigenvector \(\psi\) solving system (4), then for every \(k\) with \(0\leq k\leq T-1\) we have equation (7), i.e._
\[\left[\widetilde{L}+2\cos\left(k\frac{2\pi}{T}\right)\widetilde{L}_{W}\right] \widehat{\psi}\left(k\right)=\lambda\widehat{\psi}\left(k\right).\]
_Hence, \(\lambda\) is an eigenvalue of the matrix \(\widetilde{L}+2\cos\left(k\frac{2\pi}{T}\right)\widetilde{L}_{W}\) with eigenvector \(\widehat{\psi}\left(k\right)\) for every \(k\) with \(\widehat{\psi}(k)\neq 0\); since \(\psi\neq 0\), at least one such \(k\) exists. Now, we prove the opposite inclusion:_
\[\cup_{k=0}^{T-1}spec\left(\widetilde{L}+2\cos\left(k\frac{2\pi}{T}\right) \widetilde{L}_{W}\right)\subseteq spec\left(\mathcal{L}\right).\]
_Assume that \(\lambda^{\star}\) is an eigenvalue with eigenvector \(v^{\star}\) for the matrix \(\widetilde{L}+2\cos\left(k\frac{2\pi}{T}\right)\widetilde{L}_{W}\), i.e._
\[\left[\widetilde{L}+2\cos\left(k\frac{2\pi}{T}\right)\widetilde{L}_{W}\right] v^{\star}=\lambda^{\star}v^{\star}.\]
_We define the vector \(\varphi\in\mathbb{R}^{NT}\) by putting_
\[\varphi_{k+1} =v^{\star}\] \[\varphi_{m} =0\qquad\text{for }m\neq k+1,m=1,2,...,T.\]
_By the inversion formula (6) we define the vector_
\[\psi_{j}:=\varphi_{k+1}e^{ijk\frac{2\pi}{T}}\qquad\text{for }j=1,2,...,T.\]
_We show that it satisfies the eigenvalue equation (4) since_
\[\widetilde{L}_{W}\psi_{j-1}+\widetilde{L}\psi_{j}+\widetilde{L}_{W}\psi_{j+1}= \lambda^{\star}\psi_{j}\]
_i.e._
\[e^{i\left(j-1\right)k\frac{2\pi}{T}}\widetilde{L}_{W}v^{\star}+e^{ijk\frac{2 \pi}{T}}\widetilde{L}v^{\star}+e^{i\left(j+1\right)k\frac{2\pi}{T}} \widetilde{L}_{W}v^{\star}=\lambda^{\star}e^{ijk\frac{2\pi}{T}}v^{\star}\]
_But the last is equivalent to equation_
\[e^{-ik\frac{2\pi}{T}}\widetilde{L}_{W}v^{\star}+\widetilde{L}v^{\star}+e^{ik \frac{2\pi}{T}}\widetilde{L}_{W}v^{\star}=\lambda^{\star}v^{\star}\]
_hence, to the equation \(\widetilde{L}v^{\star}+2\cos\left(k\frac{2\pi}{T}\right)\widetilde{L}_{W}v^{\star}=\lambda^{\star}v^{\star}\), which was our assumption. This completes the proof._
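Theorem 1 is also easy to verify numerically. The sketch below (illustrative only; it assumes a single Erdos-Renyi layer repeated over all \(T\) time layers and a constant weight \(\omega\), with the blocks defined as above) assembles the periodic supra-Laplacian (2) and compares its spectrum with the union in (8):

```python
import numpy as np

rng = np.random.default_rng(0)
N, T, p, omega = 20, 7, 0.3, 1.0

# one Erdos-Renyi layer, shared by all T time layers (constant model)
A = np.triu((rng.random((N, N)) < p).astype(float), 1)
A = A + A.T
d = A.sum(axis=1)
s = 1.0 / np.sqrt(d + 2 * omega)
L_t = np.eye(N) - s[:, None] * A * s[None, :]        # L~
L_W = -omega * np.diag(1.0 / (d + 2 * omega))        # L~_W

# periodic supra-Laplacian (2): block tridiagonal with corner blocks
L = np.zeros((N * T, N * T))
for t in range(T):
    nxt = (t + 1) % T                                # wraps the last layer to the first
    L[t*N:(t+1)*N, t*N:(t+1)*N] = L_t
    L[t*N:(t+1)*N, nxt*N:(nxt+1)*N] = L_W
    L[nxt*N:(nxt+1)*N, t*N:(t+1)*N] = L_W

direct = np.sort(np.linalg.eigvalsh(L))
via_thm = np.sort(np.concatenate(
    [np.linalg.eigvalsh(L_t + 2*np.cos(2*np.pi*k/T)*L_W) for k in range(T)]))
assert np.allclose(direct, via_thm, atol=1e-8)       # equation (8)
```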
In Figure 1 we display the first \(100\) eigenvalues of the matrices \(\widetilde{L}+2\cos\left(k\frac{2\pi}{T}\right)\widetilde{L}_{W}\) from equation (7), where we see that for every \(j\geq 1\), the \(j^{th}\) eigenvalue \(\lambda_{j}^{(k)}\) of these matrices is monotonically increasing with \(k\) for
\[0\leq k\leq\frac{T-1}{2}-1\text{ if }T\text{ is odd}\] \[\text{and}\] \[0\leq k\leq\frac{T}{2}-1\text{ if }T\text{ is even}.\]
The following proposition explains the behavior of the eigenvalues.
**Proposition 3**: _Without loss of generality assume that \(T\) is odd. Then the \(j^{th}\) eigenvalues of the matrices \(\widetilde{L}+2\cos\left(k\frac{2\pi}{T}\right)\widetilde{L}_{W}\) satisfy_
\[\lambda_{j}^{(0)}\leq\lambda_{j}^{(1)}\leq\cdots\leq\lambda_{j}^{\left(\frac {T-1}{2}-1\right)}.\]
_Proof 4_: _The proof of this proposition is a direct consequence of Theorem 8.1.5 in Golub and Van Loan (1996), which states that for symmetric matrices \(V\) and \(E\) of size \(N\times N,\) and for all eigenvalues \(\lambda_{j}\), \(j=1,2,...,N,\) the following inequalities hold:_
\[\lambda_{j}\left(V\right)+\lambda_{\min}\left(E\right)\leq\lambda_{j}\left(V+ E\right)\leq\lambda_{j}\left(V\right)+\lambda_{\max}\left(E\right). \tag{9}\]
Figure 1: **The 100 smallest eigenvalues of matrices \(\widetilde{L}+2\cos\left(k\frac{2\pi}{T}\right)\widetilde{L}_{W}\) for each \(k=0,1,2,...,29.\) The matrices \(\widetilde{L}\) and \(\widetilde{L}_{W}\) are obtained from a temporal benchmark network composed of \(T=30\) Erdos-Renyi random graphs each with \(N=100\) nodes and edge probability \(p=0.3\) (such dense consecutive ER networks are slowly-changing). The inter-layer weights \(\omega\) are fixed at \(1\). We include the additional plot of \(\cos\left(k\frac{2\pi}{T}\right)\) which determines the monotonically increasing behavior of eigenvalues corresponding to \(0\leq k\leq 14\) and monotonically decreasing behaviour of eigenvalues corresponding to \(15\leq k\leq 29\).**
_We take into account the fact that the eigenvalues of the diagonal matrix \(\widetilde{L}_{W}=-\omega\left(D+2\omega I\right)^{-1}\) are non-positive, since they are equal to \(-\omega/\left(d_{j}+2\omega\right)\) for the non-negative weights \(\omega_{j}^{t,p}\). In particular, if all of them are equal to a constant \(-c\) with \(c>0\), then we see that_
\[\lambda_{j}^{(k)}=\lambda_{j}(\widetilde{L})-2\cos\left(k\frac{2\pi}{T}\right)c,\] _which is increasing in \(k\) on the stated range, since the cosine factor is decreasing there._
_This completes the proof._
Now, by means of Theorem 1, we show how to construct a solution to the eigenvalue equation (3) by using equality (7): Fix \(k=\hat{k}\) and consider an eigenvector \(v\) with eigenvalue \(\hat{\lambda}\) solving the eigenvalue problem (7) for \(k=\hat{k}\). We assume that \(\hat{\lambda}\) is among the smallest eigenvalues, close to \(0\). We seek a block-vector \(\Psi=(\psi_{1},\psi_{2},...,\psi_{T})\in\mathbb{R}^{NT}\) for which \(\widehat{\Psi}\left(k\right)=\varphi_{k}\), where the block-vector \(\Phi=(\varphi_{1},...,\varphi_{T})\in\mathbb{R}^{NT}\) is defined as
\[\varphi_{k}:=\left\{\begin{array}{ll}v&\text{ for }k=\hat{k}\\ 0&\text{ for }k\neq\hat{k}\end{array}\right.\]
Now we apply the inversion formula (6) to the vector \(\Phi\), and obtain the block-vector \(\Psi\in\mathbb{C}^{NT}\) with components
\[\psi_{j}=e^{\frac{2\pi}{T}ij\hat{k}}v\qquad\text{for }j=0,1,...,T-1. \tag{10}\]
By construction \(\widehat{\Psi}(k)=\varphi_{k}=0\) for \(k\neq\hat{k}\), and \(\Psi\) is a solution to the eigenvalue equation (3) with the same \(\hat{\lambda}\). Since the vector \(\Psi\) is complex valued, we obtain two real-valued vectors (\(\in\mathbb{R}^{NT}\)) by taking the real and imaginary parts of \(e^{\frac{2\pi}{T}ij\hat{k}}\), namely:
\[\psi_{j}^{R} :=\cos\left(\frac{2\pi}{T}j\hat{k}\right)\times v\qquad\text{ for }j=0,1,...,T-1 \tag{11}\] \[\psi_{j}^{I} :=\sin\left(\frac{2\pi}{T}j\hat{k}\right)\times v\qquad\text{ for }j=0,1,...,T-1\]
In Figure 2 we visualise solutions (11) for \(\hat{k}=1,2,3\), accompanied by the corresponding plots of \(\cos(\frac{2\pi}{T}j\hat{k})\) and \(\sin(\frac{2\pi}{T}j\hat{k})\) for \(j=0,1,...,T-1\).
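The construction (10)-(11) can be verified in the same way; reusing `L`, `L_t`, `L_W` and `T` from the Theorem 1 sketch above, the following lines (again illustrative only) build the cosine and sine vectors for \(\hat{k}=1\) and confirm that both satisfy the eigenvalue equation (3):

```python
import numpy as np

def block_sinusoids(v, k_hat, T):
    """Real eigenvector candidates (11): psi_R and psi_I stack the blocks
    cos(2 pi j k_hat / T) v and sin(2 pi j k_hat / T) v for j = 0,...,T-1.
    For k_hat = 0 the sine branch is identically zero."""
    j = np.arange(T)[:, None]
    psi_R = (np.cos(2*np.pi*j*k_hat/T) * v[None, :]).ravel()
    psi_I = (np.sin(2*np.pi*j*k_hat/T) * v[None, :]).ravel()
    return psi_R, psi_I

k_hat = 1
lam, V = np.linalg.eigh(L_t + 2*np.cos(2*np.pi*k_hat/T)*L_W)
psi_R, psi_I = block_sinusoids(V[:, 0], k_hat, T)
assert np.allclose(L @ psi_R, lam[0] * psi_R, atol=1e-8)
assert np.allclose(L @ psi_I, lam[0] * psi_I, atol=1e-8)
```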
Generically, every eigenvalue in equation (7) appears in \(spec(\mathcal{L})\) with _even_ multiplicity, due to the equality of the matrices for \(k\) and \(T-k\), as indicated below:
\[\widetilde{L}+2\cos\left(k\frac{2\pi}{T}\right)\widetilde{L}_{W}= \widetilde{L}+2\cos\left(\left(T-k\right)\frac{2\pi}{T}\right) \widetilde{L}_{W}\] \[\text{ for }1\leq k\leq T-1,\ k\neq T/2;\]
the double multiplicity of the eigenvalues is clearly observed in Figure 1. In the case of odd \(T\), only the matrix for \(k=0\) is unpaired; for even \(T\), the matrices for \(k=0\) and \(k=T/2\) are unpaired, so their eigenvalues need not appear with even multiplicity. For \(\hat{k}=0\) we have one solution \(\Psi\) with \(\psi_{j}=v\) corresponding to the zero eigenvalue, \(\hat{\lambda}=0\).
By using the results of perturbation theory for invariant subspaces Golub and Van Loan (1996); Luxburg (2007) we see that for every eigenvalue with even multiplicity, we may estimate the perturbation of its eigenspace, i.e. the space of its eigenvectors. Thus we obtain the solutions which look like "block sinusoids" of \(\cos\) and \(\sin\) type, Figure 2. The perturbation of the two-dimensional space spanned by \(\cos\) and \(\sin\) type solutions, results in a two-dimensional space corresponding to the perturbed eigenvalue of the matrix \(\mathcal{L}\). These eigenvectors may differ from \(\cos\) or \(\sin\) type solutions.
The above theoretical results have a direct impact on the eigenvectors of the supra-Laplacian \(\mathcal{L}\), Figure 3. We show that the eigenvectors corresponding to the eigenvalues of the supra-Laplacian \(\mathcal{L}\) which are close to \(0\) are obtained by perturbation of the eigenvectors corresponding to the \(0\) eigenvalues of the separate layers \(L^{t}\), given by \((D^{t})^{\frac{1}{2}}\)**1**. Thus they do not carry any information about the finer structure of a layer, as the Fiedler vector does. These eigenvectors of \(\mathcal{L}\) only give us information about the \(T\) time layers being separate from each other. The bigger eigenvalues of \(\mathcal{L}\) have eigenvectors which are perturbations of mixtures of higher eigenvectors of the networks \(L^{t}\), i.e. they contain information from the Fiedler eigenvectors of the separate networks \(L^{t}\). We can conclude that only after the block nature of the constant block Jacobi model of the temporal network has been captured do the eigenvectors start capturing variability introduced by certain within-layer patterns, as is clearly seen in Figure 3.
Figure 2: **Eigenvector estimations for supra-Laplacian matrix \(\mathbf{\mathcal{L}}\).** This figure visualizes eigenvectors from equation (11) for \(\hat{k}=1,2,3\), each accompanied by the corresponding graph of the \(\cos\) and \(\sin\) functions. The eigenvector \(v\) corresponds to the eigenvalue \(\lambda=0\), which is a solution to the eigenvalue problem (7). The matrices \(\widetilde{L}\) and \(\widetilde{L}_{W}\) are obtained from a temporal network following the constant block Jacobi model composed of \(T=30\)**Erdos-Renyi random graphs** each with \(N=100\) nodes and edge probability \(p=0.3\). The inter-layer weights \(\omega\) are fixed at \(1\).
## 4 Properties of the eigenvectors corresponding to small eigenvalues of the supra-Laplacian \(\mathcal{L}\)
In this section we empirically showcase the theoretical results that eigenvectors corresponding to the small eigenvalues of \(\mathcal{L}\) are well-approximated by linear combinations of the eigenvectors (paired to the zero eigenvalue) of the separate layers. We investigate their behavior with respect to the edge density of the layers and the inter-layer weights.
### 4.1 Evaluating the approximation of the eigenvectors of \(\mathcal{L}\) using the eigenvectors of the separate time layers
Let \(\overline{\Lambda}\) be the set of smallest eigenvalues with paired eigenvectors well-approximated by the subspace of eigenvectors corresponding to the \(0\) eigenvalues for the separate layers. The theoretical results from Sec. 3 guarantee that the eigenvectors \(v\) corresponding to \(\lambda\in\overline{\Lambda}\) satisfy (see Sec. 2 for \(V^{t}\) def.)
\[\min_{\{\alpha_{t}\}}\left\|v-\sum_{t=1}^{T}\alpha_{t}V^{t}\right\|\leq\varepsilon \tag{12}\]
Figure 3: **Eigenvalues and eigenvectors for an Erdos-Renyi benchmark temporal network.** The Erdos-Renyi temporal benchmark network is composed of \(T=30\) random Erdos-Renyi graphs with \(N=100\) nodes and \(p=0.1\) edge probability. The inter-layer weights are set to \(\omega=0.01\). We plot the \(100\) smallest eigenvalues of the corresponding supra-Laplacian matrix, the \(6\) eigenvectors corresponding to the \(6\) smallest eigenvalues, and the \(35^{th}\) eigenvector. The jump in the eigenvalue graph indicates precisely the position of \(\lambda^{*}\) at index 31, and all following eigenvectors look like the plotted \(35^{th}\) eigenvector, which captures local variability. The colouring of each eigenvector is consistent with the components that belong to different time points.
for a small \(\varepsilon>0\); this is not true for the rest of the eigenvalues.
We evaluate the approximation of each eigenvector \(v\) of \(\mathcal{L}\) using the eigenvectors \(V^{t}\) of each time layer corresponding to the zero eigenvalue, by solving the least-squares regression problem in (12); here \(\varepsilon_{i}\) is the \(NT\times 1\) vector of residuals for the \(i^{th}\) eigenvector, and we denote the error at \(i\) by \(\epsilon_{i}:=\|\varepsilon_{i}\|\). Denote by \(\lambda^{*}\) the first eigenvalue \(\lambda_{i}\) for which \(\epsilon_{i}\gg\epsilon_{i-1}\).
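As an illustration of this procedure, once the per-layer zero-eigenvalue eigenvectors are embedded as block columns, (12) reduces to an ordinary least-squares problem. The following is a minimal numpy sketch (our own illustration; it assumes the normalised Laplacian convention, so the null vector of layer \(t\) is \((D^{t})^{1/2}\mathbf{1}\)):

```python
import numpy as np

def layer_null_basis(layer_degrees):
    """NT x T matrix whose t-th column is the normalised zero-eigenvalue
    eigenvector (D^t)^{1/2} 1 of layer t, embedded in block t, zero elsewhere."""
    T, N = len(layer_degrees), len(layer_degrees[0])
    V = np.zeros((N * T, T))
    for t, d in enumerate(layer_degrees):
        u = np.sqrt(np.asarray(d, dtype=float))
        V[t*N:(t+1)*N, t] = u / np.linalg.norm(u)
    return V

def approximation_errors(eigvecs, V):
    """Residual norms eps_i of the least-squares problem (12), one per column
    (eigenvector of the supra-Laplacian); lambda* sits at the first jump
    eps_i >> eps_{i-1}."""
    alpha, *_ = np.linalg.lstsq(V, eigvecs, rcond=None)
    return np.linalg.norm(eigvecs - V @ alpha, axis=0)
```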
### 4.2 Discussion on the relation between edge density, inter-layer weights and eigenvectors corresponding to the smallest eigenvalues
The present experimental results, in accordance with the developed theory, show that for the small eigenvalues of the supra-Laplacian \(\mathcal{L}\), the vectors \(\psi^{R}\) and \(\psi^{I}\) approximate the corresponding eigenvectors of \(\mathcal{L}\). In Figure 3 we observe the eigenvectors of the supra-Laplacian of a temporal network composed of random Erdos-Renyi graphs, Erdos and Renyi [1959]. The first few eigenvectors follow the same \(\sin\) and \(\cos\) functions as seen in Figure 2, and can thus be used to identify the first-order approximation of the temporal network by the constant block Jacobi model structure.
We investigate how the approximation of these eigenvectors is affected by the inter-layer weights and the density of the edge weights within each time layer. To showcase this, we simulate various benchmark temporal networks composed
Figure 4: **Error \(\epsilon_{i}\) of approximating supra-Laplacian eigenvectors (corresponding to eigenvalue \(\boldsymbol{\lambda_{i}}\) for \(\boldsymbol{i=1,2,3,...,TN}\)) by their separate time layers eigenvectors for the benchmark temporal network.** All of the benchmark temporal networks were simulated using \(T=30\)**random Erdos-Renyi graphs** with \(N=100\) nodes and varying edge probabilities \(p=0.03,0.04,0.05,0.08,0.1,0.3\). Each of the four plots captures the results for different inter-layer weights set to \(\omega=0.01,0.05,1,5\). For each parameter combination \((p,\omega)\) we simulate \(100\) networks and show their average error \(\epsilon_{i}\) with \(1\) st.dev. intervals. The obtained approximation average errors and st.dev. intervals are visualized for the first \(100\) eigenvectors, although at most \(T+1\) regressions are needed to capture all \(T\) layers as separate layers.
of random Erdos-Renyi networks with varying edge probabilities \(p=0.03,0.04,0.05,0.08,0.1,0.3\) and inter-layer weights \(\omega=0.01,0.05,1,5\), which are two factors that affect the approximation of the eigenvectors of the investigated supra-Laplacians \(\mathcal{L}\), Figure 4.
Recall that we have denoted by \(\lambda^{*}\) the smallest non-zero eigenvalue sensitive to within-layer connectivity patterns, i.e. breaking (12). For all benchmark network types, the value \(\lambda^{*}\) increases as \(\omega\) decreases: smaller inter-layer weights \(\omega\) lead to greater separation between time layers, so more eigenvectors behave as predicted by perturbation theory, and more eigenvectors are needed to explain each layer as separate. Conversely, higher inter-layer weights influence the resulting eigenvectors more strongly, and fewer of them behave as predicted by perturbation theory.
When the probability \(p\) increases, the density within the layers \(A^{t}\) increases. Since \(\omega\) is fixed, it does not scale with the increasing density of \(A^{t}\), and the perturbation effect of the inter-layer matrices \(W^{t,t+1}\) is relatively smaller. Thus for increasing \(p\), i.e. for increasing density, the behaviour of more eigenvectors closely resembles that predicted by perturbation theory.
When \(p\) decreases, the eigenvalue \(\lambda^{*}\) indicates that more eigenvectors closely resemble the behaviour predicted by perturbation theory. This is a result of the sparseness of the time layers and the correspondingly lower inter-layer weights \(\omega_{i}^{t,t+1}\). The above observations need further rigorous theoretical justification.
### 4.3 Relation between the multi-scale community structure of the layers of a supra-Laplacian network and its eigenvalues
It is important to note that in Figure 1 the first few eigenvalues capture the block structure of the temporal network following the constant block Jacobi model and are thus close to \(0\); after that, however, the eigenvalues increase monotonically without any clear cuts. From spectral graph partitioning Ding et al. (2001) we know that this is indicative of a lack of structure within the networks, which is the case here, where each layer is a densely connected Erdos-Renyi random graph
Figure 5: **The 100 smallest eigenvalues of matrices \(\tilde{L}+2\cos\left(k\frac{2\pi}{T}\right)\tilde{L}_{W}\) for each \(k=0,1,2,...,32\). The matrices \(\tilde{L}\) and \(\tilde{L}_{W}\) are obtained from a temporal network composed of \(T=33\) Sales-Pardo graphs each with \(N=640\) nodes. The inter-layer weights \(\omega\) are fixed at \(1\). We include the additional plot of \(\cos\left(k\frac{2\pi}{T}\right)\), which determines the monotonically increasing behaviour of the eigenvalues for \(0\leq k\leq 15\) and the monotonically decreasing behaviour for \(17\leq k\leq 32\).**
with no community structure. In Figure 5, we demonstrate the behavior of the supra-Laplacian eigenvalues when each of the layers has multi-scale community structure simulated using the Sales-Pardo model, Sales-Pardo et al. (2007). Again the smallest eigenvalues capture the block structure of the temporal network, however, there are clear eigenvalue cuts where a new multi-scale community structure within the layers is captured.
## 5 Conclusions
The above results are crucial in interpreting spectral clustering properties of the supra-Laplacian matrix of all slowly-changing temporal networks that can be represented using a constant block Jacobi model. We have provided experimental results with Erdos-Renyi (unstructured) networks and Sales-Pardo hierarchical networks. Further investigation of these theoretical results will lead to more insights into the spectral properties of supra-Laplacian matrices for more general temporal networks. As presented in the paper, the above findings provide a fundamental understanding of the spectral properties of temporal networks on time periods where they are slowly changing, which can significantly improve all spectral-based methods applied to temporal networks, such as partitioning, node ranking, community detection, clustering, etc. The above results were successfully used to extend a multiscale community detection method, Tremblay and Borgnat (2014), based on a spectral graph wavelets approach, Hammond et al. (2011), to temporal networks. The extended method, Kuncheva and Montana (2017), takes advantage of the developed theory to automatically detect the different scales at which communities exist across layers, which is an advantage over the multilayer modularity maximization approach, Mucha et al. (2010), used for similar purposes. The above experimental results have also been replicated on temporal Sales-Pardo hierarchical benchmark networks, which are suitable for multi-scale community detection. There is also a detailed investigation of using inter-layer weights that account for the sparsity and similarity across layers, Kuncheva (2017), including a real-life application example on social network data.
## 6 Acknowledgements
The author OK acknowledges the project KP-06-N52-1 with Bulgarian NSF. The author ZK acknowledges the project KP-06-N32-8 with Bulgarian NSF and EPSRC scholarship (2012-2016) at Imperial College London.
|
2304.11190 | Kinematics of Galactic Centre clouds shaped by shear-seeded solenoidal
turbulence | The Central Molecular Zone (CMZ; the central ~ 500 pc of the Galaxy) is a
kinematically unusual environment relative to the Galactic disc, with high
velocity dispersions and a steep size-linewidth relation of the molecular
clouds. In addition, the CMZ region has a significantly lower star formation
rate (SFR) than expected by its large amount of dense gas. An important factor
in explaining the low SFR is the turbulent state of the star-forming gas, which
seems to be dominated by rotational modes. However, the turbulence driving
mechanism remains unclear. In this work, we investigate how the Galactic
gravitational potential affects the turbulence in CMZ clouds. We focus on the
CMZ cloud G0.253+0.016 (`the Brick'), which is very quiescent and unlikely to
be kinematically dominated by stellar feedback. We demonstrate that several
kinematic properties of the Brick arise naturally in a cloud-scale
hydrodynamics simulation that takes into account the Galactic gravitational
potential. These properties include the line-of-sight velocity distribution,
the steepened size-linewidth relation, and the predominantly solenoidal nature
of the turbulence. Within the simulation, these properties result from the
Galactic shear in combination with the cloud's gravitational collapse. This is
a strong indication that the Galactic gravitational potential plays a crucial
role in shaping the CMZ gas kinematics, and is a major contributor to
suppressing the SFR by inducing predominantly solenoidal turbulent modes. | Maya A. Petkova, J. M. Diederik Kruijssen, Jonathan D. Henshaw, Steven N. Longmore, Simon C. O. Glover, Mattia C. Sormani, Lucia Armillotta, Ashley T. Barnes, Ralf S. Klessen, Francisco Nogueras-Lara, Robin G. Tress, Jairo Armijos-Abendaño, Laura Colzi, Christoph Federrath, Pablo García, Adam Ginsburg, Christian Henkel, Sergio Martín, Denise Riquelme, Víctor M. Rivilla | 2023-04-21T18:00:25Z | http://arxiv.org/abs/2304.11190v2 | # Kinematics of Galactic Centre clouds shaped by shear-seeded solenoidal turbulence
###### Abstract
The Central Molecular Zone (CMZ) is a kinematically unusual environment relative to the Galactic disc, with high velocity dispersions and a steep size-linewidth relation of the molecular clouds. In addition, the CMZ region has a significantly lower star formation rate (SFR) than expected by its large amount of dense gas. An important factor in explaining the low SFR is the turbulent state of the star-forming gas, which seems to be dominated by rotational modes. However, the turbulence driving mechanism remains unclear. In this work, we investigate how the Galactic gravitational potential affects the turbulence in CMZ clouds. We demonstrate that several kinematic properties of the CMZ cloud G0.253+0.016 ('the Brick') arise naturally in a cloud-scale hydrodynamics simulation that takes into account the Galactic gravitational potential. These properties include the line-of-sight velocity distribution, the steepened size-linewidth relation, and the predominantly solenoidal nature of the turbulence. Within the simulation, these properties result from the Galactic shear in combination with the cloud's gravitational collapse. This is a strong indication that the Galactic gravitational potential plays a crucial role in shaping the CMZ gas kinematics, and is a major contributor to suppressing the SFR by inducing predominantly solenoidal turbulent modes.
keywords: stars: formation - ISM: clouds - ISM: evolution - ISM: kinematics and dynamics - Galaxy: centre - galaxies: ISM
## 1 Introduction
The Central Molecular Zone (CMZ) is one of the most extreme star-forming environments in the Milky Way. The region contains a large reservoir of molecular gas (\(\sim 10^{7}\) M\({}_{\odot}\); Dahmen et al. 1998) within the innermost few hundred parsecs of the Galaxy, with temperatures (\(\sim 100\) K; Ao et al. 2013; Ginsburg et al. 2016; Krieger et al. 2017), column densities (\(\sim 10^{23}\) cm\({}^{-2}\); Molinari et al. 2011) and pressures (\(P/k>10^{7}\) K cm\({}^{-3}\); Rathborne et al. 2014; Walker et al. 2018; Myers et al. 2022) much higher than in the Solar neighbourhood (Kruijssen and Longmore 2013). Despite that, the region as a whole has a star formation rate (SFR) which is an order of magnitude lower than expected based on the large amount of dense gas (e.g. traced by NH\({}_{3}\); Longmore et al. 2013), and is likely due to a current minimum within an episodic star formation cycle (Kruijssen et al. 2014; Armillotta et al. 2019; Callanan et al. 2021). Sgr B2 accounts for at least 50% of all star formation activity in the CMZ (possibly up to 89%; Barnes et al. 2017; Ginsburg et al. 2018), leaving the rest of the clouds with quiescent to intermediate levels of star formation (Lu et al. 2019; Walker et al. 2021; Williams et al. 2022).
Interstellar medium (ISM) structure and star formation arise in response to the kinematic state of the gas (Henshaw et al. 2020). Therefore, the kinematics of the star-forming gas in the CMZ could help us understand the low SFR. The kinematics in the CMZ are also unusual, with high line-of-sight velocity dispersions and reports of a steep size-linewidth relation relative to the solar neighbourhood (Shetty et al. 2012; Kauffmann et al. 2017). These phenomena are (at least partially) attributed to the effects of turbulence, which is known to play an important role in shaping the ISM (Elmegreen and Scalo 2004; Mac Low and Klessen 2004). Turbulent motions consist of solenoidal and compressive modes that coexist at varied relative strength (see e.g. Federrath et al. 2010). The compressive turbulent modes can lead to fragmentation and star formation by creating shocks and overdensities, while the solenoidal modes can prevent gravitational collapse. Within the CMZ we have an indication of predominantly solenoidal turbulence driving (Federrath et al. 2016), which is likely linked to the suppressed SFR. Orkisz et al. (2017) found an inverse relation between the fraction of solenoidal modes in the velocity field of the gas and the SFR within Orion B. A later work by Rani et al. (2022) found the same type of relation for a large sample of Milky Way clouds at Galactocentric radii between 3\(-\)12 kpc.
Even though turbulence is likely responsible for the kinematic and physical state of the CMZ clouds, it is currently not understood what drives it. Based on energetic analysis of common turbulence driving mechanisms, the CMZ turbulence is most likely driven by supernova feedback, followed by gas inflow from the galactic bar and magnetorotational instabilities (Kruijssen et al. 2014; Henshaw et al. 2022). However, this type of analysis is sensitive to coupling parameters that determine what fraction of the total energy goes into turbulent motions, and these parameters are not very well constrained. Recent work by Tassis and Pavlidou (2022) suggested that the CMZ turbulence can be explained by feedback from massive stars with high vertical velocity dispersion that cross the clouds and deposit energy via stellar winds. The authors also demonstrated that this type of energy injection results in a steep size-linewidth relation.
An additional contribution to the gas turbulence may come from
the strong orbital shear resulting from the Galactic gravitational potential (Kruijssen et al., 2014; Krumholz and Kruijssen, 2015; Federrath et al., 2016; Meidt et al., 2018; Keto et al., 2020). This mechanism is expected to drive solenoidal turbulence within the gas, which is consistent with observational estimates (Federrath et al., 2016).
In this Letter, we investigate how the Galactic gravitational potential affects the turbulence in the CMZ clouds. In particular, we focus on the G0.253+0.016 cloud, also known as 'the Brick' (Longmore et al., 2012). This cloud is in the very early stages of star formation (e.g. Lis et al., 2019; Lu et al., 2019; Walker et al., 2021) and even though there is evidence that it may contain an H II region (Henshaw et al., 2022), its kinematics are not dominated by in-situ stellar feedback. We use a recent cloud-scale hydrodynamics simulation (Dale et al., 2019; Kruijssen et al., 2019; Petkova et al., 2021) which includes a model for the Galactic gravitational potential, and demonstrate that it matches very well the kinematic properties of the Brick. This analysis provides key predictions for the ongoing ALMA CMZ Exploration Survey (ACES) on the Atacama Large Millimeter/submillimeter Array (Longmore et al. in prep.), which will be able to characterise the driving mechanism(s) of turbulence in molecular clouds throughout the CMZ.
## 2 Simulation setup
We use the high-density (HDens) tidally-virialised simulation from Dale et al. (2019) (see their sect. 3 and tab. 1). Kruijssen et al. (2019) and Petkova et al. (2021) selected this particular model to represent the Brick as it best matches the properties of the cloud (i.e. size, column density and velocity dispersion). The simulation is performed with the smoothed particle hydrodynamics (SPH) code gandalf (Hubber et al., 2018). The simulation is three-dimensional, unmagnetised, and assumes an isothermal equation of state with temperature 65 K (Ao et al., 2013; Ginsburg et al., 2016; Krieger et al., 2017) and a mean molecular weight \(\mu=2.35\), corresponding to fully molecular gas. Self-gravity of the gas is included, whereas the field stars are included in the background potential (see below). The cloud is initialised as a sphere with total mass \(\sim 4.5\times 10^{5}\) M\({}_{\odot}\) and \(10^{6}\) SPH particles. The initial velocity field is turbulent with a power spectrum \(P(k)\propto k^{-4}\), and virial parameter \(\alpha_{\rm vir}=3.2\). These initial conditions are selected from a set of randomly generated velocity fields to have negative spin angular momentum with respect to the orbital motion, consistent with the shear observed upstream from the Brick.
The simulated cloud is evolved on an eccentric orbit around the Galactic Centre starting 0.41 Myr before the pericentre passage (see fig. 3 of Kruijssen et al., 2019) in the gravitational potential described in Appendix A of Kruijssen et al. (2015), which is based on the photometric model of Launhardt et al. (2002). Since no turbulence driving is included, the initial turbulent velocity field of the cloud is quickly dissipated (on a timescale \(\approx 0.56\) Myr; Kruijssen et al., 2019). Turbulence is generated during the simulation through gravitational collapse and shearing motions. Due to the lack of sufficient pressure support, the cloud fragments and forms sink particles (with threshold density of \(\rho_{\rm sink}=10^{-17}\)g cm\({}^{-3}\)). By the time the present-day position of the Brick is reached (after 0.74 Myr of evolution), \(\sim 55\%\) of the gas mass is transformed into sink particles.
For our analysis we focus mainly on the snapshot that corresponds to the present-day location of the Brick. We label this snapshot as being at \(t=0\) Myr. To facilitate analysis we bin SPH particles onto a 3D Cartesian grid with cell size 0.1 pc using splash (Price, 2007) and the exact mapping method of Petkova et al. (2018). For reference, the sink accretion radius is 0.035 pc, and the median particle smoothing length is 0.096 pc. With the exception of Figure 1, which uses the synthetic HNCO moment 1 map from Petkova et al. (2021), all of the analysis is performed on these mapped simulation density outputs.
## 3 Comparison to the Brick
In order to compare the kinematic state of the simulated and the observed cloud, we first consider their line-of-sight (LoS) velocities. Kruijssen et al. (2019) found that the simulation matches the LoS velocity dispersion of the real Brick, indicating a kinematic similarity between the clouds. In addition, the synthetic HNCO (\(4_{04}-3_{03}\); 87.925 GHz) moment 1 map constructed by Petkova et al. (2021) shows a clear gradient and a matching LoS velocity range to the Brick (see their Appendix B). Figure 1 presents histograms of the synthetic moment 1 map and of the observed HNCO moment 1 map of the Brick (Federrath et al., 2016). The two distributions span the same velocity range and have a double-peaked profile, with a minimum at \(\approx 20\) km s\({}^{-1}\). The results remain unchanged if we consider a synthetic moment 1 map that uses the density structure of the simulation instead of modelled HNCO emission. Note that both the spin angular momentum and the LoS velocity gradient of the simulation evolve with time (fig. 4 of Kruijssen et al., 2019), and the presented velocity distribution is not identical to the initial conditions. Furthermore, earlier simulation snapshots have very different LoS velocities.
The double-peaked velocity profile in Figure 1 is indicative of rotation along an axis perpendicular to the line-of-sight. However, the rotation is not necessarily global; it may instead be present in multiple structures within the Brick which overlap along the LoS (Henshaw et al., 2019). This is consistent with the velocity structure of the simulation, where the rotation is multi-axial and broken down into spatially-coherent regions.
The LoS velocities can be used to construct the size-linewidth relation (Larson, 1981). We defer a full exploration of this observable in our simulations to a future study (Petkova et al. in prep.), but mention our finding that the simulated and observed cloud both exhibit the same size-linewidth slope (\(\approx 0.7\)). This is consistent with other CMZ studies (Shetty et al., 2012; Kauffmann et al., 2017), but is steeper than in the Solar neighbourhood (0.5; Heyer and Dame, 2015). Our analysis considers the entire Brick cloud and follows the procedure of Shetty et al. (2012), which identifies structures in position-position-velocity space with a dendrogram. In contrast, Henshaw et al. (2020) performed a Gaussian decomposition of HNCO emission lines, and
Figure 1: The distribution of line-of-sight velocities in the first velocity moment map of HNCO (\(4_{04}-3_{03}\)) emission in the Brick. The blue histogram is obtained from synthetic observations (Petkova et al., 2021). The black data points show the observed distribution in the Brick (Federrath et al., 2016).
found a much shallower size-linewidth slope within identified substructures of the Brick. This suggests that the steeper relation may be due to rotational motions on the cloud scale.
The similar (yet atypical) size-linewidth relation in the simulation and in the Brick is suggestive of a similar kinematic state, which is likely due to a combination of rotation and turbulence. Federrath et al. (2016) estimated the turbulence driving parameter of the Brick to be \(b=0.22\pm 0.12\), which is consistent with having predominantly solenoidal driving. In order to compare this result with the simulation, we split the 3D velocity field into a compressive (curl-free) and a solenoidal (divergence-free) component using Helmholtz decomposition (see e.g. Federrath et al., 2010), and calculate the power spectrum of each component multiplied by the square root of the local density (\(E_{\rm comp}\) and \(E_{\rm sol}\), respectively). We then find the compressive ratio, \(E_{\rm comp}/(E_{\rm comp}+E_{\rm sol})\), which represents the fraction of kinetic energy stored in the compressive modes of the velocity field. For supersonic clouds, the compressive ratio is always greater than 0, even if the driving force is purely solenoidal (Federrath et al., 2010, 2011). Figure 2 shows the compressive ratio of the simulation as a function of spatial scale (\(k\)), compared to the results of Federrath et al. (2011) for a Mach number of \(\approx 11\). For most spatial scales our simulation has a compressive ratio of \(0.2-0.3\), which is consistent with having predominantly solenoidal turbulence driving. This is also in agreement with the results of Federrath et al. (2016) for the Brick. Similar results are seen for earlier simulation snapshots.
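For reference, an FFT-based Helmholtz decomposition of a gridded, periodic velocity field can be written compactly. The sketch below (our illustration, not the analysis code used here) returns the total compressive ratio, weighting each component by \(\sqrt{\rho}\) as described above; extending it to the per-scale curve of Figure 2 amounts to binning the two energies by \(|k|\):

```python
import numpy as np

def compressive_ratio(vel, rho):
    """Fraction of kinetic energy in the curl-free (compressive) modes of a
    periodic velocity field vel with shape (3, nx, ny, nz), given a density
    field rho with shape (nx, ny, nz). The k = 0 (mean-flow) mode is left
    in the solenoidal part."""
    kgrid = np.stack(np.meshgrid(
        *[np.fft.fftfreq(n) for n in vel.shape[1:]], indexing="ij"))
    k2 = (kgrid**2).sum(axis=0)
    k2[0, 0, 0] = 1.0                        # avoid division by zero at k = 0
    v_hat = np.fft.fftn(vel, axes=(1, 2, 3))
    v_comp_hat = kgrid * ((kgrid * v_hat).sum(axis=0) / k2)   # project onto k
    v_comp = np.fft.ifftn(v_comp_hat, axes=(1, 2, 3)).real
    v_sol = vel - v_comp
    w = np.sqrt(rho)                         # density weighting, as in the text
    E_comp = np.sum((w * v_comp)**2)
    E_sol = np.sum((w * v_sol)**2)
    return E_comp / (E_comp + E_sol)
```

For a field that is not periodic, the domain would need to be padded or windowed before the transform.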
All of the above measurements are consistent with the hypothesis that the Galactic shear is influencing the cloud kinematics. We explore this hypothesis further in the following section.
## 4 The role of the Galactic potential
The Galactic gravitational potential can influence the evolution and dynamics of the CMZ clouds through two main effects: shear and tidal forces. The simulated cloud uses the Launhardt et al. (2002) potential, which has a scaling of \(M\propto R^{2.2}\) between the enclosed mass \(M\), and the Galactocentric radius \(R\) for radii between 60 pc and 100 pc (Kruijssen et al., 2015). Using this dependence, Kruijssen et al. (2019) derived the velocity differential due to shear:
\[\delta v_{\rm shear}=0.67\ {\rm km\ s^{-1}}\left(\frac{\Omega_{\rm rot}}{1.7 \ {\rm Myr^{-1}}}\right)\left(\frac{\delta{\rm R}}{1\ {\rm pc}}\right), \tag{1}\]
where \(\Omega_{\rm rot}\) is the mean orbital angular velocity of a cloud (for our simulation \(\Omega_{\rm rot}=1.7\ {\rm Myr^{-1}}\); Kruijssen et al., 2015), and \(\delta R\) is the difference in Galactocentric radius between two points in the cloud. While an updated potential (Sormani et al., 2022) has been constructed since the simulation run, the shape of the new potential within the orbit of the simulation is consistent with that of Launhardt et al. (2002), and hence the results of this paper remain unchanged.
The tidal radius of the cloud is (Mo et al., 2010, eq. 12.21):
\[r_{\rm tidal}=\left(\frac{m(r_{\rm tidal})/M(R)}{2+\frac{\Omega_{\rm rot}^{2}R^{3}}{GM(R)}-\left.\frac{{\rm d}\ln M}{{\rm d}\ln R}\right|_{R}}\right)^{1/3}R, \tag{2}\]
where \(m(r_{\rm tidal})\) is the cloud mass enclosed within the tidal radius. By assuming that \(\Omega_{\rm rot}^{2}R^{3}/GM(R)=1\) (true for circular motion where \(m\ll M\)), and \({\rm d}\ln M/{\rm d}\ln R|_{R}=2.2\)(Launhardt et al., 2002; Kruijssen et al., 2015), we simplify the above expression to the following:
\[r_{\rm tidal}=5.36\ {\rm pc}\left(\frac{R}{70\ {\rm pc}}\right)\left(\frac{m(r_{\rm tidal })}{10^{5}\ {\rm M_{\odot}}}\right)^{1/3}\left(\frac{M(R)}{2.8\times 10^{8}\ {\rm M_{\odot}}}\right)^{-1/3}. \tag{3}\]
In eq. 3 we express the dependence of the tidal radius on \(m(r_{\rm tidal})\). This allows us to find \(r_{\rm tidal}\) iteratively within the simulation. Note that due to the adopted gravitational potential, the tidal field is fully compressive (Dale et al., 2019; Kruijssen et al., 2019).
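The iteration mentioned above can be sketched as a fixed-point loop on eq. 3. In the snippet below (illustrative only), `enclosed_cloud_mass` is a placeholder for a user-supplied function returning the cloud mass in M\({}_{\odot}\) within radius \(r\) in pc, e.g. measured from the gridded simulation output:

```python
def tidal_radius(enclosed_cloud_mass, R_gc=70.0, M_R=2.8e8,
                 r0=5.0, tol=1e-4, max_iter=100):
    """Fixed-point iteration for eq. 3: r_tidal depends on the cloud mass
    m(r_tidal) enclosed within it, so iterate until self-consistent."""
    r = r0
    for _ in range(max_iter):
        m = enclosed_cloud_mass(r)
        r_new = 5.36 * (R_gc / 70.0) * (m / 1e5)**(1/3) * (M_R / 2.8e8)**(-1/3)
        if abs(r_new - r) < tol:
            break
        r = r_new
    return r_new

# toy usage: uniform-density cloud of 1e5 Msun within 3 pc
print(tidal_radius(lambda r: 1e5 * min(r / 3.0, 1.0)**3))
```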
We now study the effects of shear and tidal forces on the kinematics of the simulation. Figure 3 shows a top-down view of the simulated cloud with superimposed \(xy\)-velocity vectors, where the bulk motion of the gas has been subtracted. We include three snapshots of the cloud - one at the present location of the Brick (right), and two at earlier positions along the cloud's orbit. We find that as the cloud evolves it undergoes collapse towards a central dense region, which can be seen both in the more enhanced gas column density (grey scale in Figure 3), and in the gas velocities. The velocity vectors are coloured based on the ratio of their tangential and radial components with respect to the local minimum of the gravitational potential along the orbit (cyan cross; hereafter 'cloud centre'). Figure 3 shows that as the cloud evolves, there is more radial motion of the gas (blue arrows) concentrated within the tidal radius (cyan circle; see eq. 3), and the regions outside the tidal radius move predominantly in a tangential direction (red arrows). This is consistent with the interpretation that the periphery of the cloud is stretched due to shear, while its central region is collapsing (possibly with the help of tidal compression).
In order to quantify the effect of the shear, we consider the tangential velocity components of the gas with respect to the cloud centre, \(v_{\phi}\), and their dependence on the distance from this centre, \(r\) (see Figure 4). We also include the velocity ranges that we expect from a simple model of shear (outside the tidal radius) and collapse (inside the tidal radius). For the shear we consider two limiting cases. In the first case (lower estimate) we take each pixel from Figure 3 and compute its shear velocity using eq. 1. This approach does not give axisymmetric results with respect to the cloud centre. We then divide the pixels into radial distance bins and compute the mean \(v_{\phi}\) in each bin. In the second case, we assume that a parcel of gas will maintain its tangential speed set by shear as the cloud rotates. This approach assumes that the effects of shear are effectively axisymmetric with respect to the cloud centre. To compute the upper velocity estimates, we use eq. 1 where we replace \(\delta R\) with \(r\). The grey shaded area is then continued within the tidal radius by assuming \(r^{-1}\) dependence for the upper and the lower velocity estimate. This is equivalent to a parcel of gas moving with the shear velocity at the tidal radius, and then being accreted while it conserves its angular momentum.
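For concreteness, the binned tangential profile underlying Figure 4 can be computed along the following lines; this is a schematic numpy sketch with our own function and argument names, not the analysis code:

```python
import numpy as np

def tangential_profile(pos_xy, vel_xy, centre, r_edges):
    """Mean tangential velocity v_phi in radial bins about the cloud centre;
    pos_xy and vel_xy are (n, 2) arrays of xy positions and velocities."""
    rel = pos_xy - centre
    r = np.hypot(rel[:, 0], rel[:, 1])
    phi_hat = np.stack([-rel[:, 1], rel[:, 0]], axis=1) / r[:, None]
    v_phi = np.sum(vel_xy * phi_hat, axis=1)      # tangential component
    which = np.digitize(r, r_edges)
    return np.array([v_phi[which == i].mean() if np.any(which == i) else np.nan
                     for i in range(1, len(r_edges))])
```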
Figure 2: Compressive ratio as a function of spatial scale. The black line shows the ratio for our simulation, while the red and blue lines (and shaded areas) show the compressive ratio of simulations with purely solenoidal and compressive turbulence driving, respectively (Federrath et al., 2011). The arrow indicates the (inverse of the) initial cloud size.
Figure 4 shows that for all snapshots our lower theoretical prediction for the contribution of the shear (i.e. outside the tidal radius) overlaps with a prominent feature in the data. This feature is better defined in the early snapshots where the spread of velocities is smaller and there is less ongoing gravitational collapse. We also see an average increase of \(v_{\phi}\) inside the tidal radius in all snapshots, consistent with spin-up due to collapse. This effect is most prominent at \(t=0\) Myr where we have a better defined centre of cloud rotation.
## 5 Summary and Discussion
In this Letter, we demonstrated that several kinematic properties of the CMZ cloud known as the Brick arise naturally in a hydrodynamics simulation which takes into account the Galactic gravitational potential. These properties include the line-of-sight velocity distribution, the steep slope of the size-linewidth relation and the solenoidally-driven turbulence. Within the simulation, we explain these through the effect of shear. In the outskirts of the simulated cloud, shear stretches the gas, boosts the velocity dispersion and seeds solenoidal turbulence. Due to the kinematic similarities between the simulation and the Brick, we conclude that the dynamical state of the Brick is likely strongly influenced by the Galactic gravitational potential. Our findings trigger several important follow-up questions.
**Can the turbulence be driven by another mechanism?** _Within the simulation:_ In addition to shear, turbulence can be driven by gravitational collapse within the cloud. Dale et al. (2019) compared clouds evolved with the Galactic potential to the same clouds evolved in isolation and found that the isolated clouds undergo more rapid collapse, but after the initial period of turbulent dissipation (\(\approx 0.56\) Myr) their velocity dispersions remain lower than in the clouds evolved within the potential (see fig. 14 and 15 of Dale et al., 2019). Together with the solenoidal nature of the turbulence (see Figure 2), this indicates that the gravitational collapse on its own is not a sufficient turbulence driver. However, CMZ simulations which include the Galactic gravitational potential but no gas self-gravity also lack sufficient turbulence (Hatchfield et al., 2021). Therefore, the most likely interpretation is that shear seeds solenoidal turbulence which is amplified through gravitational collapse. _Within the Brick:_ We cannot be sure that shear is the only factor contributing to the mode of the turbulence, but the agreement between simulations and observations suggests that it is likely to be an important factor. In addition to shearing motions within the cloud, there should also be shear with respect to the warmer diffuse gas surrounding the cloud, which can trigger Kelvin-Helmholtz instability. Other mechanisms can (and likely do) inject energy into the gas (e.g. stellar feedback; Tassis and Pavlidou, 2022; Henshaw et al., 2022), but this type of energy injection does not typically trigger solenoidal motions (Menon et al., 2020).
**Is the Galactic potential suppressing star formation in the Brick?** Many authors have argued in favour of the Galactic shear as the mechanism responsible for suppressing star formation in the CMZ (Kruijssen et al., 2014, 2019; Krumholz and Kruijssen, 2015; Federrath et al., 2016; Meidt et al., 2018, 2020; Keto et al., 2020). However, the SFR in our simulation (\(\sim 0.3\) M\({}_{\odot}\) yr\({}^{-1}\); Dale et al., 2019) is much higher than that of the Brick (\(10^{-4}-10^{-3}\) M\({}_{\odot}\) yr\({}^{-1}\); Rathborne et al., 2014; Walker et al., 2021). This discrepancy suggests that the low SFR in the Brick may be partially caused by physical factors missing from the simulation, such as magnetic and thermal support. Magnetic fields are known to delay star formation and prevent fragmentation. Petkova et al. (2021) found a difference in the width of the column density PDFs between the simulation and the Brick, which can be accounted for with the estimated turbulent plasma \(\beta\) of the cloud (Federrath et al., 2016), indicating that magnetic fields are likely important for shaping the cloud structure. Additionally, the high gas temperature of the Brick is explained with shock heating (Ginsburg et al., 2016), as well as high levels of cosmic rays and interstellar radiation (Clark et al., 2013), that are not captured in our simulation.
Another reason for the different SFR in the simulation and the Brick may be the idealised simulation assumptions. The simulation was initialised as a gas sphere, which differs from the expected complex filamentary clouds that enter the CMZ (Tress et al., 2020). The assumed spherical initial state is unstable under the strong compressive tide in the vertical direction, and hence our simulation flattens rapidly. This vertical collapse may be artificially boosting the SFR, and the discrepancy with the Brick may be improved by assuming more realistic initial conditions. Furthermore, the simulated cloud exists in isolation and it is possible that the Brick has formed through gradual accretion of (higher kinetic energy) material, shifting the timeline of star formation to a later point along the Brick's orbit.
Our analysis concludes that the dynamical state of the Brick is likely strongly influenced by the Galactic gravitational potential. These findings are extendable to the rest of the quiescent CMZ clouds and make predictions for their turbulent state.
Figure 3: Top-down view (\(xy\)-plane) of three snapshots of the simulated cloud (see time stamps). The column density is shown in grey scale, while the \(xy\)-velocities (mass-weighted averages along the \(z\)-axis) are shown as arrows. The length of each arrow indicates the magnitude of the corresponding velocity, with a \(10\) km s\({}^{-1}\) arrow drawn at the top of each panel for reference. Each arrow shows the velocity average within squares of \(10\times 10\) pixels. The cyan cross in each panel marks the location of the local minimum of the gravitational potential within the cloud, and the cyan circle shows the size of the tidal radius (see eq. 3) around the cyan cross. The arrows are coloured based on the ratio of azimuthal to radial kinetic energy with respect to the position of the cyan cross. In this coordinate system, Sgr A\({}^{*}\) is located at \((8.08,0.00,-6.68)\) pc, and an observer on Earth is looking along the \(y\)-axis (see Dale et al., 2019, fig. 2).
## Acknowledgements
This work was carried out by the ACES Collaboration as part of the ALMA CMZ Exploration Survey. MAP and JMDK acknowledge funding from the European Research Council (ERC) under the European Union's Horizon 2020 (ERC Starting Grant MUSTANG; 714907). MAP, JMDK, and SCOG acknowledge financial support from the Deutsche Forschungsgemeinschaft (DFG; German Research Foundation) via the collaborative research center (SFB 881, Project-ID 138713538) "The Milky Way System" (MAP, JMDK: subproj. B2; SCOG: subproj. A1, B1, B2, B8). JMDK acknowledges funding from the DFG through an Emmy Noether Research Group (KR4801/1-1). COOL Research DAO is a Decentralised Autonomous Organisation supporting research in astrophysics aimed at uncovering our cosmic origins. SCOG acknowledges subsidies from the Heidelberg Cluster of Excellence STRUCTURES in the framework of Germany's Excellence Strategy (EXC-2181/1 - 399090948) and funding from the ERC via the ERC Synergy Grant ECOGAL (855130). AG acknowledges support from the NSF under grants AST 2008101, 2206511, and CAREER 2142300, and from STScI under grant JWST-GO-02221.001-A. JDH gratefully acknowledges financial support from the Royal Society (University Research Fellowship). VMR has received support from the project RYC2020-029387-I funded by MCIN/AEI/10.13039/501100011033. C.F. acknowledges funding by the Australian Research Council (Future Fellowship FT180100495 and Discovery Projects DP230102280), and the Australia-Germany Joint Research Cooperation Scheme (UA-DAAD). L.C. acknowledges financial support through the Spanish grant PID2019-105552RB-C41 funded by MCIN/AEI/10.13039/501100011033.
## Data Availability
The data underlying this article will be shared on reasonable request to the corresponding author.
|
2307.08191 | Unleashing the Potential of LLMs for Quantum Computing: A Study in
Quantum Architecture Design | Large Language Models (LLMs) contribute significantly to the development of
conversational AI and have great potential to assist scientific research in
various areas. This paper attempts to address the following questions: What
opportunities do the current generation of generative pre-trained transformers
(GPTs) offer for the developments of noisy intermediate-scale quantum (NISQ)
technologies? Additionally, what potentials does the forthcoming generation of
GPTs possess to push the frontier of research in fault-tolerant quantum
computing (FTQC)? In this paper, we implement a QGAS model, which can rapidly
propose promising ansatz architectures and evaluate them with application
benchmarks including quantum chemistry and quantum finance tasks. Our results
demonstrate that after a limited number of prompt guidelines and iterations, we
can obtain a high-performance ansatz which is able to produce comparable
results that are achieved by state-of-the-art quantum architecture search
methods. This study provides a simple overview of GPT's capabilities in
supporting quantum computing research while highlighting the limitations of the
current GPT at the same time. Additionally, we discuss futuristic applications
for LLM in quantum research. | Zhiding Liang, Jinglei Cheng, Rui Yang, Hang Ren, Zhixin Song, Di Wu, Xuehai Qian, Tongyang Li, Yiyu Shi | 2023-07-17T01:39:38Z | http://arxiv.org/abs/2307.08191v1 | # Unleashing the Potential of LLMs for Quantum Computing: A Study in Quantum Architecture Design
###### Abstract
Large Language Models (LLMs) contribute significantly to the development of conversational AI and have great potential to assist scientific research in various areas. This paper attempts to address the following questions: What opportunities does the current generation of generative pre-trained transformers (GPTs) offer for the development of noisy intermediate-scale quantum (NISQ) technologies? Additionally, what potential does the forthcoming generation of GPTs possess to push the frontier of research in fault-tolerant quantum computing (FTQC)? In this paper, we implement a QGAS model, which can rapidly propose promising ansatz architectures and evaluate them with application benchmarks including quantum chemistry and quantum finance tasks. Our results demonstrate that after a limited number of prompt guidelines and iterations, we can obtain a high-performance ansatz which produces results comparable to those achieved by state-of-the-art quantum architecture search methods. This study provides a simple overview of GPT's capabilities in supporting quantum computing research, while also highlighting the limitations of the current GPT. Additionally, we discuss futuristic applications for LLMs in quantum research.
Large language models, quantum computing
## I Introduction
Large-scale language models (LLMs), such as ChatGPT [1], can learn autonomously from data and communicate with humans using reinforcement learning with human feedback (RLHF) mechanisms [2]. Since the release of ChatGPT by OpenAI, this type of AI technology has made a huge impact on a large number of research areas [3, 4]. LLMs are being rapidly adopted in areas such as advanced chemistry [5], healthcare [6], protein design [7], etc. However, how can LLMs help emerging fields such as quantum computing? We have seen many recent articles discussing how quantum computing can play an important role in the development of generative pre-trained transformers (GPTs) by exploiting more complex, larger search spaces and enabling more efficient training [8], but no one has yet discussed whether GPTs can contribute to the development of quantum computing.
As the field of quantum computing evolves, the design of quantum architecture stands as one of the most critical aspects. Quantum architectures define the structure and behavior of quantum circuits, which underpin every quantum algorithm. The correct selection and optimization of quantum circuits are crucial to achieving superior computational speed-ups and minimizing error rates inherent in quantum systems. As such, the development of effective methodologies for quantum architecture design is a pressing need.
The design of quantum architecture is a complex task that requires a deep understanding of quantum mechanics [9, 10, 11, 12, 13, 14, 15, 16, 17], computer science [18, 19, 20, 21, 22], and optimization algorithms [23, 24]. Traditional methods have relied heavily on human expertise and intuition, which, while invaluable, are inherently limited by our cognitive capabilities and biases. This is where large language models (LLMs) like GPT can make a significant contribution. GPT models are renowned for their ability to understand, generate, and reason about human language. They can absorb vast amounts of information, identify patterns within it, and make inferences based on these patterns. These capabilities make them particularly suited for the task of quantum architecture design. In the context of quantum architecture design, GPT can serve in multiple capacities. For instance, in tasks like Variational Quantum Eigensolvers (VQE), LLMs can act as controllers to conduct classical optimization. LLMs can effectively navigate the vast and complex search spaces of classical parameters, potentially outperforming traditional optimization techniques.
Furthermore, LLMs can be leveraged to explore the design space of the ansatz circuit. They can generate efficient architectures for the ansatz based on patterns and principles gleaned from large datasets of quantum circuits and their performance metrics. By doing so, they can contribute to the development of more expressive and efficient quantum architectures.
In summary, the application of GPT and similar models to quantum architecture design represents a promising avenue for accelerating the progress of quantum computing. By integrating human expertise with the power of advanced machine learning, we stand to make significant strides in our quest to harness the full potential of quantum computing. In this paper, we aim to delve deeper into this intriguing prospect and explore how GPT can contribute to the design of quantum architectures.
## II Related Works
**Ansatz Architecture Search:** The design of ansatz architectures plays a crucial role in the research of VQAs. A well-crafted ansatz architecture can lead to more accurate
computational results [25]. In the past, researchers derived problem-specific ansatz structures by analyzing particular problems [26, 27]. For instance, the Unitary Coupled-Cluster Singles and Doubles (UCCSD) scheme [28] is still considered the "golden" ansatz for solving molecular energy problems within the VQE framework.
QuantumNAS [29] and QAS [30] introduced noise-aware frameworks for the automated search of ansatz architectures. On the other hand, PAN [31] and Layer VQE [32] proposed ansatz frameworks that employ progressive training and search approaches at the pulse and gate levels, respectively. These advancements have expanded the range of possibilities for designing ansatz structures and optimizing their performance in various quantum computing applications. In contrast to conventional search strategies, we employ a method that iteratively prompts GPT-4 to propose ansatz architectures from a given search space; this paradigm, which combines human feedback with the capabilities of GPT-4, can greatly reduce the cost of searching for ansatz structures.
**Powerful Capabilities of LLMs:** The remarkable performance of Large Language Models (LLMs), such as GPT, has left a huge impression on researchers. These models have demonstrated significant assistance or indispensable acceleration and resource-saving contributions in various research domains [3, 33, 34]. We have observed that in closely related research directions, researchers have begun to appreciate and utilize LLMs to generate datasets [35] and accomplish sentence embedding tasks more effectively [36]. In the field of computer architecture, there have been efforts to use GPT for neural architecture search, and the effectiveness of GPT has been demonstrated through several simple examples [37, 38]. Beyond computer science, researchers in health care [6], advanced chemistry [5, 39], and other domains [7] have explored the integration of GPT with existing execution tools to further advance the research process in their respective fields. Our work aims to explore the roles and limitations of GPT in the emerging field of quantum computing, thereby contributing to the understanding of the potential and challenges in applying these powerful models to this exciting new domain.
## III Understanding the Design of Quantum Architecture for VQAs
The architecture of the ansatz circuits in Variational Quantum Algorithms (VQAs) is pivotal, as it underpins the ability of these algorithms to tackle complex computational tasks effectively and efficiently. The ansatz circuit's task is to approximate the lowest energy quantum state, commonly known as the ground state. A direct correlation exists between the performance of the VQA and the ansatz circuit's proficiency in emulating this quantum state.
Specifically, the Hamiltonian associated with the target molecule is converted into a sequence of Pauli matrices, thereby transforming the continuous problem of identifying the molecular ground state energy into a discrete optimization task. This transformation is a core operation in quantum computing, demonstrating the potential of VQAs for quantum supremacy, particularly in situations where classical computers encounter significant computational hurdles. The architecture of the ansatz circuit is deeply entrenched in the principles of quantum entanglement and superposition. Quantum gates, the primary building blocks of ansatz circuits, are strategically arranged to manipulate qubits, the fundamental units of quantum information. These gates facilitate unitary transformations, driving the evolution of the quantum system from an initial state to a final state that closely approximates the quantum system's ground state.
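To make the preceding mapping concrete, the following is a minimal numerical sketch (our own, not taken from the paper) of a VQE loop: a toy 2-qubit Hamiltonian is written as a sum of Pauli strings, a hardware-efficient RY+CNOT ansatz prepares a trial state, and a classical optimizer minimizes the energy expectation. The Hamiltonian coefficients are illustrative assumptions and do not correspond to any molecule discussed here.

```python
import numpy as np
from functools import reduce
from scipy.optimize import minimize

# Single-qubit Pauli matrices
I2 = np.eye(2); X = np.array([[0, 1], [1, 0]]); Z = np.diag([1.0, -1.0])

def pauli_term(coeff, ops):
    """Kronecker product of single-qubit Paulis, e.g. ops=[Z, Z]."""
    return coeff * reduce(np.kron, ops)

# Toy 2-qubit Hamiltonian as a sum of Pauli strings (illustrative only).
H = (pauli_term(-1.05, [Z, I2]) + pauli_term(-1.05, [I2, Z])
     + pauli_term(0.39, [Z, Z]) + pauli_term(0.18, [X, X]))

CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]])

def ry(theta):
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def ansatz_state(params):
    """Hardware-efficient ansatz: RY layer, CNOT, RY layer, on |00>."""
    psi = np.zeros(4); psi[0] = 1.0
    psi = np.kron(ry(params[0]), ry(params[1])) @ psi
    psi = CNOT @ psi
    return np.kron(ry(params[2]), ry(params[3])) @ psi

def energy(params):
    psi = ansatz_state(params)
    return float(np.real(psi.conj() @ H @ psi))

res = minimize(energy, x0=np.random.uniform(0, np.pi, 4), method="COBYLA")
exact = np.linalg.eigvalsh(H)[0]
print(f"VQE energy: {res.fun:.6f}, exact ground state: {exact:.6f}")
```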
The design of the ansatz circuit often adheres to heuristic principles and is highly tailored to the specific problem being addressed. Factors such as the selection and sequencing of quantum gates, as well as the degree of qubit entanglement, are critical elements that require customization. An effectively designed ansatz circuit should allow efficient exploration of the Hilbert space, the vector space encompassing all conceivable states of the quantum system. Researchers have proposed various strategies for designing ansatz circuits, such as:
1. Hardware-efficient ansatz [25, 31]: These are circuits designed to be compatible with specific quantum hardware, considering the connectivity and native gate set of the quantum processor.
2. Problem-specific ansatz [26, 27]: This approach incorporates prior knowledge of the problem at hand (e.g., the structure or symmetries of the target molecule) to design the ansatz circuit more efficiently.
3. Machine learning-guided ansatz [29, 40]: Machine learning techniques are employed to generate and optimize ansatz circuits by learning from previously solved instances or based on heuristics.
In conclusion, the architecture design of the ansatz circuit is a key factor determining the performance of VQAs. Researchers continue to explore novel ways to improve the design and optimization of ansatz circuits to maximize the potential of VQAs for practical applications.
## IV Methodology
In the search for quantum circuit architectures, GPT-4 is capable of iteratively recommending ansatz structures under the guidance of prompts. Variational Quantum Algorithms (VQAs) represent a quantum computing approach that utilizes hybrid quantum-classical algorithms to solve optimization problems and simulate quantum systems [26]. The choice of ansatz is of critical importance in VQAs, as it determines the efficiency and accuracy of the algorithm. In this section, we introduce our proposed Quantum GPT-Guided Architecture Search (QGAS) model and its training process, as shown in Fig. 1. The core objective of QGAS is to leverage GPT as a controller to recommend high-quality ansatz structures for VQAs.
### _Ansatz Architecture Generation_
We initially provide GPT-4 with a description of the corresponding problem and our requirements, ensuring the granularity of the description is as detailed as possible to obtain reliable
responses from GPT-4. For instance, in the case of quantum chemistry problems related to molecular ground state energy, we provide information about the molecule, the basis for fitting molecular electron orbitals, and the required number of qubits. For quantum finance investment optimization problems, we supply information regarding the budget, the number of assets, the risk factor, and stock details.
In our preliminary model, we present GPT with six design spaces [29, 41, 42, 43, 44, 45], corresponding quantum circuit interfaces, and code:
(1) U3+CU3 - One block has a U3 layer with one U3 gate on each qubit and a CU3 layer.
(2) ZZ+RY - One block contains one layer of ZZ gate and one RY layer.
(3) RXYZ - One block has four layers: RX, RY, RZ, and CZ.
(4) ZX+XX - Based on their MNIST circuit design, one block has two layers: ZX and XX.
(5) RXYZ+U1+CU3 - Based on their random circuit basis gate set, we propose a design space in which one block has six layers in the order of RX, RY, RZ, CZ, U1, and CU3.
(6) IBMQ Basis - One block with the basis gate set of IBMQ devices, in which one block has six layers in the order of RZ, X, RZ, SX, RZ, and CNOT.
Simultaneously, we grant GPT-4 the choice of qubit placement within the design spaces. The default number of circuit blocks is set to six, and each circuit block acts on two qubits, i.e., the output should be an ID list for the ansatz as well as the selected qubits for each block. For example: [1, (0,1)], [2, (1,2)],..., [0, (4,5)] means we use operation 1 for block 1, which is placed on qubits (0,1), operation 2 for block 2, which is placed on qubits (1,2),..., and operation 0 for block 6, which is placed on qubits (4,5). Subsequently, we request GPT-3.5 to output the recommended ansatz structure in QASM format; since access to GPT-4 is limited for regular users, some sub-tasks can be efficiently done by GPT-3.5 to save the cost of GPT-4.
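As a rough illustration of this encoding, the following sketch (our own, not the authors' code) expands an ID list of two-qubit blocks into OpenQASM 2.0 text for two of the six design spaces; the helper names and the fixed placeholder gate parameters are assumptions made for brevity.

```python
# A minimal sketch that expands an ansatz encoded as
# [(block_id, (q0, q1)), ...] into OpenQASM 2.0 text. Only design
# spaces (1) U3+CU3 and (2) ZZ+RY are implemented here.
def block_u3_cu3(q0, q1):
    # One U3 gate on each qubit, followed by a CU3 layer.
    return [f"u3(0.1,0.2,0.3) q[{q}];" for q in (q0, q1)] + \
           [f"cu3(0.1,0.2,0.3) q[{q0}],q[{q1}];"]

def block_zz_ry(q0, q1):
    # One ZZ layer and one RY layer.
    return [f"rzz(0.1) q[{q0}],q[{q1}];"] + \
           [f"ry(0.1) q[{q}];" for q in (q0, q1)]

BLOCKS = {1: block_u3_cu3, 2: block_zz_ry}

def ansatz_to_qasm(encoding, n_qubits):
    lines = ["OPENQASM 2.0;", 'include "qelib1.inc";',
             f"qreg q[{n_qubits}];"]
    for block_id, (q0, q1) in encoding:
        lines += BLOCKS[block_id](q0, q1)
    return "\n".join(lines)

# Example: three two-qubit blocks on a 4-qubit register.
print(ansatz_to_qasm([(1, (0, 1)), (2, (1, 2)), (1, (2, 3))], 4))
```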
### _Training on the Generated Ansatz_
Utilizing the generated quantum circuit structures, we train an ansatz within the context of the corresponding problem to achieve enhanced performance. Considering that different problems have distinct characteristics, we obtain the Hamiltonians corresponding to these problems using various methods. For instance, in quantum chemistry for molecular ground state energy, we perform a fit of the electronic structure of the molecule and subsequently obtain the Hamiltonian of the related molecule through a fermionic mapping [46, 47]. In contrast, for general quantum finance problems, we encode the problem into a quadratic problem structure and then translate this quadratic problem into an Ising Hamiltonian [48, 49].
Following this, we execute the relevant problem with the proposed ansatz using a quantum processor or simulator. Generally, in gate-level quantum circuit training, we employ gradient-based training methods, and the optimization process relies on calculating the gradients of the parameterized quantum circuits with respect to their parameters. These gradients are utilized to update the parameters in order to minimize the target objective function, such as the difference between the calculated and target energy values.

Fig. 1: Illustration of the design and implementation of QGAS. Following the initial encoding of the problem, GPT-4 suggests an ansatz, which is subsequently processed by a sub-model constructed by GPT-3.5 to convert it into QASM format. To assess the effectiveness of the ansatz, a benchmarking application is executed, enabling the evaluation of its quality. The obtained results are then fed back to GPT-4 through a natural language prompt, facilitating further iterations and refinements.
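As an aside on the gradient-based training just described, a common way to obtain such gradients on quantum hardware is the parameter-shift rule; the following is a minimal single-qubit sketch (ours, not the paper's code) where the exact energy \(E(\theta)=\cos\theta\) is minimized with shift-rule gradients.

```python
import numpy as np

# Parameter-shift gradient for E(theta) = <0| RY(theta)^† Z RY(theta) |0>
# = cos(theta). For gates generated by a single Pauli, the exact gradient
# is [E(theta + pi/2) - E(theta - pi/2)] / 2.
Z = np.diag([1.0, -1.0])

def ry(theta):
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def energy(theta):
    psi = ry(theta) @ np.array([1.0, 0.0])
    return psi @ Z @ psi

def parameter_shift_grad(theta):
    return 0.5 * (energy(theta + np.pi / 2) - energy(theta - np.pi / 2))

# Simple gradient descent on the single parameter.
theta, lr = 0.3, 0.4
for step in range(25):
    theta -= lr * parameter_shift_grad(theta)
print(f"theta = {theta:.4f}, E = {energy(theta):.6f} (minimum is -1 at pi)")
```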
### _Human Feedback & Capability of GPT_
Quantum architectures are crucial for quantum computing applications. The selection of an appropriate ansatz can significantly impact the computational cost and the accuracy of results. In this section, we investigate the cooperation between human expertise and the GPT model to provide a more efficient structure for the ansatz, a framework we refer to as QGAS.
Previous results on quantum architecture search, such as the state-of-the-art framework QuantumNAS [29], often proceed in two steps. The first step involves searching for a supercircuit within a large predefined design space. The second step involves searching for a subcircuit within this supercircuit. In this process, every design space within this area is treated with equal importance when sampled. The QGAS approach also begins with a search within a large predefined design space. However, it differentiates by incorporating human feedback and specific design space characteristics to perform a ranking. The ranked design spaces are then used to guide the second step of the search for the ansatz. In the context of the QGAS framework, each design space is treated differently based on its unique features such as expressivity, entanglement power, and others. With the assistance of the GPT model, more granular supervision and consideration are performed within the same predefined space. Initially, experts in quantum computing bring their deep understanding of quantum systems' complex behavior to bear, providing feedback on specific search strategies. They advise the search algorithm to focus on particular parameters or structures that have shown promise in past research, leveraging established quantum theories and past experiences. This guidance helps eliminate redundant exploration of the search space, improving the search's efficiency and accuracy. The GPT-4 model also contributes to this process by suggesting additional strategies for quantum circuit optimization. For instance, it may recommend leveraging initial parameters for a "warm start" or incorporating certain error mitigation methods. However, it is crucial to remember that the suggestions from the GPT-4 model can be a mixture of effective and incorrect strategies. Here, human expertise is necessary to discern the effective methods from the incorrect ones, preventing wastage of resources on the latter.
Furthermore, human feedback is paramount in evaluating the search outcomes. The final design of a quantum architecture must be theoretically feasible and practically superior in performance. Even if a design scores highly in the algorithm, it cannot be accepted if it fails to pass the rigorous evaluation by human experts. These evaluations consider various factors such as theoretical consistency, practical viability, and comparison with existing architectures. This feedback then feeds back into the system to update and adjust the search strategies, establishing an iterative feedback loop. The integration of human feedback and GPT's power thus enables a more efficient and effective exploration of the quantum circuit architecture. This is especially beneficial for complex quantum computations where the optimal circuit design is not intuitively obvious, and the number of possible circuit configurations can be prohibitively large.
## V Experiments
### _Experiment Setup_
Our experiments rely on GPT-4 as the selected LLM, and we use both TorchQuantum [29] and qiskit-runtime [50] to execute the experiments. For the experiments on qiskit-runtime, we import the system model from \(ibmq\_manila\).
### _Application Benchmarking for QGAS_
In order to verify the efficacy of the proposed QGAS framework, we employed a series of application benchmarks. These benchmarks have been chosen to represent diverse domains of quantum computing, showcasing the versatility and broad applicability of the QGAS framework. Fig. 2 illustrates five distinct applications, namely: Portfolio Optimization, MaxCut Problem, Traveling Salesman Problem (TSP), and the estimation of Molecule Ground State Energy for both Lithium Hydride (LiH) and Water (H\({}_{2}\)O).
The Portfolio Optimization application originates from the sphere of quantum finance. This problem involves the selection of a set of investments that provides the greatest expected return for a given level of risk. By leveraging the strengths of quantum computing, we can optimize complex financial portfolios with quantum algorithms. We set the risk factor to 0.5, the number of assets to four, the budget to half the number of assets, and the penalty equal to the number of assets; a visualization of the problem is shown in Fig. 2(c). The MaxCut problem and the TSP, on the other hand, are examples of quantum optimization problems. MaxCut is a well-known problem in graph theory that involves partitioning a graph into two subsets to maximize the sum of the weights of the edges crossing between the subsets; we generate a five-node graph as shown in Fig. 2(a). The TSP is a classic algorithmic problem focused on optimization, where it is necessary to find the shortest possible route that visits a set of locations and returns to the origin; we generate a graph with three destinations for the traveling salesman, as shown in Fig. 2(b), and note that this problem requires 8 qubits to solve. Lastly, the estimation of Molecule Ground State Energy for LiH and H\({}_{2}\)O is a fundamental problem in quantum chemistry; the structures of these two molecules are shown in Fig. 2(d) and (e). The ability to precisely calculate the ground state energy of molecules is key for understanding chemical reactions and designing new molecules and materials.
The optimization and finance problems are first encoded as quadratic problems and then mapped to Ising Hamiltonians. For the chemistry tasks, the molecules were first subjected to the STO-3G approximation to model the electronic orbitals. Subsequently, a fermionic mapping was employed to derive the corresponding Hamiltonian.
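As an illustration of the first of these encodings, the sketch below builds the Ising Hamiltonian of a small MaxCut instance directly with NumPy; the edge list and unit weights are illustrative assumptions, since the paper's exact 5-node graph is not reproduced here.

```python
import numpy as np
from functools import reduce

I2, Z = np.eye(2), np.diag([1.0, -1.0])

def zz_term(i, j, n):
    """Pauli string with Z on qubits i and j, identity elsewhere."""
    ops = [Z if q in (i, j) else I2 for q in range(n)]
    return reduce(np.kron, ops)

# Illustrative 5-node graph: list of (i, j, w_ij) with unit weights.
edges = [(0, 1, 1.0), (0, 2, 1.0), (1, 2, 1.0), (1, 3, 1.0),
         (2, 4, 1.0), (3, 4, 1.0)]
n = 5

# MaxCut: cut(z) = sum_ij w_ij (1 - z_i z_j)/2 with z_i = +-1, so
# maximizing the cut == minimizing H = sum_ij (w_ij/2) Z_i Z_j (+ const).
H = sum(0.5 * w * zz_term(i, j, n) for i, j, w in edges)

# Brute-force check: the Ising ground state encodes the maximum cut.
energies = np.diag(H)                      # H is diagonal in the Z basis
best = int(np.argmin(energies))
bits = format(best, f"0{n}b")
cut = sum(w for i, j, w in edges if bits[i] != bits[j])
print(f"partition {bits}, cut value {cut}")
```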
### _Evaluation and Comparison_
We evaluated our QGAS framework on both TorchQuantum and a simulator with the noise and system model of IBMQ Belem. The results were extensively compared with different existing ansatzes and state-of-the-art ansatz architecture search methods.
From Table I we can make several observations. In the case of the Portfolio Optimization problem, the ansatz architecture found by QGAS outperformed the TwoLocal and RealAmplitudes architectures. However, the gate count was seven more than that of TwoLocal. Despite this, we believe the trade-off is justifiable considering the performance gain. The performance of the ansatz architectures and the corresponding estimated gate counts at different iterations during model optimization is presented in Fig. 3. It can be observed that during the optimization iterations, QGAS actively improves the quantum circuit's generalizability by increasing gate counts, thereby improving performance. For both trials, we obtain a high-performance ansatz within a limited number of iterations. Moreover, we observe that when the gate counts exceed a threshold, the performance drops. We validated this gate count threshold for the portfolio optimization problem in our experiments with RealAmplitudes and TwoLocal as well. In the 8-qubit Traveling Salesman Problem, we observed a similar advantage of QGAS over the other two ansatzes. For the 5-qubit Max-Cut problem, RealAmplitudes produced the best results, using eight more gates than the ansatz architecture found by QGAS.

Fig. 2: Visualization of benchmark applications. a) MaxCut problem with 5 nodes. b) Traveling Salesman Problem with 3 nodes. c) Portfolio Optimization problem. d) Molecule structure of \(H_{2}O\). e) Molecule structure of \(LiH\).

Fig. 3: Experiments for two trials of the portfolio optimization problem and evaluation of the ansatz architecture generated by QGAS. We show both the gate counts and the estimated value for each iteration; a lower estimated value with a smaller gate count indicates a better ansatz. In both trials, the results demonstrate that within a limited number of prompt guidelines and iterations, we can obtain a high-performance ansatz generated by QGAS.

Fig. 4: Experiments for two trials of the quantum chemistry molecule ground state energy tasks and evaluation of the ansatz architecture generated by QGAS. We show both the epochs and the estimated energy for each iteration, where a lower estimated energy with fewer epochs indicates a better ansatz.
In the experiment determining molecular ground state energy, as shown in Fig. 5, we compared QGAS with the state-of-the-art ansatz architecture search framework QuantumNAS and the problem-specific ansatz UCCSD. In noise-free conditions, QGAS achieved slightly better results than UCCSD and was comparable with QuantumNAS. We also plotted the performance of the ansatz architectures and the corresponding estimated energy at different iterations during model optimization in Fig. 4. We then tested QGAS's adaptability to noise in a quantum environment. As seen in Fig. 5(b), for the MNIST classification problem in a noisy environment, QGAS performed worse than QuantumNAS but significantly better than a randomly generated ansatz. We believe that enhancing QGAS's adaptability to noise should be closely linked to the combination of human feedback with the capability of GPT, which will be discussed in the next section. The primary reason for the current lack of adaptability to noise is the difficulty in successfully conveying a complex quantum noise environment to GPT. This requires further contemplation and more meticulous work to devise specific prompts.
### _Observation from Human Feedback & Capability of GPT_
Throughout the experimental process, human feedback played an indispensable role in achieving exceptional performance with GPT-4. In the most fundamental framework, we initially had to fine-tune the system with specific prompts. Without this adjustment, GPT-4 would yield choices outside of the given design space. In the experiments concerning molecular ground state energy, GPT-4 proactively analyzed and opted to improve the circuit's expressive power by increasing the circuit depth. However, when the circuit depth became too great, we observed a decrease in accuracy instead of an expected increase. Moreover, the number of epochs in the optimization process was substantial. We converted these observations into human-readable text and submitted them to GPT-4. Upon receiving our feedback, GPT-4 quickly provided several alternative solutions.
We prompted GPT-4 to generate new ansatz architectures based on its own suggested solutions for both the LiH and H\({}_{2}\)O tasks. In the H\({}_{2}\)O task, GPT-4 suggested initializing all parameters of the rotation gates to a single value, chosen according to molecular characteristics based on GPT-4's analysis. In our experience, however, this analysis was not entirely scientific. Ultimately, this approach reduced the number of epochs by four while maintaining the circuit depth and performance, as shown in Fig. 4(a). In the LiH task, GPT-4 deployed a \(\sqrt{H}\) gate on all qubits initially, followed by an identical ansatz structure behind these \(\sqrt{H}\) gates. GPT-4 referred to this form as VQE-I (VQE with Initial Guess). Remarkably, this approach reduced the number of epochs by 47 while maintaining the circuit depth and performance, as shown in Fig. 4(b).
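For concreteness, the following small sketch (ours, with hypothetical variable names) constructs one plausible realization of such a \(\sqrt{H}\) warm-start layer: the principal matrix square root of the Hadamard gate, verified to be unitary and to square back to \(H\), applied to every qubit of the initial state.

```python
import numpy as np
from scipy.linalg import sqrtm
from functools import reduce

# The sqrt(H) gate referred to above: a unitary whose square is the
# Hadamard gate. Prepending one such gate per qubit gives a "warm
# start" layer in the spirit of VQE-I (a sketch, not the authors' code).
H_gate = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
sqrt_H = sqrtm(H_gate)                                   # principal root

assert np.allclose(sqrt_H @ sqrt_H, H_gate)              # squares to H
assert np.allclose(sqrt_H @ sqrt_H.conj().T, np.eye(2))  # unitary

n_qubits = 3
init_layer = reduce(np.kron, [sqrt_H] * n_qubits)
psi0 = np.zeros(2 ** n_qubits, dtype=complex); psi0[0] = 1.0
psi_init = init_layer @ psi0      # initial-guess state fed to the ansatz
print(np.round(psi_init, 3))
```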
Fig. 5: The application benchmarks comparing state-of-the-art ansatzes and the QGAS-generated ansatz. a) Molecule ground state energy estimation tasks for \(H_{2}O\) and \(LiH\), comparing the ansatz generated by QGAS, UCCSD, and QuantumNAS. b) Machine learning tasks for MNIST-2 and MNIST-4 classification, comparing the QGAS-generated ansatz, a randomly generated ansatz, and QuantumNAS.
TABLE I: Comparison of ansatz architectures on the Portfolio Optimization (4 assets, 4 qubits), Max-Cut (5 nodes, 5 qubits), and TSP (3 nodes, 8 qubits) benchmarks, listing for each model the number of repeats, the gate counts, the value achieved, and the reference value. For the TwoLocal baseline, the recoverable entries are: Portfolio Optimization — 24 gates, value -0.01112 (reference -0.0149); Max-Cut — 29 gates, value -1.99694 (reference -2.0); TSP — 53 gates, value -7317.07 (reference -7379.0).
These findings underscore the essential role of human feedback in guiding GPT-4's performance in quantum circuit architecture design. By refining the system's understanding and approaches, human feedback helps to optimize the balance between circuit depth, performance, and computational efficiency. This form of interactive cooperation between humans and artificial intelligence provides a promising avenue for progress in quantum computing applications.
## VI Discussion
It is important to recognize that quantum computing is currently in its nascent stage, often referred to as the Noisy Intermediate-Scale Quantum (NISQ) era. In the NISQ regime, which is characterized by a limited number of imperfect qubits without error correction, most algorithms are variational, used to address problems in quantum chemistry, combinational optimization, and quantum machine learning. These algorithms use a measurement scheme on parameterized quantum circuits, with variations of the parameters used to optimize a cost function that estimates target observables.
* GPT can help improve these components by designing better quantum architectures. By absorbing information from classical computer architecture, GPT can propose architectures that are efficient in terms of experimental resources and robust to hardware noise.
* The quality of the initial parameter guess heavily influences the number of iterations required for optimization. GPT can assist in this regard by utilizing its ability to absorb broad knowledge to make a good initial parameter guess. For example, GPT can fine-tune a parameterized circuit with initial guesses by leveraging observations from the quantum chemistry community on the properties of molecules, such as electronic symmetry. This substantially reduces the quantum resources required by maximizing the use of the prior knowledge of the interdisciplinary research community.
As quantum hardware improves, fault-tolerant (FT) quantum computers may become more feasible, allowing for perfect manipulation of logical qubits. Looking forward to the future of GPT applications in this field, GPT can be used to design and optimize fault-tolerant quantum algorithms by absorbing knowledge from a variety of sources, like classical computing and quantum error correction theory. This can lead to the development of more robust and efficient quantum algorithms, bringing us closer to achieving practical quantum computing capabilities. When the era of FT quantum computing arrives, boasting formidable computational capabilities, FT quantum machines will have the potential to enable a paradigm of quantum-assisted GPT. This will facilitate large-scale training with few reliable resources, leveraging the inherent properties and immense computational power of quantum computing.
It is important to recognize that GPT's knowledge is derived from vast data models available on the internet [51], and much of the public information surrounding the nascent field of quantum computing can be misleading. Moreover, a significant amount of bias exists that questions the validity of quantum computing. Due to the complexity of the knowledge required in this domain, even researchers may inadvertently propagate incorrect ideas in their publications. GPT takes all of this information into account, which may result in responses that exhibit an inadvertent bias against quantum computing. Such an effect can be particularly detrimental at this stage of quantum computing's development, as negative and harmful information could impact researchers' motivation and enthusiasm for their work. Therefore, sorting out how GPT can help quantum computing, in what form one should use GPT in the development of quantum computing, and what kind of connection exists between quantum computing and GPT can reduce the risks associated with combining quantum computing and GPT.
### _Beyond Reach_
GPT is not currently a general artificial intelligence; rather, it is a form of seemingly sophisticated artificial intelligence. Therefore, at least for the time being, it cannot think like a human with emotions and cannot perform any tasks that require humanity. Although GPT performs well in reading comprehension and assessment on tests such as the SAT and GRE, it is limited by the large-scale data on which it is based [52]: it cannot observe scientific phenomena in nature by itself, it cannot monitor the entire physical environment of scientific experiments, it loses coherence in lengthy conversations, and it even contradicts its own previous conclusions [33]. For quantum computing, an emerging, complex, and still-evolving field, GPT may pick up incorrect information from a massive dataset. Furthermore, it cannot "think" dynamically about quantum physics and quantum information theory to design new quantum algorithms. In addition, it cannot construct a quantum computer "by hand", nor can it detect emergencies during physical experiments or interactions between quantum hardware researchers.
### _Looking Into the Future_
Currently, the GPT model has demonstrated its capability to learn from large datasets and generate reasonable answers to proper questions. We expect to see it integrate a better capability to reason logically about mathematical equations. Harnessing the ability to recognize patterns in text and statistical associations between words and phrases to make predictions about what comes next (e.g., mathematical symbols and operations), GPT is developing a better understanding of the underlying mathematical concepts. If it were able to produce accurate logical reasoning for mathematical equations, the knowledge sprinkled across the vast quantum world could become more connected, which would expedite the discovery of new quantum algorithms, including more efficient error correction codes and improved mappings between fermions and qubits for Hamiltonian simulation.
For the aspect of quantum control, calibration is crucial for achieving optimal system performance. Quantum computing systems require precise calibration to ensure accurate and
efficient processing of information. In this context, two typical parameters that need to be calibrated are amplitude and duration. A traditional calibration process begins by assigning a default value to the amplitude parameter. Subsequently, the duration parameter is tuned and the optimal value is determined by identifying the highest point of fidelity. Once the optimal duration is established, the amplitude is swept to further refine the system's performance. By iteratively adjusting these parameters, the quantum state can approximate the desired state. When prior knowledge of the quantum hardware is available, the noise's dominant effect can be evaluated. For instance, assume that in a 3-D plot, the amplitude and duration parameters are the X and Y axes, the fidelity of the quantum gate is the Z-axis. This configuration produces a mountainous landscape that characterizes the noise feature. If experimental data indicates that the landscape has changed, the objective is to pinpoint the optimal peak's new location. By scanning additional points, the system can be re-calibrated to account for the landscape shift caused by the hardware. Incorporating prior knowledge about the hardware and the general shape of the landscape from a few testing samples, the next generations of GPT potentially could be utilized to estimate the extent of the landscape shift caused by experimental factors. This application of GPT can potentially expedite the calibration process.
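A toy version of this calibration loop, on a purely synthetic Gaussian fidelity landscape (all numbers below are illustrative assumptions), might look as follows: fix a default amplitude, sweep duration for the fidelity peak, then sweep amplitude at that duration.

```python
import numpy as np

# Synthetic fidelity landscape over (amplitude, duration); the peak
# location and widths are arbitrary stand-ins for real hardware data.
def fidelity(amp, dur, peak=(0.62, 1.35)):
    return np.exp(-((amp - peak[0]) ** 2 / 0.02
                    + (dur - peak[1]) ** 2 / 0.1))

amps = np.linspace(0.0, 1.0, 201)
durs = np.linspace(0.5, 2.5, 201)

amp = 0.5                                   # default amplitude
dur = durs[np.argmax(fidelity(amp, durs))]  # sweep duration first
amp = amps[np.argmax(fidelity(amps, dur))]  # then refine amplitude
print(f"calibrated amplitude {amp:.3f}, duration {dur:.3f}, "
      f"fidelity {fidelity(amp, dur):.4f}")
```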
In addition to algorithm innovations, GPT also offers precious opportunities to validate them in an agile manner, which is critical given how fast these innovations can emerge. We have already witnessed the power of GPT to facilitate software development, and we believe that GPT is also capable of advancing the simulation of quantum computers. GPT can summarize the mandatory steps that occur in distinct candidate algorithms, standardize them in a universally compatible simulation framework, and provide suggestions on hardware and software resources to rapidly construct such simulation frameworks and validate the innovations. To live up to these promises, GPT needs to be further trained on quantum-computing-specific datasets and gain deeper and broader insights into how quantum computing can evolve in the foreseeable future; we believe this is an interdisciplinary mission that brings the best of all worlds into quantum computing.
|
2303.15351 | Formation and Evolution of Coherent Structures in 3D Strongly Turbulent
Magnetized Plasmas | We review the current literature on the formation of Coherent Structures
(CoSs) in strongly turbulent 3D magnetized plasmas. CoSs (Current Sheets (CS),
magnetic filaments, large amplitude magnetic disturbances, vortices, and
shocklets) appear intermittently inside a turbulent plasma and are collectively
the locus of magnetic energy transfer (dissipation) into particle kinetic
energy, leading to heating and/or acceleration of the latter. CoSs and
especially CSs are also evolving and fragmenting, becoming locally the source
of new clusters of CoSs. Strong turbulence can be generated by the nonlinear
coupling of large amplitude unstable plasma modes, by the explosive
reorganization of large scale magnetic fields, or by the fragmentation of CoSs.
A small fraction of CSs inside a strongly turbulent plasma will end up
reconnecting. Magnetic Reconnection (MR) is one of the potential forms of
energy dissipation of a turbulent plasma. Analysing the evolution of CSs and MR
in isolation from the surrounding CoSs and plasma flows may be convenient for
2D numerical studies, but it is far from a realistic modeling of 3D
astrophysical, space and laboratory environments, where strong turbulence can
be exited, as e.g. in the solar wind, the solar atmosphere, solar flares and
Coronal Mass Ejections (CMEs), large scale space and astrophysical shocks, the
magnetosheath, the magnetotail, astrophysical jets, Edge Localized Modes (ELMs)
in confined laboratory plasmas (TOKAMAKS), etc. | Loukas Vlahos, Heinz Isliker | 2023-03-27T16:09:37Z | http://arxiv.org/abs/2303.15351v1 | # Formation and Evolution of Coherent Structures in 3D Strongly Turbulent Magnetized Plasmas
###### Abstract
We review the current literature on the formation of Coherent Structures (**CoSs**) in strongly turbulent 3D magnetized plasmas. CoSs (Current Sheets (**CS**), magnetic filaments, large amplitude magnetic disturbances, vortices, and shocklets) appear intermittently inside a turbulent plasma and are collectively the locus of magnetic energy transfer (dissipation) into particle kinetic energy, leading to heating and/or acceleration of the latter. CoSs and especially CSs are also evolving and fragmenting, becoming locally the source of new clusters of CoSs. Strong turbulence can be generated by the nonlinear coupling of large amplitude unstable plasma modes, by the explosive reorganization of large scale magnetic fields, or by the fragmentation of CoSs. A small fraction of CSs inside a strongly turbulent plasma will end up reconnecting. Magnetic Reconnection (**MR**) is one of the potential forms of energy dissipation of a turbulent plasma. Analysing the evolution of CSs and MR in isolation from the surrounding CoSs and plasma flows may be convenient for 2D numerical studies, but it is far from a realistic modeling of 3D astrophysical, space and laboratory environments, where strong turbulence can be exited, as e.g. in the solar wind, the solar atmosphere, solar flares and Coronal Mass Ejections (CMEs), large scale space and astrophysical shocks, the magnetosheath, the magnetotail, astrophysical jets, Edge Localized Modes (ELMs) in confined laboratory plasmas (TOKAMAKS), etc.
Suggested keywords
## I Introduction
Strong turbulence is a complex nonlinear dynamic phenomenon, which has a great impact on the heating and acceleration of particles in space and laboratory plasmas [1; 2]. Unfortunately, courses on the study of turbulence are rarely present in university graduate programs. As a result, strong turbulence is also absent from the modeling of laboratory, astrophysical and space phenomena when they enter into a fully developed turbulent stage. The basic plasma physics courses at universities start with the exploration of normal modes and linear instabilities. The nonlinear evolution of unstable waves is analysed with the use of the quasilinear approximation. In laboratory, space and astrophysical plasmas the "linear" phase of a normal mode has no meaning, since the fluctuations grow in the presence of strong turbulence. The estimation of the growth time of the fluctuations in the presence of fully developed turbulence remains an open problem. Recently, Prof. William H. Matthaeus wrote a review article with the provocative title _"Turbulence of space plasmas: Who needs it?"_ [3] to stress the following fact: the scientific community avoids the use of strong turbulence in the interpretation of many astrophysical or laboratory plasma phenomena. Most studies treat the linear part of the evolution of a system very carefully, but when their models enter into the regime of fully developed turbulence, the intermittent appearance of Coherent Structures (CoSs) and their multi-scale evolution fall beyond the ability of their numerical tools to handle them with present day computers. Therefore, the interpretations of many 3D strongly turbulent space and laboratory phenomena remain unexplored.
### Weak vs strong turbulence
The study of turbulence can be divided into "weak" or "wave" turbulence and "strong" turbulence. We define as "weak" or "wave" turbulence the magnetic fluctuations resulting from the superposition of any spectrum of \(N\) linear modes,
\[\mathbf{b}(\mathbf{r},t)=\sum_{i=1}^{N}\mathbf{b}_{0i}e^{i(\mathbf{k}_{i}\cdot \mathbf{r}-\omega(\mathbf{k}_{i})t+\phi_{i})}, \tag{1}\]
where \(\mathbf{k}_{i}\) is the wave vector, \(\omega(\mathbf{k}_{i})\) the dispersion relation derived through linearization, \(\mathbf{b}_{0i}\) the amplitude and \(\phi_{i}\) the random phase of the weakly damped/amplified wave mode \(i\). This is a correct representation of a physical system if its unstable fluctuations have very small amplitude (i.e. for magnetized plasmas the fluctuations of the magnetic field \(\mathbf{b}\) are very weak, \(|\mathbf{b}|<<|\mathbf{B}_{\mathbf{0}}|\), where \(\mathbf{B}_{\mathbf{0}}\) is the ambient magnetic field of the plasma).
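As a simple illustration of Eq. (1), the following sketch (ours, with illustrative parameters) synthesizes a 1D scalar realization of such a superposition, with random phases and amplitudes scaled so that the energy spectrum follows a Kolmogorov-like \(k^{-5/3}\) law, normalized to the weak-turbulence regime \(|b|\ll B_{0}\).

```python
import numpy as np

# 1D realization of Eq. (1): a superposition of N modes with random
# phases and amplitudes b_0i ~ k_i^(-5/6), so that the energy spectrum
# |b_0i|^2 ~ k^(-5/3). Frozen in time (t = 0) for simplicity, so the
# dispersion relation omega(k) is not needed here.
rng = np.random.default_rng(0)
N, L = 64, 2 * np.pi
k = 2 * np.pi * np.arange(1, N + 1) / L          # wave numbers k_i
amps = k ** (-5.0 / 6.0)                         # Kolmogorov scaling
phases = rng.uniform(0, 2 * np.pi, N)            # random phases phi_i

x = np.linspace(0, L, 512, endpoint=False)
b = np.sum(amps[:, None] * np.cos(k[:, None] * x[None, :]
                                  + phases[:, None]), axis=0)

# Normalize to a weak fluctuation level |b| << B0 (here B0 = 1).
b *= 0.05 / np.mean(np.abs(b))
print(f"<|b|>/B0 = {np.mean(np.abs(b)):.3f}, "
      f"max|b|/B0 = {np.abs(b).max():.3f}")
```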
In weak (wave) turbulence there is spectral transfer of energy through the resonant three wave interaction, analyzed with the use of the quasi linear theory and the prescribed energy dissipation at the small wave lengths [4; 5].
Many references to "turbulence" in the current literature refer to "weak" turbulence, since its mathematical description is relatively easy, and with the use of the quasilinear approximation one can estimate the transport properties of the particles [4]. The regime where the
unstable waves reach large amplitudes (\(|\mathbf{b}|\geq|\mathbf{B_{0}}|\) ) is called "strong" turbulence if the turbulence is nearly isotropic [6; 7]. The non-linear evolution of the magnetic disturbances controls the energy transfer between the different scales and the particles. The most important characteristic of strong turbulence, which is not present in weak turbulence, is the intermittent appearance of CoSs. In this review, we focus on the 3D aspects of strong turbulence, and especially we address the question how CoSs are formed and evolve. The role CoSs play in the fast heating and acceleration of particles during explosive phenomena in astrophysical and laboratory plasmas is currently an open problem and cannot be addressed properly before having a good understanding of the statistical properties of the multi-scale evolution of CoSs [8].
#### Intermittency and coherent structures
The magnetic energy and its dissipation in strongly turbulent systems is concentrated into intermittently appearing and disappearing CoSs. However, it is not well understood how CoSs are formed and distributed inside a turbulent volume. Also, the CoSs play a crucial role in how the dissipated energy is partitioned, and they operate on different scales [9]. An important assumption in fluid turbulence is that the energy injected at large scales is transferred to smaller and smaller scales by non-linear processes, where it is dissipated when it reaches the kinetic scales [10]. This process is known as the turbulent **energy cascade**. If CoSs of all sizes are distributed inside a strongly turbulent volume, the dissipation of energy via heating and acceleration of particles is distributed over all scales, and not only the small (kinetic) scales. 3D numerical simulations generally show turbulence to be quite intermittent, and scaling models have been developed to incorporate this effect [11]. Particles interacting with a collection of CoSs (CSs, shocks, large amplitude magnetic perturbations) on all scales gain or lose energy as they travel through the turbulent volume before escaping [12]. It is not apparent how the energy is dissipated on the different scales, and a detailed study is needed to clarify this point.
As we are going to show in this review, turbulence can generate, among other types of CoSs, current sheets (CSs) at all scales, and it has been proposed that a fraction of the CSs undergo magnetic reconnection as part of their dissipation process [13]. It is important to stress that CSs are only one of the many types of CoSs appearing in strongly turbulent magnetised plasma. The other types include large amplitude magnetic disturbances, magnetic filaments, vortices, shocklets, and tangential discontinuities [14; 15] (the list is still not complete).
#### 2D vs 3D coherent structures and magnetic reconnection
Almost all studies on magnetic reconnection so far start with a current sheet already formed in the middle of a 2D periodic simulation box (see Fig. 1) [16; 17; 18; 19; 20]. The problem of CS formation in 3D magnetic topologies and the characteristics of convective flows that drive the reconnection process remained outside the scope of these studies for many years. The evolution of an isolated CS in the presence of weak turbulence in the incoming flows has been analysed recently [21; 22; 23; 24]. The characteristics of the convective flows, which, among others, act as the driver for the reconnection process and are responsible for the breaking of the initial CS and the formation of plasmoids or secondary CSs in 2D numerical simulations, have been analyzed extensively [18; 19; 25].
With the presence of a second CS inside the simulation box, the evolution of magnetic reconnection leads to a multi-island environment as well [26; 27; 28; 29; 30]. When starting a simulation with strong turbulence, the CSs appear intermittently at random places inside the 2D periodic simulation box, and their evolution depends on the complex drivers and the CoSs surrounding the reconnecting CS [31] (see Fig. 1), and finally, in 3D simulations, the formation of a CS driven by a strongly turbulent surrounding is shown in Fig. 1 [32]. The motion of CoSs also generates waves that are emitted into the ambient plasma [11]. Therefore, the intermittent formation of CSs in a 3D strongly turbulent plasma departs radically from the evolution of an isolated CS in 2D, as shown in Fig. 1.
In the early 80's, the link between reconnection and turbulence was established [27], and a few years later the reverse link, between turbulence and reconnection, was also analyzed [33]. Several recent reviews discuss how turbulence can become the host of reconnecting current sheets and how reconnecting current sheets can drive turbulence [34; 35; 36; 37]. The link between shocks and turbulent reconnection has also been analyzed [37].
Strong turbulence and CoSs in the solar atmosphere are driven by the convection zone, and the spontaneous formation of reconnecting and non-reconnecting CSs has been analyzed in several articles [38; 39; 40; 41; 42; 43].
Our review will focus on the formation and evolution of CoSs inside a 3D strongly turbulent magnetized plasma. In section II, we explore the way strong turbulence generates CoSs, in section III we analyse the fragmentation and filamentation of a 3D large scale isolated CS, which eventually leads to the formation of a cluster of CoSs and strong turbulence. In section IV, we discuss the presence of strong turbulence and CoSs upstream and downstream of a shock. In Section V, we explore how strong turbulence is driven by the convection zone, by emerging magnetic flux, or by unstable large scale magnetic structures in the solar atmosphere. In section VI, we use methods from complexity theory (e.g. Cellular Automata and Self Organized Criticality) to explore the formation and evolution of CoSs (mainly CSs) in a
strongly turbulent magnetized plasma. In section VII, we summarise the main points of this review.
## II Formation of coherent structures in strong turbulence
There are several ways to initiate strong turbulence in 2D and 3D numerical simulations [31; 33; 37; 44; 45; 46; 47; 48; 49; 50; 51; 52; 53]. In this section, we follow the approach used initially by Dmitruk et al. [54] and later by Arzner et al. [15], Zhdankin et al. [48] and Isliker et al. [51]. In these articles, the authors did not set up a specific geometry of a reconnection environment or prescribe a collection of waves [55] as turbulence model, but allow the MHD equations themselves to build naturally correlated field structures (which are turbulent, not random) and coherent regions of intense current densities (current filaments or CSs). In this approach, turbulence is freely evolving and ultimately decaying.
The 3D, resistive, compressible and normalized MHD equations used in Isliker, Vlahos, and Constantinescu [51] are
\[\partial_{t}\rho=-\nabla\cdot\mathbf{p} \tag{2}\]
\[\partial_{t}\mathbf{p}=-\nabla\cdot(\mathbf{p}\mathbf{u}-\mathbf{B}\mathbf{B} )-\nabla P-\nabla B^{2}/2 \tag{3}\]
\[\partial_{t}\mathbf{B}=-\nabla\times\mathbf{E} \tag{4}\]
\[\partial_{t}(S\rho)=-\nabla\cdot[S\rho\mathbf{u}] \tag{5}\]
with \(\rho\) the density, \(\mathbf{p}\) the momentum density, \(\mathbf{u}=\mathbf{p}/\rho\), \(P\) the thermal pressure, \(\mathbf{B}\) the magnetic field,
\[\mathbf{E}=-\mathbf{u}\times\mathbf{B}+\eta\mathbf{J} \tag{6}\]
the electric field, \(\mathbf{J}=\nabla\times\mathbf{B}\) the current density, \(\eta\) the resistivity, \(S=P/\rho^{\Gamma}\) the entropy, and \(\Gamma=5/3\) the adiabatic index.
In Isliker, Vlahos, and Constantinescu [51], the MHD equations are solved numerically in Cartesian coordinates with the pseudo-spectral method [56], combined with the strong-stability-preserving Runge Kutta scheme [57], and by applying periodic boundary conditions to a grid of size \(128\times 128\times 128\). A fluctuating magnetic field \(\mathbf{b}\) consists of a superposition of Alfven waves, with a Kolmogorov type spectrum in Fourier space, together with a constant background magnetic field \(B_{0}\) in the \(z\)-direction, so the magnetic field used as initial condition is \(\mathbf{B}=\mathbf{B}_{0}+\mathbf{b}(x,y,z,t)\). The mean value of the initial magnetic perturbation is \(<b>=0.6B_{0}\), its standard deviation is \(0.3B_{0}\), and the maximum equals \(2B_{0}\), so that indeed strong turbulence is considered. The initial velocity field is \(0\), and the initial pressure and energy are spatially constant.
The structure of the \(z\)-component of the current density \(J_{z}\) is shown in Fig. 2 (the threshold value is chosen as the value above which the frequency distribution of the current density values deviates from Gaussian statistics, forming an exponential tail).
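The thresholding idea can be sketched as follows on a synthetic 2D field (our own illustration, not the simulation data of [51]): \(J_{z}\) is computed from a periodic random-phase field with two embedded current sheets via spectral derivatives, and the threshold is taken where the observed tail of the \(|J_{z}|\) distribution departs from the Gaussian fitted to its core.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 128
kx = np.fft.fftfreq(n)[:, None]     # wavenumbers (grid spacing = 1)
ky = np.fft.fftfreq(n)[None, :]

def random_phase_field():
    """Periodic random-phase field with a steep power-law spectrum."""
    k2 = kx ** 2 + ky ** 2
    k2[0, 0] = 1.0                  # avoid division by zero at k = 0
    spec = k2 ** (-11.0 / 12.0) * np.exp(2j * np.pi * rng.random((n, n)))
    f = np.real(np.fft.ifft2(spec))
    return f / f.std()

x = np.arange(n)[:, None]
Bx = random_phase_field()
# Two oppositely directed tanh sheets keep By periodic and create
# localized, intense J_z layers (a crude stand-in for CSs).
By = random_phase_field() + 5 * (np.tanh((x - n / 4) / 2.0)
                                 - np.tanh((x - 3 * n / 4) / 2.0))

# J_z = dBy/dx - dBx/dy via spectral derivatives.
Jz = np.real(np.fft.ifft2(2j * np.pi * (kx * np.fft.fft2(By)
                                        - ky * np.fft.fft2(Bx))))

# Threshold: first |J_z| bin whose counts exceed the Gaussian fitted
# to the core of the distribution by a factor of 3.
sigma = Jz.std()
hist, edges = np.histogram(np.abs(Jz), bins=60, range=(0, 6 * sigma),
                           density=True)
centers = 0.5 * (edges[1:] + edges[:-1])
gauss = np.sqrt(2 / np.pi) / sigma * np.exp(-centers ** 2 / (2 * sigma ** 2))
tail = np.where(hist > 3 * gauss)[0]
threshold = centers[tail[0]] if tail.size else np.inf
print(f"sigma = {sigma:.3f}, threshold ~ {threshold:.3f}")
```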
Figure 1: (a) Most numerical studies start with a CS already present in the middle of a 2D periodic simulation box; the driver for the reconnecting CS is either a laminar or "weakly" turbulent flow. Reproduced with permission from Hesse and Cassak, Journal of Geophysical Research (Space Physics) **125**, e25935 (2020). Copyright 2020 Wiley. (b) The formation of CSs in 2D strong turbulence is driven by large scale magnetic fluctuations and/or other coherent structures intermittently formed in their vicinity. The CSs are never alone and isolated. The reconnecting sites are marked with crosses and represent a small fraction of the CSs formed. Reproduced with permission from Servidio et al., Physical Review Letters, **102**, 115003 (2009). Copyright 2009 APS. (c) A snapshot of the spatial distribution of the electric current density in 3D MHD strong turbulence. The formation of 3D CSs is a fundamental aspect of 3D strong turbulence. Reproduced with permission from Mininni et al., Journal of Plasma Physics, **73**, 377-401 (2007). Copyright 2007 Cambridge University Press.
For the CoSs to form, Isliker et al. [51] let the MHD equations evolve until the largest velocity component starts to exceed twice the Alfvén speed. The magnetic Reynolds number at final time is \(<|\mathbf{u}|>l/\eta=3.5\times 10^{3}\) (being actually rather constant over time), with \(l\approx 0.01\) a typical small scale eddy size, and the ratio of the energy carried by the magnetic perturbation to the kinetic energy is \((0.5<b^{2}>)/(0.5<\rho\mathbf{u}^{2}>)=1.4\), which is a clear indication that they were dealing with strong turbulence.
The overall picture demonstrates the spontaneous formation of CoSs, with the intermittent appearance and disappearance of CSs dominating the overall evolution of the strongly turbulent environment. This result resembles the 2D simulations of Biskamp and Welter [33] from about thirty years ago. The perpendicular component of the current fluctuates rapidly but lacks the coherent structures shown in \(J_{z}\). Similar results were obtained by Arzner et al. [45; 55], using strong Gaussian fields or a large eddy simulation scheme.
It is of foremost importance to find ways to identify 3D CoSs inside a turbulent plasma and measure their statistical characteristics. Several algorithms have been proposed in order to identify and characterize the geometrical structures of CoSs in numerical simulations and observations [46; 47; 48; 49; 58; 59; 60; 61; 62; 63; 64; 65; 66; 67]. Such studies proved that reconnecting current sheets are a common feature of not only the MHD models but also of the more complete fully kinetic models.
Zhdankin et al. [48] developed a framework for studying the statistical properties of CSs formed inside a magnetized plasma by using a 3D reduced MHD code. The distribution of the current fragmentation forming CSs in the \(x\)-\(y\)-plane is shown in Fig. 3. They were able to show that a large number of CSs do not contain reconnection sites, and likewise, many reconnection sites do not reside inside 3D CSs.
The most striking characteristic of the CSs formed spontaneously inside the strongly turbulent plasma is the probability distribution of the dissipated energy, \(\varepsilon=\int\eta j^{2}dV\), and of the characteristic lengths of the CSs, which are shown in Fig. 4, as reported by Zhdankin et al. [48]. The techniques applied by Zhdankin et al. [48] for the analysis of their numerical simulations were initially developed by Uritsky et al. [59]. Recently, a number of attempts were made to extend the search for 3D CoSs to satellite data [62; 64].
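A minimal sketch of such structure-by-structure statistics (assuming a current-density array and a threshold are already available; the Gaussian stand-in field below is illustrative only) labels connected super-threshold regions and computes the dissipated energy \(\varepsilon=\sum\eta j^{2}\,dV\) of each one.

```python
import numpy as np
from scipy import ndimage

# Threshold |J|, label connected super-threshold regions as individual
# CSs, and compute the dissipated energy of each one. `J` can be the 2D
# array from the previous snippet or any 2D/3D simulation output.
def structure_statistics(J, threshold, eta=1e-3, dV=1.0):
    mask = np.abs(J) > threshold
    labels, n_structs = ndimage.label(mask)
    idx = range(1, n_structs + 1)
    eps = ndimage.sum(eta * J ** 2 * dV, labels, index=idx)
    sizes = ndimage.sum(np.ones_like(J), labels, index=idx)
    return np.asarray(eps), np.asarray(sizes)

rng = np.random.default_rng(2)
J = rng.normal(size=(128, 128))          # stand-in field for the demo
eps, sizes = structure_statistics(J, threshold=2.5)
print(f"{eps.size} structures; median energy {np.median(eps):.3e}, "
      f"largest {eps.max():.3e}")
```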
The distribution in real space of CoSs in turbulence influences their dynamics and dissipation characteristics through a complex interrelationship. In Fig. 6, the probability distribution function (PDF) of the electric current density is shown, and the corresponding regions in real space are indicated [58]. The low values of the current density (region I) follow a supergaussian distribution and are related to the lanes between the islands. The intermediate values of the current density (region II) correspond to cores (filaments in 3D) and follow a subgaussian distribution, and finally the supergaussian tails of the distribution (region III) with the strongest current densities possibly represent the current sheets between interacting magnetic islands (filaments in 3D).
When confronted with a dataset that samples a turbulent plasma system spatially, an important task is to find the subset of the data that corresponds to the underlying CoSs. In recent years, a plethora of methods has been suggested for the identification of intermittent structures and discontinuities in the magnetic field. These include the Phase Coherence Index method [68] and the wavelet-based Local Intermittency Measure (LIM) [69]. A simple and well-studied method that has been effectively used in the past for the study of intermittent turbulence and the identification of CoSs, both in simulations [47] and observations [70, 71], is the Partial Variance of Increments (PVI) method. The advantage of the PVI method is that it provides an easy-to-implement tool that measures the sharpness of a signal relative to the neighborhood of a point. For a lag \(\tau\), the normalized PVI at time \(t\) is defined as
\[PVI(t,\tau)\ =\ \frac{|\Delta\mathbf{B}(t,\tau)|}{\sqrt{\langle|\Delta\mathbf{B} (t,\tau)|^{2}\rangle}}, \tag{7}\]
where \(|\Delta\mathbf{B}(t,\tau)|\,=\,|\mathbf{B}(t\,+\,\tau)\,-\,\mathbf{B}(t)|\) is the magnitude of the magnetic field vector increments, and \(\langle...\rangle\) denotes the average over a window that is a multiple of the estimated correlation time. PVI is a threshold method, so to proceed with the analysis, one imposes a threshold \(\theta\) on the PVI and selects portions in which \(PVI>\theta\). Greco et al. [72] have shown that increments with \(PVI>3\) lie in the "heavy tails" observed in the distribution of increments and can thus be associated with Non-Gaussian structures (see Fig. 6). By increasing the threshold value \(\theta\), one can thus identify the most intense magnetic field discontinuities like CSs and reconnection sites. Finally, note that the method is insensitive to the mechanism that generates the coherent structures and it is more general and could be applied to 3D structures as well with appropriate modifications. This means that the PVI can be implemented for the identification of any form of sharp gradients in the magnetic field. A more comprehensive
Figure 5: Volume rendering of the current density \(|J|\) in the entire domain at a stage when turbulence is fully developed. A myriad of current sheets is evident in the plane perpendicular to the mean magnetic field \(B_{z0}\) (for details of the simulation see Dong et al. [66]). Reproduced with permission from Dong et al., Sience Advances, **8**, 7627 (2022), Copyright 2022 AAAS.
Figure 6: The PDF of the electric current density from 2D MHD simulations. Real space locations belonging to the regimes I, II and III are shown in the other three panels. Reproduced with permission from Greco et al., The Astrophysical Journal Letters, **691**, L111 (2009), Copyright 2009 AAS.
review of PVI, as well as a comparison with the aforementioned methods, appropriate for identifying discontinuities, can be found in Greco et al. [72].
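To make the procedure concrete, a minimal sketch of a PVI computation for a vector time series is given below; the synthetic signal, lag, and averaging window are hypothetical choices, and in a real application the window should span several correlation times of the measured data.

```python
import numpy as np

def pvi(B, lag, window):
    """Normalized Partial Variance of Increments, Eq. (7).

    B      : (N, 3) array, magnetic field vector time series
    lag    : increment lag tau, in samples
    window : averaging window in samples (several correlation times
             in a real application)
    """
    dB = np.linalg.norm(B[lag:] - B[:-lag], axis=1)   # |Delta B(t, tau)|
    # running mean of |Delta B|^2 over the window
    kernel = np.ones(window) / window
    mean_sq = np.convolve(dB**2, kernel, mode="same")
    return dB / np.sqrt(mean_sq)

# Synthetic example: a smooth random background with one sharp jump
rng = np.random.default_rng(0)
B = rng.normal(size=(10_000, 3)).cumsum(axis=0) * 0.01  # smooth background
B[5000:] += 2.0                                         # embedded discontinuity
series = pvi(B, lag=10, window=1000)
events = np.where(series > 3.0)[0]   # candidate non-Gaussian structures
print(f"{len(events)} samples exceed the PVI > 3 threshold")
```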
The histogram of the frequency of occurrence of PVI values in the solar wind data suggests that the most probable value of PVI is about 0.5 (see Fig. 7), where the majority of nonintermittent events take place [71]. A large number of non-Gaussian (\(3<PVI<6\)) events appear in the histogram in Fig. 7. The percentage of possibly reconnecting events (\(PVI>8\)) drops dramatically [65, 71], suggesting, as we have stressed earlier [3, 48], that only a small fraction of the detected CSs are actually reconnecting.
The techniques to explore the formation and evolution of CoSs at different scales in strongly turbulent magnetized plasma are in their infancy, and novel techniques are needed to understand the statistical properties of CoSs. We just mention that Jiang et al. [73] used convolutional neural networks to model strong hydrodynamic turbulence and to follow the formation and evolution of CoSs.
The main focus in most of the studies reported so far was on the presence of intense CSs and especially reconnecting CSs inside the turbulent medium. Only recently has the analysis been extended to other types of CoSs [63], e.g. vortex-like structures, wave packets, and Alfvenic fluctuations.
Karimabadi et al. [37] stress the fact that the motion of CoSs generates waves that are emitted into the ambient plasma in the form of highly oblique compressional Alfven modes, as well as large amplitude magnetic disturbances. This indicates that strong turbulence will in general consist of CoSs and waves, therefore "weak" and strong turbulence co-exist in the multiscale evolution of a strongly turbulent plasma. Groselj et al. [74] explore the very important question of the relative importance of coherent structures and waves in strongly turbulent plasma, utilizing high-resolution observational and simulation data. They investigate the nature of waves and structures emerging in a weakly collisional, turbulent kinetic plasma. Their observational results are based on in situ solar wind measurements from the Cluster and the MMS spacecraft, and the simulation results are obtained from an externally driven, three-dimensional fully kinetic simulation. Using a set of novel diagnostics described in their article in detail, they show that both the large-amplitude structures and the lower-amplitude background fluctuations preserve linear features of kinetic Alfven waves to order unity. This quantitative evidence suggests that kinetic turbulence cannot be described as a mixture of mutually exclusive waves and structures, but may instead be pictured as an ensemble of localized, anisotropic wave packets or "eddies" of varying amplitudes, which preserve certain linear wave properties during their nonlinear evolution. This important finding, and its role in the energy dissipation in strong turbulence as well as in particle heating, has not been properly evaluated until now.
Large scale magnetic disturbances and CoSs in fully developed turbulence exhibit a monofractal or multi-fractal structure, both in space and astrophysical plasmas [75, 76, 77, 78, 79, 80]. This information is very important for analysing the interaction of particles with CoSs [81].
As we have already stressed, CSs are not the only coherent structures that appear in MHD turbulence. One also finds current cores, vorticity concentrations, and density structures [82; 83; 84]. An informative example of neighboring coherent structures is found in the phenomenon of magnetic reconnection. It involves the formation of current sheets or filaments that become the organizational focus of the reconnection process as a whole. This subtle interplay between coherent structures of different types (magnetic filaments, vortices, small-scale shocks, large amplitude magnetic disturbances) is indicative of the physics that controls the lower end of the MHD/fluid cascade. These CoSs are likely the dominant loci of dissipative processes, not only in fluid models but also in kinetic approaches to plasmas [3].
Concerning laboratory plasmas in tokamak devices, we note that turbulence appears predominantly at different spatial and temporal scales throughout the plasma [85; 86; 87]. Microscopic turbulence dominates in the plasma core, and it limits the steepness of the plasma temperature and density profiles [88]. In the plasma edge region, microscopic and fluid turbulence can also be present in a dominant way when the plasma is in the so-called low confinement regime (L-mode). On the other hand, strong suppression of turbulence leads to the formation of a transport barrier and gives rise to the high confinement regime (H-mode), which exhibits a pedestal (extending over the edge transport barrier) that increases the core pressure [89; 90].
Edge localized modes (ELMs) are violent and transient MHD instabilities that repeatedly take place in tokamak H-mode plasmas and are caused by large current densities and pressure gradients in the pedestal region. ELMs lead to a repeating loss of the plasma confined in the edge region, and particles and heat are lost to the wall on a time scale of \(\lesssim 1\,\mathrm{ms}\)[91; 92; 93]. During ELMs, filamentary eruptions are observed, as well as magnetic field stochastization [94]. Magnetic perturbations during ELMs
Figure 7: Histograms (frequency of occurrence, or number of counts) of PVI values for different lags \(\tau\). Note the elevated likelihood of large PVI values at shorter lags, which is indicative of enhanced small-scale intermittency, typical of non-Gaussian processes and turbulence. Reproduced with permission from Chhiber et al., The Astrophysical Journal Supplement, **246**, 31 (2020), Copyright 2020 AAS.
are linked to reconnection, since they are the result of resistive peeling-ballooning modes, which trigger magnetic reconnection at the resonant surfaces.
To some degree, ELMs can be viewed as the analog of coronal emerging flux scenarios in laboratory plasmas, and it is worthwhile to briefly discuss the commonalities and differences between laboratory and astrophysical plasma eruptions, using the example of ELMs in tokamaks and solar flares.
Both solar coronal and tokamak plasmas are characterized by an electrical resistivity that is low in the sense that the Lundquist number is much larger than unity if the length scale considered is the macroscopic system size [95]. The macroscopic lengths are also much larger than the kinetic scales (electron and ion Larmor radii) in both systems. The strong toroidal field in tokamaks ensures that the plasma beta \(\beta=p/(B^{2}/2\mu_{0})\) (with \(p\) the plasma pressure and \(B\) the magnetic field) is smaller than unity, as it also holds in the flaring corona. Of course, there are enormous differences in absolute system size and duration of eruptive events, both of which are larger by a factor of about \(10^{7}\) in flares than in ELMs [95]. Yet, when considering the kinetic scales, which definitely are relevant for particle acceleration and heating, the respective plasma parameters have ratios of solar coronal values to tokamak values rather close to unity [95, 96].
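As a simple numerical check of the statement that \(\beta<1\) in both systems, the sketch below evaluates \(\beta=p/(B^{2}/2\mu_{0})\) with \(p=2nk_{B}T\); the coronal and tokamak parameters used are assumed, order-of-magnitude textbook values, not taken from the cited articles.

```python
import numpy as np

mu0 = 4e-7 * np.pi   # vacuum permeability [H/m]
kB = 1.380649e-23    # Boltzmann constant [J/K]

def beta(n, T, B):
    """Plasma beta = p / (B^2 / 2 mu0), with p = 2 n kB T (electrons + ions)."""
    return 2.0 * n * kB * T / (B**2 / (2.0 * mu0))

# Representative (assumed) parameters: density [m^-3], temperature [K], field [T]
print(f"flaring corona: beta = {beta(n=1e15, T=1e7, B=0.01):.4f}")  # B = 100 G
print(f"tokamak core:   beta = {beta(n=1e20, T=1e8, B=5.0):.4f}")
```

Both values come out well below unity, consistent with the discussion above.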
There is an important topological difference between the two systems. In tokamak plasmas, the magnetic field lines in almost the entire plasma form a set of closed, nested, toroidal surfaces, whereas in flare plasmas, the coronal magnetic field is anchored in the much denser and cooler convection zone. As McClements [96] notes, there is always an outermost surface of closed magnetic flux in tokamaks, beyond which there is a relatively thin scrape-off layer (SOL) of plasma exhausted from the closed flux region. In most currently operating tokamaks, the magnetic field lines in the SOL are connected to a solid surface at the top or bottom of the vacuum vessel, called the divertor. The SOL magnetic field topology thus somewhat resembles that of a flaring coronal loop, with the divertor playing the role of the convection zone. We note, though, that this is an analogy rather than a one-to-one correspondence, since the divertor is a passive device and does not drive the magnetic field in any way, whereas the convection zone is the main driver and ultimate energy source for coronal magnetic activity, such as flares.
A different, but still qualitative, view is to relate the solar convection zone with the well-confined region within the last closed magnetic surface of a tokamak, and to view the SOL as the analogue of the solar corona, into which ELMs break out from the well-confined region and cause eruptive events, very much like magnetic flux that emerges from the solar convection zone into the solar corona and leads to destabilization and eventually to explosive events, such as flares. In this view, turbulence in the well-confined region of a tokamak and in the solar convection zone have in common that they drive the eruptive events, ELMs and flares, respectively.
Isliker et al. [97] presented test-particle simulations of electrons during a nonlinear MHD simulation of an ELM. Their aim was to explore the effect of an eruptive plasma filament on the particle dynamics. They found that the electrons are moderately heated and accelerated during the ELM, on a fast time scale of the order of 0.5 ms. Also, the distribution of the kinetic energy exhibits a non-thermal tail, which is of power-law shape, reaching up to 90 keV. The acceleration exclusively takes place in the direction parallel to the magnetic field, and they showed that the parallel electric field is the sole cause of the particle acceleration. Most particles that escape from the system leave at one strike-line in the bottom region of the device (the outer divertor leg). The escaping high energy electrons in the tail of the energy distribution have characteristics of runaway electrons. The mean square displacement in energy space indicates that transport is super-diffusive, and, when viewing the acceleration process as a random walk, they found that the tails of the distributions of energy increments are of exponential shape. They also noted that diffusive (stochastic) and convective (systematic) transport in energy space are of equal importance.
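The super-diffusive character reported there can be diagnosed by fitting the scaling \(\langle(W(t)-W(0))^{2}\rangle\propto t^{a}\), with \(a>1\) indicating super-diffusion; the sketch below does this for a hypothetical ensemble of energy random walks with exponential, systematically biased increments, standing in for test-particle output.

```python
import numpy as np

# Hypothetical ensemble of particle kinetic energies W(t) [arbitrary units]:
# random walks with exponential increments and a small systematic gain,
# a stand-in for test-particle simulation data.
rng = np.random.default_rng(2)
n_particles, n_steps = 2000, 500
incr = rng.exponential(scale=1.0, size=(n_particles, n_steps)) - 0.9
W = np.cumsum(incr, axis=1)

msd = np.mean((W - W[:, :1])**2, axis=0)   # <(W(t) - W(0))^2> over the ensemble
t = np.arange(1, n_steps + 1)
a, _ = np.polyfit(np.log(t[10:]), np.log(msd[10:]), 1)  # slope on log-log axes
print(f"transport exponent a = {a:.2f} (a = 1: diffusive, a > 1: super-diffusive)")
```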
Isliker et al. also analyzed the MHD simulations per se (see Fig. 8), with the main finding that the histograms of the parallel electric field in the edge region adopt power-law shapes. They concluded that this clearly non-Gaussian statistics is one of the main reasons for the moderately anomalous particle transport in energy space that they find.
Solar wind [98], the Earth's magnetosheath [99, 100], astrophysical jets [52, 101, 102], and edge localised turbulence in tokamaks [87] are only a few space and laboratory examples of CoSs formation in strongly turbulent plasmas.
Figure 8: Iso-contours of the parallel electric field \(E_{||}\) (positive at \(+10\) V/m in red, and negative at \(-10\) V/m in green), for four different times during the ELM, together with the separatrix (orange) and the plasma-boundary (violet) (both surfaces are half cut out). Reproduced with permission from Isliker et al., Physics of Plasmas, **29**, 124 (2022), Copyright 2022 AIP.
## III Formation of coherent structures from the fragmentation of large scale current sheets
The formation and fragmentation of large scale CSs in many space and astrophysical settings are worth studying in detail, as we will see at the end of this section. The fragmentation of an isolated large scale CS in 2D magnetic topologies, with the formation of plasmoids during the linear phase of the plasmoid instability, has been analysed extensively [103, 104]. Our main interest here, though, is in the 3D evolution and fragmentation of CSs in strongly turbulent environments.
Matthaeus and Lamkin [27] were the first to move away from the laminar reconnection flows and to explore the evolution of a 2D periodic CS in the presence of low level broadband fluctuations. It is worthwhile quoting the main findings from the abstract of their seminal article with the title _"Turbulent magnetic reconnection"_: _"Nonlinear features of the evolution, appropriately described as turbulence, are seen early in the solutions and persist throughout the runs. Small scale, unsteady coherent electric current and vorticity structures develop in the reconnection zone, resulting in enhanced viscous and resistive dissipation. Unsteady and often spatially asymmetric fluid flow develops. Large scale magnetic islands, produced by reconnection activity, undergo internal pulsations. Small scale magnetic islands, or bubbles, develop near the reconnection zone, producing multiple X points. Large amplitude electric field fluctuations, often several times larger than the reconnection electric field, are produced by large island pulsations and by motion of magnetic bubbles. Spectral analysis of the fluctuations shows development of broad band excitations, reminiscent of inertial and dissipation range spectra in homogeneous turbulence. Two dimensional spectra indicate that the turbulence is broadband in both spatial directions. It is suggested that the turbulence that develops from the randomly perturbed sheet bears a strong resemblance to homogeneous magnetohydrodynamic turbulence, and that analytical theories of reconnection must incorporate these effects."_
Lazarian and Vishniac [21] returned to the formation of a CS in the middle of weakly turbulent flows (see their cartoon in Fig. 9). They generalised the results of Matthaeus and Lamkin [27], using a 3D magnetic topology, and stressed the importance of stochastic magnetic field wandering, which causes fragmentation of the initial large scale astrophysical current sheet. The reconnection in small fragments of the magnetic field determines the local reconnection rate and the local strength of the electric field. The global reconnection rate is substantially larger, as many independent fragments reconnect simultaneously. Lazarian and Vishniac also obtained quantitative predictions for the reconnection rate (see details in Lazarian et al. [105]).
Onofri et al. [106, 107] numerically solved the incompressible, dissipative magnetohydrodynamics (MHD) equations in dimensionless units in a three-dimensional Cartesian domain, with kinetic and magnetic Reynolds numbers \(R_{v}=5000\) and \(R_{M}=5000\). They set up the initial condition in such a way as to have a plasma that is at rest, in the frame of reference of the computational domain, permeated by a background magnetic field sheared along the \(\hat{x}\) direction, with a current sheet in the middle of the simulation domain. They perturbed these equilibrium fields with three-dimensional, divergenceless, large amplitude fluctuations. (For details about this MHD simulation see Onofri et al. [106].)
The nonlinear evolution of the system is characterized by the formation of small scale structures, especially on the lateral regions of the computational domain, and coalescence of current filaments in the center. This behavior is reflected in the three-dimensional structure of the electric field, which shows that the initial equilibrium is destroyed by the formation of current filaments. After about \(t=50\tau_{A}\) (where \(\tau_{A}\) is the Alfven time), the current sheet starts to be fragmented, as can be seen in Fig. 10, where we show the configuration of the electric field \(\mathbf{E}=\eta\mathbf{J}-\mathbf{v}\times\mathbf{B}\), calculated from the MHD simulation data. The iso-surfaces of the electric field in Figure 10 are shown for different times, and they are calculated for two different threshold values of the electric field: the red surfaces represent higher values and the blue surfaces represent lower values. The structure of the electric field is characterized by small regions of space where the field is stronger, surrounded by larger volumes occupied by lower electric field values. At later times, the fragmentation is more evident, and at \(t=400\tau_{A}\), the initial current sheet has been completely destroyed and the electric field is highly fragmented. To give a measure of the fragmentation of the electric field, Onofri et al. [107] calculated the fractal dimensions of the fields shown in Fig. 10, using the box counting definition of fractal dimension. The applied thresholds are the same as those that have been used to draw the isosurfaces shown in Fig. 10. For the fields represented by the blue surfaces in Fig. 10, they found fractal dimensions \(d=2\), \(d=2.5\), and \(d=2.7\) at
Figure 9: The Lazarian and Vishniac model [21] for a 3D weakly turbulent reconnection driver. The width of the outflow can be compatible with the large-scale characteristics of the turbulence, due to the stochastic wandering of field lines and the fragmentation of the initial current sheet. Reproduced with permission from Lazarian and Vishniac, The Astrophysical Journal, **517**, 700 (1999), Copyright 1999 AAS.
\(t=50\tau_{A}\), \(t=200\tau_{A}\), and \(t=400\tau_{A}\), respectively. For the more intense electric fields (red surfaces in Fig. 10), the fractal dimensions are \(d=1.8\), \(d=2\), and \(d=2.4\) at \(t=50\tau_{A}\), \(t=200\tau_{A}\), and \(t=400\tau_{A}\), respectively. These fractal dimensions can be considered as a way to quantify the degree of fragmentation of the electric field and to characterize the fraction of space that it fills as it evolves in time.
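A minimal sketch of such a box-counting estimate is given below, for a 3D binary mask of above-threshold grid points; the test field (a thin sheet in a \(64^3\) volume) is a hypothetical stand-in and should recover a dimension close to 2.

```python
import numpy as np

def box_counting_dimension(mask, scales=(2, 4, 8, 16, 32)):
    """Fractal dimension of a 3D binary mask via box counting.

    For each box size s, count boxes containing at least one
    above-threshold point; the dimension is the slope of
    log N(s) versus log(1/s).
    """
    counts = []
    n = mask.shape[0]
    for s in scales:
        # reshape into (n/s, s, n/s, s, n/s, s) blocks and test occupancy
        view = mask[:n//s*s, :n//s*s, :n//s*s].reshape(
            n//s, s, n//s, s, n//s, s)
        occupied = view.any(axis=(1, 3, 5))
        counts.append(occupied.sum())
    slope, _ = np.polyfit(np.log(1.0 / np.array(scales)), np.log(counts), 1)
    return slope

# Hypothetical test field: a thin sheet inside a 64^3 volume (expect D_F ~ 2)
field = np.zeros((64, 64, 64))
field[:, :, 32] = 1.0
print(f"D_F = {box_counting_dimension(field > 0.5):.2f}")
```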
Onofri et al. [107] calculate the magnitude \(|\mathbf{E}|\) of the electric field at each gridpoint of the simulation domain and construct the distribution function of these quantities, which is shown in Fig. 11 for \(t=50\tau_{A}\). They separately plot the resistive and the convective components of the electric field. The resistive part is less intense than the convective part, but it is much more important in accelerating particles [107].
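The decomposition of \(\mathbf{E}=\eta\mathbf{J}-\mathbf{v}\times\mathbf{B}\) into its resistive and convective parts on gridded MHD output can be sketched as below; the field arrays and the uniform resistivity \(\eta\) are hypothetical stand-ins for simulation data, not the fields of the cited runs.

```python
import numpy as np

def electric_field_components(v, B, J, eta=1e-4):
    """Resistive and convective parts of E = eta*J - v x B on a grid.

    v, B, J : arrays of shape (3, nx, ny, nz) in code units
    eta     : assumed uniform resistivity (code units)
    Returns |eta*J| and |v x B| at every grid point.
    """
    E_res = eta * J
    E_conv = -np.cross(v, B, axis=0)
    return (np.linalg.norm(E_res, axis=0),
            np.linalg.norm(E_conv, axis=0))

# Hypothetical stand-in fields on a 32^3 grid
rng = np.random.default_rng(3)
v, B, J = (rng.normal(size=(3, 32, 32, 32)) for _ in range(3))
res_mag, conv_mag = electric_field_components(v, B, J)
# Distribution functions of the two components (cf. Fig. 11)
h_res, e_res = np.histogram(res_mag, bins=100, density=True)
h_conv, e_conv = np.histogram(conv_mag, bins=100, density=True)
print(f"mean |eta J| = {res_mag.mean():.2e}, mean |v x B| = {conv_mag.mean():.2e}")
```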
The fragmentation of a large scale CS was analysed by several authors [108, 109, 110, 111, 112, 113]. Dahlin et al. [112], using kinetic simulations of a 3D collisionless plasma with a guide field, analyze the fragmentation of a current sheet and the formation of small scale filaments with strong electric fields. A different mechanism to achieve the fragmentation of a large scale CS is the presence of other CoSs in the surroundings of the CS, e.g. multiple reconnection sites [114].
Greco et al. [115], using Cluster high-resolution data, investigated the structure of thin current sheets that populate the turbulent solar wind. They concluded that in the solar wind, the turbulent cascade naturally forms current sheets at several scales, down to the proton skin depth scale. When approaching smaller scales, a current fragmentation process arises.
Beg et al. [116] analyzed the formation and evolution of a large-scale CS with a 3D MHD simulation of two merging flux ropes. They discovered that these systems exhibit self-generated and self-sustaining turbulent reconnection, which is fully 3D and fast.
Heyvaerts et al. [117] suggested that magnetic flux emerging from below the photosphere and interacting with overlying magnetic fields forms a quasi-static large scale CS that may trigger a solar flare. Several numerical studies further explore the idea of the formation of a large
Figure 11: Distribution function of the resistive (solid line) and convective (dashed line) electric field at \(t=50\tau_{A}\). The vertical line represents the value of the Dreicer field in the solar corona. Reproduced with permission from Onofri et al., Physical Review Letters, **96**, 151102 (2006), Copyright 2006 APS.
Figure 12: Side-view (panel (a)) and top view (panel (b)) of the 3D field line topology and the velocity during the blowout jet emission. The direction of the field lines is shown by black arrows. Reproduced with permission from Archontis et al., The Astrophysical Journal Letters, **769**, L21 (2013), Copyright 2013 AAS.
Figure 10: Electric field iso-surfaces at \(t=50\tau_{A}\), \(t=200\tau_{A}\) and \(t=400\tau_{A}\). Reproduced with permission from Onofri et al., Physical Review Letters, **96**, 151102 (2006), Copyright 2006 APS.
scale CS from emerging magnetic flux [118, 119, 120, 121, 122, 123, 124, 125, 126].
Archontis and Hood [127] used a 3D resistive MHD code to follow the emergence of new magnetic flux into the corona with a pre-existing magnetic field. The formation and subsequent fragmentation of the large-scale reconnecting current sheets are evident in their numerical study (see Fig. 12).
Isliker et al. [80] used the 3D MHD simulations of Archontis & Hood [127], but focused on the statistical properties of the electric fields in the vicinity of the fragmented large-scale current sheet. The appearance of current fragmentation is apparent in the snapshots following the formation of the jet (see Fig. 13). The parallel electric field shows fragmented structures and has preferred regions of positive and negative sign. The fragmentation needs to be quantified with the use of cluster analysis and fractal dimension estimates.
Fig. 14 shows the histogram of the magnitude of the total electric field \(|\mathbf{E}|\), the parallel \(|E_{||}|\), and the perpendicular \(E_{\perp}\) component of the electric field, determined from all coronal grid points. They all show a power-law tail with a rollover at high values. The power-law index of the fit is -1.8 for the parallel electric field, and -2.4 for the total and the perpendicular electric field, though the total and perpendicular electric fields attain larger values. In any case, the parallel electric field is two orders of magnitude smaller than the total electric field, which thus basically coincides with the perpendicular electric field. Also, the parallel electric field shows a much more extended power-law tail than the perpendicular and the total one. Isliker et al. [80] conclude that power-law-shaped distributions are inherent to the electric field and its components. Similar results have also been found in MHD simulations of a decaying current sheet, as we reported earlier in this section (see Fig. 11).
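Power-law indices of this kind can be extracted by a linear fit to the logarithmically binned histogram of the field magnitudes above a lower cutoff, as sketched below; the synthetic Pareto sample and the cutoff are hypothetical.

```python
import numpy as np

def tail_index(values, cutoff):
    """Power-law index of a distribution tail from a log-log fit.

    values : 1D sample of field magnitudes (e.g. |E_par| at grid points)
    cutoff : lower bound of the fitted tail
    """
    tail = values[values > cutoff]
    bins = np.logspace(np.log10(cutoff), np.log10(tail.max()), 30)
    hist, edges = np.histogram(tail, bins=bins, density=True)
    centers = np.sqrt(edges[:-1] * edges[1:])   # geometric bin centers
    good = hist > 0
    slope, _ = np.polyfit(np.log(centers[good]), np.log(hist[good]), 1)
    return slope

# Hypothetical sample drawn from a Pareto distribution with pdf ~ x^(-1.8)
rng = np.random.default_rng(4)
sample = rng.pareto(0.8, size=200_000) + 1.0
print(f"fitted index = {tail_index(sample, cutoff=2.0):.2f}")  # approx -1.8
```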
Isliker et al. [80] also investigate the spatial structure of the parallel electric field, applying cluster analysis and calculating its fractal dimension. They consider the magnitude of the parallel electric field \(|E_{\parallel}|\) at all coronal grid points, and apply a threshold below which \(|E_{\parallel}|\) is set to zero. For the threshold, they use the same value of 0.07 as for the isocontours of \(E_{\parallel}\) in Fig. 13. They define a cluster as a set of grid points with (a) an above-threshold value of \(|E_{\parallel}|\) at all the grid points belonging to the cluster, and (b) the cluster's grid points are connected through their nearest neighborhoods in 3D Cartesian coordinates. It follows that a cluster is surrounded by grid points with below-threshold \(|E_{\parallel}|\). They found that there are 162 clusters, and 2 of them are very dominant in spatial extent, one corresponding to the positive and one to the negative extended parallel electric field region in Fig. 13. Using the same data as employed in the cluster analysis (the magnitude of the parallel electric field \(|E_{\parallel}|\) at all the coronal grid points, set to zero when below the threshold value of 0.07), Isliker et al. [80] apply a standard 3D box-counting method in order to determine the fractal dimension \(D_{F}\) of the region with above-threshold parallel electric field. Fig. 15 shows the scaling of the box counts with the box scale, where there is a clear power-law scaling in the entire range, whose index, per definition of the box-counting method, equals the fractal dimension, so they find \(D_{F}=1.7\) for snapshot 30 (standard jet) and \(D_{F}=1.9\) for snapshot 53 (blowout jet).
The regions of high parallel electric field can thus be interpreted as thinned out 2D sheets, as it also corresponds to the visual impression that is given by Fig. 13. Also, the "filling-factor" (fractal dimension) is higher at the blowout jet compared to the time when the standard jet is emitted. After all, the spatial structure of the regions of strong parallel electric field can be characterized as fragmented and fractal, with the various cluster-size distributions exhibiting double power-law scalings.
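The cluster definition used here, namely connected sets of above-threshold grid points with nearest-neighbor connectivity, corresponds to standard connected-component labeling; a minimal sketch with a hypothetical \(|E_{\parallel}|\) field follows, reusing the threshold value 0.07 quoted above.

```python
import numpy as np
from scipy import ndimage

# Hypothetical |E_par| field on a 64^3 grid with two embedded strong sheets
rng = np.random.default_rng(5)
E_par = np.abs(rng.normal(scale=0.01, size=(64, 64, 64)))
E_par[10:40, 10:40, 20] = 0.2    # strong-field sheet 1
E_par[45:60, 5:30, 50] = 0.15    # strong-field sheet 2

threshold = 0.07
mask = E_par > threshold
# label connected components (nearest-neighbour, i.e. face, connectivity in 3D)
labels, n_clusters = ndimage.label(mask)
sizes = ndimage.sum(mask, labels, index=range(1, n_clusters + 1))
print(f"{n_clusters} clusters; largest spans {int(sizes.max())} grid points")
```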
The formation of large scale 3D CSs in the middle of converging strongly turbulent flows is present in several
Figure 14: MHD simulations, coronal part only, showing the distribution of the electric field from all coronal grid points, for the magnitude of the total electric field, the perpendicular component (they practically coincide), and the parallel component, respectively. The electric field is in units [V m\({}^{-1}\)], and the mean Dreicer field is \(4.6\times 10^{-4}\) V m\({}^{-1}\). Reproduced with permission from Isliker et al., The Astrophysical Journal, **882**, 57 (2019), Copyright 2019 AAS.
Figure 13: Results from the MHD simulations: a close-up of the coronal part. The left panel shows a visualization of selected magnetic fieldlines (blue) together with an isocontour plot of the total electric field (orange 3D isosurfaces). The vertically oriented isosurface (orange) is aligned with the direction of the reconnected fieldlines and it indicates the emission of the standard jet. The \(x\)-\(y\) plane at the bottom shows the photospheric component \(B_{z}\) as a 2D filled contour plot. The electric field is in physical units [V m\({}^{-1}\)]. In the right panel, isocontours of the parallel electric field are shown, indicating the fragmentation of the current sheet at the interface between the interacting magnetic fields. Reproduced with permission from Isliker et al., The Astrophysical Journal, **882**, 57 (2019), Copyright 2019 AAS.
space and astrophysical phenomena, e.g. the (turbulent) magnetotail [128] driven by the turbulent solar wind, the (turbulent) magnetopause [129] driven by the turbulent magnetosheath, eruptive solar flares (resulting from emerging magnetic flux [127, 130], or from the eruption of large scale magnetic topologies [131, 132, 133]), and the outer heliosphere [114]. In all these cases, the fragmentation of the CS and its replacement by a much larger strongly turbulent region has led observers to report as a CS the relatively large scale 3D region in which the fragmentation of the CS was initiated by CoSs embedded in the 3D magnetic topology of randomly wandering magnetic field lines [21].
## IV Formation of coherent structures in the vicinity of large scale shocks
The theoretical analysis of large scale 2D quasi-static shock waves followed, for many years, the same steps as deployed in the large scale Sweet-Parker model for CSs [16]. A shock discontinuity was placed by hand in the middle of the simulation box, and weak turbulence was added upstream and downstream, to play the role of converging passive scatterers, since the energization of particles is due to their crossing of the shock discontinuity [134] (see Fig. 16). The formation of the shock and the way the specific normal modes were excited were not discussed in the initial model proposed by Fermi in 1954 [135], whose approach was used as the basis for the subsequent theoretical studies [136]. In this simplistic scenario, the low amplitude wave modes are carefully hand-picked so as to scatter the particles efficiently. The particles, in order to interact with the waves upstream and downstream, must have an initial velocity higher than the phase velocity of the waves. This is the well known injection problem, for which a pre-acceleration mechanism is necessary [136].
Finally, weak turbulence upstream of a shock cannot efficiently confine the accelerated particles in the vicinity of the shock. It was therefore proposed that streaming instabilities, driven by energized particles upstream of a shock, can be the solution for the trapping of particles in the vicinity of a shock [138, 139].
Obviously, the formation and evolution of large scale shock waves in space and astrophysical explosions encounter pre-existing or self-generated strong turbulence (e.g. the Earth's and planetary bow shocks in the solar wind, the shocks formed by Coronal Mass Ejections (CMEs) in the heliosphere, or the supernova (SN) explosions in astrophysical plasmas). The strongly turbulent plasma upstream of a large scale shock carries a variety of CoSs, which interact with the shock and play a crucial role in its evolution [140]. When CoSs cross a shock discontinuity, they are amplified dramatically (see [141, 142, 143, 144, 145, 146, 147] for the magnetosheath, and the references therein), and the shock surface shows large scale ripples [148].
The Earth's bow shock is the most studied environment of a collisionless large scale shock due to the availability of _in situ_ data. The turbulent solar wind upstream carries several types of CoSs (CSs, Cavitons, Rotational Discontinues (RDs), Short Large Amplitude Magnetic Structures (SLAMs), etc., see Fig. 17), and the instabilities excited by the solar wind ions, reflected at the shock front, re-inforce the unstable CoSs before they are convected downstream [149, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151].
Karimabadi et al. [37] use a global hybrid simulation (electrons as fluid, kinetic ions) to explore the formation of CoSs in the vicinity of the bow shock. They analyze the link of large scale shock waves, turbulence upstream and downstream, and CoSs. The solar wind turbulence upstream can easily reach the strongly turbulent level (\(\delta B/B\approx 1\)) and drive turbulent reconnection and the formation of large scale low frequency electromagnetic waves that are compressed and amplified as they cross the shock.
Matsumoto et al. [152] presented a supercomputer particle in cell (PIC) simulation, showing that strong collisionless shocks drive CoSs in the transition region (see
Figure 16: Sample orbit for a quasi-perpendicular shock (\(\theta=60^{\circ}\)). Left: Energy vs. \(X\) in the shock frame, where \(X\) is the distance from the shock (\(X<0\) upstream; \(X>0\) downstream). Right: Evolution of the energy (upper panel) and of the \(X\)-component of the particle's position (lower panel) as a function of time. See details in the article by Decker and Vlahos [137]. Reproduced with permission from Decker and Vlahos, The Astrophysical Journal, **306**, 710 (1986), Copyright 1986 AAS.
Figure 15: Fractal dimension of the parallel electric field: scaling of the 3D box-counting algorithm. Reproduced with permission from Isliker et al., The Astrophysical Journal, **882**, 57 (2019), Copyright 2019 AAS.
Fig. 18a). They use high computational capacity to follow the evolution of a collisionless shock moving with Mach number \(M_{A}\approx 40\). In Fig. 18a, between the upstream region \(x>59\) and the downstream region \(x<39\), a transition region is characterized by tangled magnetic field lines. This region is in the strongly turbulent state, and the shock front cannot be visually identified. The fragmentation of the shock front drastically modifies the assumptions behind the analytical work and the simplistic models for diffusive shock acceleration [153, 154].
It is also of interest to note that filamentary structures are created, as can be seen from the density profile in the transition region in Fig. 18a. These filaments are associated with folded magnetic field lines, and the enhanced density regions contain CoSs. Matsumoto et al. [152] also observe that magnetic reconnection takes place at multiple sites, and, as a result, magnetic filaments are formed along the initial current sheet. Fig. 18b presents the evolution of the kinetic energy of a particle, which shows enhanced gains in energy during the interactions with specific CoSs downstream. Similar evidence for magnetic reconnection is found in other current sheets in the transition and the downstream region [152]. This fragmentation of the initial large scale shock front is reminiscent of the large scale CS fragmentation discussed in Sec. III. Similar results are reported by Caprioli and Spitkovsky [155] by using 3D hybrid simulations.
Kropotina et al. [145] analysed the interaction of CoSs (in their case they focused on rotational discontinuities) carried by the turbulent solar wind with the Earth's bow shock. They used _in situ_ multispacecraft observations and performed hybrid kinetic simulations. In their article, they stress the amplification of the CoSs in the vicinity of the shock discontinuity. The amplification of the CoSs carried by the solar wind may be as high as two orders of magnitude. The amplification in the foreshock may be due to streaming instabilities, driven by the interaction of the solar wind with reflected solar wind particles. The Earth's bow shock crossing affects the CoSs, since downstream of the shock the magnetic field and the plasma density are amplified (actually, the observed compression ratio far exceeds the Hugoniot prediction; see also the recent study by Trotta et al. [156]). Guo et al. [157], using a 3D global hybrid simulation, showed that the upstream turbulence at a quasi-parallel shock may intensify the presence of CoSs and the rate of reconnection in the magnetosheath downstream of the Earth's bow shock (similar results are reported in several recent articles [129, 142, 144, 146, 147]).
Unfortunately, the multi-scale character and the complexity of the microphysics [140] present in an evolving large scale shock cannot be explored by current numerical simulations, and neither can they follow a shock's evolution for a long time. This is the main reason why we are still missing many important details on the evolution of CoSs and their role in the heating and acceleration of particles.
The topic of the interaction of large scale shocks with plasma turbulence and the formation of CoSs upstream and downstream as well as the non-stationary evolution of a 3D shock discontinuity [148] is currently an open problem in space and astrophysical plasmas. Moreover, the collective interaction of particles with the rich variety of CoSs (and not only the reconnecting CSs) and its effect on the heating and acceleration of particles also remain an open problem.
Figure 17: The five THEMIS satellites moving along their orbits. The 2-D data from the Omidi simulation [141] are faded in, and a zoom is made into a view of the satellites in the turbulent region near the bow shock. The 'cavitons' of violet and white color illustrate a broad range of CoSs in this turbulent foreshock region. Reproduced with permission from Omidi, AIP Conference Series, **932**, 181 (2007), Copyright 2007 AIP.
Figure 18: (a) Supercomputer simulation of a strong collisionless shock, revealing the presence of CoSs. (b) An electron's kinetic energy (\(\gamma-1\)) as a function of time (normalized to the upstream electron gyro frequency). The times from (A) to (C), for which the particle's orbit is marked, indicate its interaction with specific CoSs downstream (see details in Matsumoto et al. [152]). Reproduced with permission from Matsumoto et al., Science, **347**, 974 (2015), Copyright 2015 AAAS.
## V Formation of coherent structures in the solar atmosphere
In the solar atmosphere, CoSs are formed through the strong magnetic coupling with the turbulence in the convection zone. We can split our presentation of the formation of CoSs in the solar atmosphere into two parts: (1) the formation of CoSs by the random shuffling of the footpoints of slowly changing magnetic fields, and (2) the formation of CoSs during magnetic flux emergence and/or large scale magnetic eruptions.
#### Formation of CoSs in quasi-static closed magnetic topologies
In the solar atmosphere, the formation of CoSs (in the cited literature the emphasis is on the formation and disruption of current sheets) through small scale random shuffling of the footpoints of emerged magnetic fields, caused by the convection zone, was first proposed by Gold [158] in the 60s as the main mechanism for coronal heating. The initial conjecture was developed further in the 70s and the 80s, most notably by Parker [159], who proposed that the random braiding of coronal field lines by photospheric footpoint motions forms randomly distributed small scale CSs, whose dissipation ("nanoflares") heats the corona. Galsgaard and Nordlund [40, 41] explored this scenario with a 3-D MHD experiment representing a coronal loop, driven by random shearing motions at its
two boundaries. The spontaneous formation of CSs at random places and at random times inside the structure is shown in Fig. 21. CSs of all scales appear and evolve, since the large scale CSs fragment, as we have shown in Sec. III.
Reconnection of the current sheet(s) will straighten the field lines, but will also cause disturbances in the surrounding plasma, causing further fragmentation of the energy release processes. A strongly turbulent environment is established inside the magnetic flux tube, driven by the random shear caused by photospheric motions and by the energy delivered into the solar corona through the dissipation of fragmented current sheets (see similar results in [107, 112]). The evolution of the CoSs depends on the velocities of the boundary motions and the initial magnetic field strength in the magnetic flux tube.
Turkmani et al. [176, 177] analyzed the statistical properties of the electric fields inside the flux tube studied by Galsgaard and Nordlund [40, 41]. The magnetic and electric fields are obtained from a 3-D MHD experiment that represents a coronal loop with photospheric regions at both footpoints. Photospheric footpoint motion leads to the formation of a hierarchy of current sheets, see Fig. 22(a). The distribution function of the resistive electric field has a power law tail for the super-Dreicer electric fields, with slope \(-2.8\) (see Fig. 22(b)), and this finding is analogous to the results reported in other studies related to the fragmentation of a large scale CS [80, 107].
In a series of articles, Rappazzo et al. [170, 178, 179, 180], following the steps of the work of Galsgaard and Nordlund [40], analyzed several aspects of the formation of CSs in the solar corona (see Fig. 23). The observational expectations of intermittent energy dissipation in the formed and fragmented CSs were also analyzed in depth [172, 173].
The numerical simulations presented so far were performed, for the sake of simplicity, in isolated magnetic flux tubes.
Threlfall et al. [181] analyzed the evolution of multi-threaded coronal magnetic flux tubes (see Fig. 24), which become unstable when the driving velocity of the individual threads is varied. When one of the threads is destabilized, it very quickly drives the system of threads into a strongly turbulent environment, forming many CoSs of different scales, and it reorganizes the multi-threaded topology into a complex magnetic topology with several multi-scale, intermittently appearing and disappearing current filaments (Fig. 25) (see details in Threlfall et al. [181] and references therein).
Using a different initial setup, Archontis and Hansteen [182] also reported on the formation of CoSs generated by patchy magnetic reconnection between interacting magnetic loops. A three-dimensional magnetohydrodynamic numerical experiment was performed, where a uniform magnetic flux sheet was injected into a fully
Figure 23: Isosurfaces of the squared current \(j^{2}\) at a time after the start of the kink instability. Reproduced with permission from Rappazzo et al., The Astrophysical Journal, **771**, 76 (2013), Copyright 2013 AAS.
Figure 21: (Top) Isosurfaces of Joule dissipation regions inside a solar magnetic loop driven by random shuffling of the footpoints. (Bottom) Isosurfaces of strong electric currents in a snapshot. Reproduced with permission from Galsgaard and Nordlund, Journal of Geophysical Research, **101**, 13445 (1996), Copyright 1996 Wiley.
Figure 22: (a) Snapshot of the resistive electric field configuration within the coronal volume, as calculated from the global MHD model. The blue and red regions represent electric field regions that point towards the left and right footpoint, respectively. (b) The distribution function of the resistive electric field. Reproduced with permission from Turkmani et al., Astronomy and Astrophysics, **449**, 749 (2006), Copyright 2006 ESO.
developed convective layer. The gradual emergence of the field into the solar atmosphere results in a network of magnetic loops, which interact dynamically, forming current layers at their interfaces. They find that these CoSs are short-lived (from 30 s to minutes), giving rise to bursts of energy in the range \(10^{25}-10^{27}\) erg, which basically is the microflare range. The CoSs' persistent formation, interaction, and evolution lead to recurrent emission of fast EUV/X-ray jets and to considerable plasma heating in the active corona.
The nonlinear coupling and evolution of the emerged magnetic fields, driven by photospheric and subphotospheric motions, can be explored with one more tool: nonlinear extrapolation of the force-free magnetic field gives an approximate snapshot of the turbulent state in a coronal active region.
Kanella and Gudiksen [173, 174, 183] used a 3D MHD numerical code to simulate the region from the solar convection zone up to the solar corona. The code includes different processes occurring in the convection zone, photosphere, chromosphere, transition region and corona. The simulated volume starts 2.5 Mm below the photosphere and extends 14.3 Mm above the photosphere into the corona. They used periodic boundary conditions in the horizontal \(x\)-\(y\) plane; in the vertical \(z\)-direction, the upper boundary is open, while the lower boundary is also open, but remains in hydrostatic equilibrium, enabling convective flows to enter and leave the system. They incorporated two strongly magnetic regions of opposite polarity, which are connected through a magnetic structure with a loop-like shape. The magnetic field is initially set vertically at the bottom boundary and extrapolated to the whole atmosphere, assuming a potential field, while a horizontal field of 100 Gauss is continuously fed in at the lower boundary, producing random bipolar structures in the photosphere. Identifying locations with 3D current sheets, as we have already discussed in Sec. II, turns out not to be so simple. The CSs in 3D are generally not 2D flat structures, as the cartoon-like pictures of 2D reconnection would suggest, but they are much more complex. Often, the background current level is higher in places with many current sheets, so it is not easy to separate one current sheet from a cluster of CSs. This is in some ways similar to the problems experienced by observers, where the background is causing large obstacles for the interpretation. The method Kanella and Gudiksen [173] applied to identify CSs uses ImageJ, a tool employed in medical imaging and bio-informatics to perform multi-dimensional image analysis. Fig. 26(a) shows the 4136 identified CSs in the coronal part of the simulation volume, for a certain time-instant during the simulation. In Fig. 26(b), the differential size distribution of the identified CSs' released energy rate is plotted in logarithmic scales, which obeys a power-law scaling with an index \(-1.5\).
Einaudi et al. [43] discussed the differences between "nanoflares", introduced by Parker, and "elementary events," defined in their article as small-scale, spatially and temporally isolated heating events, resulting from the continuous formation and dissipation of field-aligned current sheets within a coronal loop. They presented numerical simulations of the compressible 3D MHD equations. They used two clustering algorithms to investigate the properties of the simulated elementary events: an IDL implementation of a density-based spatial clustering technique for applications with noise, and their own physical distance clustering algorithm. They identified and tracked elementary heating events in time, and for every event they characterized properties such as density, temperature, volume, aspect ratio, length, thickness, duration, and energy. The energies of the events are in the range of \(10^{18}-10^{21}\) erg, with durations shorter than 100 s. A few events last up to 200 s and release energies up to \(10^{23}\) erg. While high temperatures are typically located at the flux tube apex, the currents extend all the way to the footpoints. Hence, a single elementary event
Figure 24: Seven threads with identical drivers: 3D magnetic configuration at different times. When one of the threads is destabilised, it interacts with the rest of the threads and destabilizes all of them, causing the formation of several CoSs. Reproduced with permission from Threlfall et al., Solar Physics, **296**, 120 (2021), Copyright 2021 Springer.
Figure 25: Contours of the toroidal currents in a cut above the polarity inversion line, for different times (see details in [181]). Reproduced with permission from Threlfall et al., Solar Physics, **296**, 120 (2021), Copyright 2021 Springer.
cannot be detected at present. The observed emission is due to the superposition of many elementary events, distributed randomly in space and time within a loop.
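A density-based clustering of heating events of the kind mentioned above can be sketched with an off-the-shelf implementation; the cited study used an IDL code, whereas the sketch below uses scikit-learn's DBSCAN, and the point cloud, `eps`, and `min_samples` are hypothetical choices, not the parameters of that study.

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Hypothetical coordinates of above-threshold (strongly heated) grid points
rng = np.random.default_rng(6)
event1 = rng.normal(loc=(10, 10, 30), scale=0.8, size=(300, 3))
event2 = rng.normal(loc=(25, 40, 12), scale=0.5, size=(150, 3))
noise = rng.uniform(0, 50, size=(60, 3))     # scattered background points
points = np.vstack([event1, event2, noise])

# eps: assumed maximum neighbour distance (grid units); min_samples: density cut
labels = DBSCAN(eps=2.0, min_samples=10).fit_predict(points)
n_events = labels.max() + 1                  # label -1 marks noise points
print(f"{n_events} elementary events identified, "
      f"{np.sum(labels == -1)} points flagged as noise")
```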
The formation of CSs and other CoSs in the solar atmosphere, driven by the turbulent convection zone, depends on the magnetic topology above the photosphere (e.g. a single magnetic flux tube, multiple magnetic flux tubes, etc.), and on the details of the convection zone driver (small scale stochastic motion, turbulent motions, emerging magnetic flux, etc.). All articles listed above followed the initial conjecture of Parker [159] and focused on the formation of randomly distributed small scale CSs inside a volume, which subsequently will provide Ohmic heating in the corona. It was assumed, without any proof, that the current sheets formed and the energy released are very small ("nanoflares" or "elementary events", with energy in the range \(\approx 10^{20}-10^{21}\,\mathrm{erg}\)). In our opinion, this assumption was not supported by the MHD simulations, since the flux tubes, or the multiple flux tubes, or the complex magnetic topologies are filled with CoSs at all scales. The observed explosive phenomena and flares in closed magnetic topologies are also present in the strongly turbulent solar corona and follow the reported statistical properties [164] when the turbulent corona is driven by turbulent photospheric motions. We must emphasize here that the magnetic coupling of the convection zone with the solar atmosphere and the formation of CoSs is a multi-scale process, ranging from \(10^{11}\,\mathrm{cm}\) down to a few cm.
#### Formation of CoSs during explosive events associated with large scale reorganizations of the magnetic field in the solar atmosphere
A 3D study of solar explosions associated with CMEs is also a multiscale problem. Cheung et al. [184] used a 3D MHD code to capture the large scale of such explosions. Their simulation domain captured the top 7500 km of the solar convection zone, and the first 41600 km of the overlying solar atmosphere. The initial setup was inspired by the observed evolution of a specific Active Region (AR), but was not intended to model a specific flare of the specific AR. The initial setup consisted of a bipolar sunspot pair, each spot with a magnetic flux of \(3.4\times 10^{21}\,\mathrm{Mx}\). A strongly twisted magnetic bipole with \(10^{21}\,\mathrm{Mx}\) flux emerged in proximity to one of the pre-existing sunspots. The emergence of the parasitic bipole leads to the creation of a twisted coronal flux rope well before the flare onset. The code of Cheung et al. [184] is ideal to capture the large scale evolution, but it misses all the physical processes related to the formation of CoSs and their interaction with particles. Therefore, two more levels of scales need to be included in such an analysis. A hybrid code can capture the formation of CoSs (meso-scales), and a kinetic code can explore the details of the collective dissipation of CoSs and the heating and acceleration of particles.
Inoue et al. [185, 132] used a 3D MHD code to analyze a specific flare/eruption of a specific AR. Their starting point was the nonlinear force free field (NLFFF) extrapolation [131] of the magnetogram. In Fig. 27, the large scale evolution of the eruption is shown. The striking part in this simulation is the presence of thousands of magnetic flux ropes (MFRs), which are twisted and evolve, possibly interacting with each other. Based on the analysis presented in the previous subsection, an individual MFR [40], as well as a collection of MFRs [181], can create a dense environment of CoSs as the MFRs evolve and interact. As shown in Fig. 27, Inoue et al. focused their analysis on the formation of the large scale CS in the middle of the huge structure. The fragmentation of this structure and the formation of smaller scale structures, as we reported earlier, was beyond the resolution of their simulation (see also He et al. [186]).
The evolution of a large collection of MFRs was studied recently with the use of 3D MHD codes by several authors (see [186, 187] and references therein). Jiang et al. [187] used a fully 3D MHD simulation with high accuracy to follow solar eruptions. They initiated the simulation with a bipolar configuration, with no additional special magnetic topology. The bipolar configuration was driven unstable by a photospheric shearing motion. Once the large-scale CoSs and CSs are formed and reconnection starts, the whole arcade expands explosively, forming fast expanding twisted flux ropes with very strong turbulence in the volume underneath (see Fig. 28). The simplicity and efficiency of their scenario highlights the importance of the magnetic topology driven unstable and the role of
Figure 26: (a) 3D magnetic topology in the solar corona, with 4136 identified CSs; each color represents a different CS. (b) Differential size distribution of the identified CSs' energy rate in logarithmic scale (diamonds). A power law fit (dashed) yields the index \(\sim-1.50\pm 0.02\) (see Kanella and Gudiksen [173] for details). Reproduced with permission from Kanella and Gudiksen, Astronomy and Astrophysics, **603**, 83 (2017), Copyright 2017 ESO.
the driver. After all, when using a realistic magnetic topology and a turbulent photospheric driver or emerging magnetic field, large scale CoSs very quickly fragment (see more on this in Sec. III) and drive strong turbulence.
Several authors [178, 188] analyzed the stressing of an isolated flux rope (FR) (which is part of the highly stressed large scale magnetic topologies shown in Figs. 27 and 28) at its two ends by large scale, localized photospheric vortical motions, which twist the coronal field lines; the resulting current fragmentation again reaches the state of strong turbulence, as discussed earlier.
As we already mentioned in Sec. III, magnetic flux emergence and the subsequent eruptions, with the formation of jets, were studied in many articles [117, 118, 119, 120, 121, 122, 123, 124, 125, 126]. Most of the numerical studies stopped their analysis at the formation of a large scale current sheet through the interaction of the emerging flux tube with the ambient magnetic field of the solar atmosphere.
We can then conclude that, during large scale magnetic explosions, the fragmentation of the formed large scale CSs, and the formation of CoSs through the stresses exerted by the eruptions inside MFRs reported above, are extremely important for the analysis of the heating and acceleration of the coronal plasma during eruptions and large scale reorganizations of the magnetic field of an active region. The fact that solar flares and the associated CMEs have so far been modeled as 2D structures with a monolithic large scale CS as the main source of the released energy (called by many researchers the **standard flare**) has misled the analysis of the observed data resulting from the heating and acceleration of particles during solar explosions.
## VI Formation and evolution of coherent structures and self-organized criticality
We have repeatedly stated in this review that many space and astrophysical systems have a turbulent driver as a source of their strong turbulence, e.g. the convection zone acts as a driver for the magnetic field extending from the convection zone into the solar atmosphere and the solar wind, solar wind turbulence acts as the driver in the vicinity of the Earth's bow shock, the magnetosheath, and the magnetotail, etc. In the previous section, we focused our analysis on the magnetic coupling of the convection zone with the solar atmosphere (see Fig. 29), utilizing mainly 3D MHD codes to concentrate on the evolution of the driven system at the large scales [189].
In Fig. 29, we present a scenario to reconstruct the formation of CoSs in an active region, using a simple cartoon. The starting point is a 3D large scale magnetic
Figure 27: Temporal evolution of the formation and dynamics of the eruptive magnetic flux ropes (MFRs). Panels (a) and (b) show the field lines from different viewing angles. E and S stand for east and south. (c) Temporal evolution of \(|\mathbf{J}|/|\mathbf{B}|\), plotted in the \(x\)-\(z\) plane. Reproduced with permission from Inoue et al., The Astrophysical Journal, **867**, 83 (2018), Copyright 2018 AAS.
Figure 28: Evolution of magnetic field lines and the large scale CS in 3D during the eruption. (A) The magnetic field lines are shown by thick colored lines, where the colors are used for a better visualization of the different lines. Note that the MFR is weakly twisted in its core but highly twisted in its envelope. The bottom surface is shown with the distribution of the magnetic flux. The vertical, transparent slice shows the distribution of the current density normalized by the magnetic field strength, i.e. \(J/B\). (B) The CS in the 3D configuration is shown as an iso-surface for \(J/B=0.5\,\mathrm{Mm}^{-1}\). Reproduced with permission from Jiang et al., Physics of Fluids, **33**, 055133 (2021), Copyright 2021 AIP.
The more intense magnetic structures form active regions (ARs) (see Fig. 29a). As discussed extensively in Sec. V, the formation of CoSs inside an AR is shown in Fig. 29b. Only a small number of the CoSs form CSs inside an AR, and of these, again only a small fraction will reconnect (see Fig. 29c, d). This is a multi-scale system, extending from 100 - 1000 Mm in the initial box (Fig. 29(a)) down to a few cm at the scale of the reconnecting CSs (Fig. 29d).
The existing numerical tools cannot handle the multi-scale coupling of the convection zone with the solar atmosphere, so we are searching for other tools to explore the formation of CoSs in the solar atmosphere.
In this section, we will concentrate on numerical tools that are used extensively in the analysis of complex systems (e.g. systems far away from equilibrium, like strong turbulence, or systems comprised of a large number of nonlinearly interacting sub-systems, like a collection of CoSs in a turbulent plasma). The most popular numerical tool used to analyze complex systems is the Cellular Automaton (CA) model [190]. The set-up of a CA depends strongly on a qualitative analysis of the physical system under study, which guides the definition of the rules of the CA. The success of a CA model is assessed by direct comparison with data and with results from MHD simulations. Complex systems, like turbulent plasmas, can thus be explored by CA models (on the global astrophysical scales), and by MHD or kinetic simulations on the intermediate and local scales. Also, MHD and kinetic simulations can serve as tools for defining the rules of a CA.
The existence of power laws in the frequency distributions of the explosive solar activity (see Fig. 20) may suggest that explosions are a self-organization phenomenon in ARs. Lu and Hamilton [191] (LH91) were the first to realize that ARs may be in a self-organized critical state, and they proposed that explosions are ultimately caused by small magnetic perturbations (\(\delta B\)) (**loading**), which gradually force a CS to reconnect when a **local critical threshold** is passed. The local fragmentation of the reconnecting CSs (see section III) causes a re-organization of the unstable magnetic topology, which may cause **avalanches** of CSs at all scales to reconnect and to release energy (nano-flares, micro-flares, flares); the basic ideas of SOC were initially proposed by Bak et al. [192], thirty-five years ago. The LH91 model opened the way for a series of similar models developed during the last twenty-five years (see the reviews by [193, 194, 195, 196]).
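To make the loading/avalanche cycle concrete, the following minimal sketch implements LH91-style rules on a small 3D grid: a scalar field is perturbed at random sites, and whenever the stress relative to the neighbourhood average exceeds a critical threshold, the excess is redistributed to the six nearest neighbours until the grid is stable again. The grid size, threshold, loading amplitude, and the 6/7–1/7 redistribution weights are illustrative assumptions in this sketch, not the exact parameters of [191].

```python
import numpy as np

rng = np.random.default_rng(0)
N, B_c, n_steps = 12, 7.0, 20000          # grid size, threshold, loading steps (illustrative)
B = np.zeros((N, N, N))                   # scalar grid field, as in the sand-pile analogy
avalanche_sizes = []

def stress(B):
    """Stress dB_i = B_i - <B_neighbours>, kept zero on the open boundaries."""
    nn = sum(np.roll(B, s, axis) for axis in range(3) for s in (1, -1)) / 6.0
    dB = B - nn
    dB[0], dB[-1] = 0.0, 0.0
    dB[:, 0], dB[:, -1] = 0.0, 0.0
    dB[:, :, 0], dB[:, :, -1] = 0.0, 0.0
    return dB

for _ in range(n_steps):
    i, j, k = rng.integers(1, N - 1, size=3)
    B[i, j, k] += rng.uniform(0.0, 1.0)   # loading: small random perturbation delta B
    size, dB = 0, stress(B)
    while np.any(np.abs(dB) > B_c):       # local critical threshold exceeded
        for i, j, k in zip(*np.where(np.abs(dB) > B_c)):
            s = dB[i, j, k]
            B[i, j, k] -= 6.0 / 7.0 * s   # unstable site relaxes ...
            for di, dj, dk in [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
                               (0, -1, 0), (0, 0, 1), (0, 0, -1)]:
                B[i + di, j + dj, k + dk] += s / 7.0   # ... neighbours pick up the stress
            size += 1
        dB = stress(B)                    # instabilities may propagate: an avalanche
    if size:
        avalanche_sizes.append(size)

# in the SOC state, the avalanche-size distribution approaches a power law
counts, _ = np.histogram(avalanche_sizes, bins=np.logspace(0, 3, 16))
print(counts)
```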
There are many ways to develop CA models to represent SOC [194]; one of them bases its rules on the MHD equations [195, 196]. The proposed set-up can be superimposed onto each classical solar flare CA model, making the latter interpretable in an MHD-consistent way (_classical_ CA models here means the LH91 model [191] and its modifications, which are based on the sand-pile analogy [197, 198, 199]). The set-up thus specifies the physical interpretation of the grid-variables and allows the derivation of quantities such as currents. It does not interfere with the dynamics of the CA (unless desired): loading, redistributing (bursting), and the appearance of avalanches and Self-Organized Criticality (SOC), if the latter are implied by the evolution rules, remain unchanged. The result is therefore still a CA model, with all the advantages of CAs, namely that they are fast, that they model large spatial regions (and large events), and that they therefore yield good statistics. Since the set-up introduces all the relevant physical variables into the context of a CA model, it automatically leads to a better physical understanding of the CA models. It reveals which relevant plasma processes are actually implemented, and in what form, and what global flare scenario the CA models imply. All this is otherwise more or less hidden in the abstract evolution rules. It also opens the possibility to change the CA models (the rules) under the guidance of MHD, should this become desirable. Not least, the set-up opens a way for further comparison of the CA models to observations.
The specifications the set-up meets are: The vector \(\mathbf{A}_{ijk}\) at the grid sites \(\mathbf{x}_{ijk}\) denotes the local vector-field, \(\mathbf{A}(\mathbf{x}_{ijk})\). Note that this was not specified in the classical CA models. Lu et al. [200] for instance discussed this point: it might also have been thought of as a mean local field, i.e. the average over an elementary cell in the grid.
Guided by the idea that one wants to assure \(\mathbf{\nabla}\cdot\mathbf{B}=0\) for the magnetic field \(\mathbf{B}\), which is most easily achieved by having the vector-potential \(\mathbf{A}\) as the primary variable and letting \(\mathbf{B}\) be the corresponding derivative of \(\mathbf{A}\) (\(\mathbf{B}=\mathbf{\nabla}\times\mathbf{A}\)), it is furthermore assumed that the grid variable \(\mathbf{A}\) of the CA model is identical with the vector-potential.
The remaining and actually most basic problem then is to find an adequate way to calculate derivatives in the grid. In general, CA models assume that the grid-spacing is finite, which also holds for the CA model of [191] (as shown in detail by [201]), so that the most straightforward way of replacing differential expressions with difference expressions is not adequate.
Figure 29: (a) Force-free magnetic field lines, extrapolated from the convection zone into the corona. (b) CoSs in a sub-volume of a coronal active region. (c) Same as (b), but zoomed. (d) Spatial distribution of the reconnecting CSs inside a sub-volume of the complex active region. Reproduced with permission from Vlahos et al., The Astrophysical Journal, **608**, 540 (2004), Copyright 2004 AAS
Consequently, one has to find a way of continuing the vector-field into the space in-between the grid-sites, which allows derivatives to be calculated. For this purpose, Isliker, Anastasiadis, and Vlahos [195, 196] proposed to use spline interpolation, where the 3D interpolation is performed as three successive 1D interpolations in the three spatial directions [202]. For the 1D splines, natural boundaries are assumed (the second derivatives are zero at the boundaries).
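As an illustration of the interpolation step, the sketch below builds a 1D natural cubic spline along one grid direction and evaluates its first derivative between the grid sites, i.e. the mechanics of one of the three successive 1D interpolations; the grid values used here are arbitrary toy numbers, not data from the CA.

```python
import numpy as np
from scipy.interpolate import CubicSpline

x = np.arange(8, dtype=float)                       # grid sites along one axis
A_comp = np.sin(0.7 * x)                            # one field component on those sites (toy values)
spline = CubicSpline(x, A_comp, bc_type="natural")  # natural: zero 2nd derivative at both ends

x_between = np.linspace(0.0, 7.0, 71)
dA_dx = spline(x_between, 1)                        # first derivative anywhere between the sites
print(dA_dx[:5])
```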
With the help of this interpolation, the magnetic field \(\mathbf{B}\) and the current \(\mathbf{J}\) are calculated as derivatives of \(\mathbf{A}\), according to the MHD prescription:
\[\mathbf{B}=\nabla\times\mathbf{A}, \tag{8}\]
\[\mathbf{J}=\frac{c}{4\pi}\,\nabla\times\mathbf{B}. \tag{9}\]
According to MHD, the electric field is given by Ohm's law, \(\mathbf{E}=\eta\mathbf{J}-\frac{1}{c}\mathbf{v}\times\mathbf{B}\), with \(\eta\) the diffusivity and \(\mathbf{v}\) the fluid velocity. Since the classical CA models use no velocity-field, the set-up can yield only the resistive part,
\[\mathbf{E}=\eta\mathbf{J}. \tag{10}\]
In applications such as to solar explosions, where the interest is in current dissipation events, i.e. in events where \(\eta\) and \(\mathbf{J}\) are strongly increased, Eq. (10) can be expected to be a good approximation to the electric field. Theoretically, the convective term in Ohm's law would in general yield just a low-intensity, background electric field.
Eq. (10) needs to be supplemented with a specification of the diffusivity \(\eta\): Isliker _et al._[201] have shown that in the classical CA models the diffusivity adopts the values \(\eta=1\) at the unstable (bursting) sites, and \(\eta=0\) everywhere else. This specifies Eq. (10) completely. The set-up of Isliker et al. [195, 196] for classical solar flare CA models yields, among other things, consistency with Maxwell's equations (e.g. a divergence-free magnetic field), and the availability of secondary variables such as currents and electric fields in accordance with MHD. The main aim of Isliker, Anastasiadis, and Vlahos [195, 196] with the introduced set-up was to demonstrate that it truly extends the classical CA models and makes them richer, in the sense that they contain much more physical information. The main features they revealed about the classical CA models, extended with their set-up, are:
**1. Large-scale organization of the vector-potential and the magnetic field:** The field topology in the SOC state is bound to characteristic large-scale structures which span the whole grid, most pronounced for the primary grid variable, the vector-potential, but also present for the magnetic field. Bursts and flares are just slight disturbances propagating over the large-scale structures, which are always maintained, even in the largest events.
**2. Increased current at unstable grid-sites:** Unstable sites are characterized by an enhanced current, which is reduced after a burst has taken place, as a result of which the current at a grid-site in the neighbourhood may be increased.
**3. Availability of the electric field:** The electric field is calculated with the resistive part of Ohm's law, which can be expected to be a good approximation in applications where the interest is in current-dissipation events, e.g. in the case of solar flares.
**4. Energy release in terms of Ohmic dissipation:** Isliker, Anastasiadis, and Vlahos [195, 196] also replaced the somewhat _ad hoc_ formula used in the classical CA models to estimate the energy released in a burst with the expression for Ohmic dissipation in terms of the current. The distributions yielded in this way are very similar to the ones based on the ad hoc formula, so that the results of the CA models remain basically unchanged.
**5. CA as models for current dissipation:** As a consequence of points 2 and 4 in this list, and of the fact that there is an approximate linear relation between the current and the stress measure of the classical CAs, one can conclude that the _extended_ CA models can be considered as models for energy release through current dissipation.
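To make the set-up described above concrete, a minimal numerical sketch follows: it derives \(\mathbf{B}\), \(\mathbf{J}\), \(\mathbf{E}\), and the Ohmic dissipation from a grid variable \(\mathbf{A}\), as in Eqs. (8)-(10) and point 4 above. Numpy central differences stand in here for the 3D spline interpolation, and the random initialization of \(\mathbf{A}\) as well as the quantile-based marking of the bursting sites are illustrative assumptions (in the actual CA, instability is decided by the evolution rules).

```python
import numpy as np

def curl(F, dx=1.0):
    """Curl of a vector field F[c, x, y, z] on a cubic grid; numpy central
    differences stand in for the 3D spline interpolation of the set-up."""
    dFz_dy = np.gradient(F[2], dx, axis=1)
    dFy_dz = np.gradient(F[1], dx, axis=2)
    dFx_dz = np.gradient(F[0], dx, axis=2)
    dFz_dx = np.gradient(F[2], dx, axis=0)
    dFy_dx = np.gradient(F[1], dx, axis=0)
    dFx_dy = np.gradient(F[0], dx, axis=1)
    return np.stack([dFz_dy - dFy_dz, dFx_dz - dFz_dx, dFy_dx - dFx_dy])

rng = np.random.default_rng(1)
N, c = 16, 1.0                           # grid size and speed of light in code units (assumed)
A = rng.normal(size=(3, N, N, N))        # primary grid variable: the vector potential

B = curl(A)                              # Eq. (8); div B = 0 up to discretization error
J = c / (4.0 * np.pi) * curl(B)          # Eq. (9)

J_mag = np.linalg.norm(J, axis=0)
# stand-in instability criterion: the top 1% of |J| marks the bursting sites
eta = np.where(J_mag > np.quantile(J_mag, 0.99), 1.0, 0.0)
E = eta * J                              # Eq. (10): resistive Ohm's law, E = eta J
ohmic = eta * J_mag**2                   # energy release as Ohmic dissipation
print("bursting fraction:", eta.mean(), " total dissipation:", ohmic.sum())
```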
It is rather interesting to compare the 3D isosurfaces shown in Fig. 30 for the electric field generated by the CA model of Isliker, Anastasiadis, and Vlahos [195, 196] with the 3D MHD simulations reported in Galsgaard and Nordlund [40] and shown here in Fig. 21. It is remarkable to note the appearance of non-steady current surfaces in the MHD model, as is the case in the CA model in Fig. 30.
Fragos et al. [203] used "magnetograms" generated with the percolation method and applied a linear extrapolation to search for the statistical properties of the reconstructed coronal AR. Moraitis _et al._[204] did the same, using observational magnetograms, and they noticed that the re-organization of the magnetic fields is a potential way to identify reconnecting CSs in the coronal part of an AR. Dimitropoulou et al. [205, 206] used a series of observed magnetograms to drive the SOC model proposed by Isliker et al. [195, 196].
Figure 30: Three dimensional isosurfaces of the electric current density, as yielded by the CA model of Isliker, Anastasiadis, and Vlahos [195]. Reproduced with permission from Isliker et al., Astronomy and Astrophysics, **363**, 1068 (2000), Copyright 2000 ESO
They obtained robust power laws in the distribution functions of the modeled flaring events, with scaling law indices that agree well with the observations.
Models along these lines were proposed [207, 208] (see also Fig. 31) in the mid 1980s and the beginning of the 1990s, but remained undeveloped until recently, due to the lack of tools for the global analysis of active regions. The nonlinear coupling of the turbulent convection zone with ARs, as well as the consistency of the related results obtained by the SOC theory with those from turbulence simulations, have been studied intensively by several authors [209]. Uritsky et al. [210] examined in depth the question of the relation of SOC with turbulence in the solar corona and agreed with the suggestion made by Dahlburg et al. [211] that reconnecting CSs and their **fragmentation can serve as the driver for avalanches in the SOC scenario**. Uritsky and Davila [210] also suggested, by studying an AR in a quiescent non-flaring period, that (1) non-potential magnetic structures with complex polarity separation lines form inside the active region, and (2) there are statistical signatures of canceling bipolar magnetic structures coinciding with flaring activity in the active region. Each of these effects can give rise to an unstable magnetic configuration, acting as an energy source for coronal dissipation and heating. The development of a parallel use of models based on complexity theory and of well established 3D MHD or kinetic codes is the only way to explore the interplay between global and local scales in turbulent systems.
The tools used in this section focused on reconnecting CSs and cannot capture all types of CoSs in the driven turbulent system present in the solar atmosphere. Keeping track of the CoSs that do not lead to reconnection, but still dissipate energy collectively in a large volume, is an open project for future models that will utilize tools of complexity theory.
## VII Discussion and summary
The formation of CoSs in 3D strongly turbulent magnetized plasma remains to this day an open problem in the analysis of space, astrophysical, and laboratory plasmas. The main obstacle is the multi-scale character of these systems, with the dynamic evolution taking place on all scales. The list of intermittently appearing CoSs inside 3D strongly turbulent magnetized plasmas is long (non-reconnecting and reconnecting CSs, magnetic filaments, large amplitude magnetic disturbances, vortices, shocklets, etc.) and still expanding. CSs and their reconnection are the best studied CoSs, but so far the vast majority of studies have separated the CSs and their evolution from the rest of the CoSs and from the turbulent environment where they were formed. Another shortcoming in the analysis of CSs to date is that their evolution was analyzed with analytical and numerical tools in 2D magnetic topologies. The other well known large scale magnetic discontinuities that have been studied in isolation from the CoSs and the strongly turbulent environment in their vicinity are large scale shocks. Isolating CSs and shocks in their analysis from the 3D magnetic topologies and the strongly turbulent environments in which they are formed and evolve can lead to an erroneous answer to the question "Who really needs turbulence?".
In this review, we have addressed two important questions. (1) How are CoSs formed? We presented three different ways for their formation: (a) the formation of CoSs (with special emphasis on CSs) where 3D strong turbulence is present; common space, astrophysical, and laboratory plasmas where strong turbulence is present are the solar atmosphere, the solar wind, astrophysical jets, edge localised mode turbulence in TOKAMAKS, etc.; (b) the fragmentation of large scale CSs, appearing mostly in explosive phenomena in the solar atmosphere, in the magnetotail under the influence of the solar wind, and in the magnetopause under the influence of the magnetosheath; (c) the interaction of large scale shocks with strong turbulence upstream and downstream of the shock, as appearing in the interaction of the Earth's bow shock with the turbulent solar wind, at the termination shock in the heliosphere, and at supernova remnants. (2) How is strong turbulence excited in astrophysical settings and in laboratory plasmas? We have chosen in this review to address mainly the excitation of strong turbulence by the convection zone, by turbulent flows, and by the fragmentation of large scale CSs formed during coronal explosions and in the magnetotail.
Let us now summarize the main points in this review.
* CoSs are formed in strongly turbulent 3D magnetized plasmas.
* CSs formed inside a 3D strongly turbulent plasma cannot be analyzed as isolated 2D periodic structures.
* Only a small fraction of the CSs formed inside a 3D strongly turbulent plasma reconnect. Therefore, magnetic reconnection dominates the acceleration of the small fraction of high energy particles in the tail of the particles' energy distribution. The collective interaction of the non reconnecting CSs at all scales and with other CoSs (large scale magnetic disturbances, etc.) may play a crucial role in the heating of the ambient plasma and the overall dissipation of energy.
Figure 31: Formation of CSs developed intermittently at random positions. Reproduced with permission from Anastasiadis and Vlahos, The Astrophysical Journal, **428**, 819 (1994), Copyright 1994 AAS.
* The methods developed so far to search in observed data for 3D CoSs in strongly turbulent plasmas are not sufficient to capture their statistical properties, since they have been based mainly on the 2D modeling of the characteristics of CoSs.
* The formation of large scale CSs and their subsequent fragmentation inside a turbulent plasma gives rise to clusters of CoSs in the vicinity of the CSs and contributes to the development of smaller scale activity inside the strongly turbulent system.
* Karimabadi et al. [37] and Groselj et al. [74] emphasize that the motion of CoSs generates waves that are emitted into the ambient plasma in the form of highly oblique compressional and Alfven modes, as well as large amplitude magnetic disturbances. This indicates that strong turbulence will in general consist of CoSs and waves, therefore "weak" and strong turbulence co-exist in the multiscale evolution of a strongly turbulent plasma.
* Large scale magnetic disturbances and CoSs in fully developed turbulence follow a mono-fractal or multi-fractal scaling, both in space and astrophysical plasmas. This strongly affects the interaction of particles with CoSs.
* Large scale shock waves, like CSs, never appear in isolation. They are formed in the presence of turbulent flows upstream and downstream. The presence of CoSs in the vicinity of a shock solves a number of open problems through the amplification of the CoSs near the shock discontinuity.
* Unfortunately, the multi-scale character and the complexity of the micro-physics present in evolving large scale shocks cannot be explored by current numerical simulations, which also cannot follow the evolution for long times. This is the main reason why we are still missing many important details of the evolution of CoSs and of their role in the heating and acceleration of particles.
* The magnetic coupling of the turbulent convection zone with the solar atmosphere has many avenues for forming CoSs and for exciting strong turbulence in the solar corona and the solar wind. We have emphasized two of them in this review: (1) magnetic foot-point shuffling of otherwise stable magnetic flux ropes, and magnetic field emerging into a complex magnetic topology; (2) explosive evolution of large scale magnetic structures (loss of equilibrium, triggered e.g. by emerging magnetic flux).
* The multi-scale character of CoSs inside a strongly turbulent plasma cannot be addressed with the well known numerical tools (MHD codes, hybrid codes, and particle in cell codes), since they capture only a part of the plasma evolution and cannot realize the self-consistent coupling of all scales; moreover, they neglect the fact that most natural systems are open.
* The use of tools borrowed from complexity theory together with the 3D numerical codes mentioned above can be the solution for addressing the 3D nature of CoSs appearing inside a strongly turbulent plasma.
The statistical properties of 3D CoSs inside a strongly turbulent plasma are an interesting and important piece of information when analyzing the transport properties of charged particles in the complex topologies of CoSs. The energy dissipation of CoSs needs more careful analysis, and this problem is currently open and needs a separate review for its exposition.
###### Acknowledgements.
We would like to thank our colleagues Peter Cargill, Tassos Anastasiadis, Manolis Georgoulis, Vasilis Archontis, Rene Kluiving, Marco Onofri, Fabio Lepreti, Tasos Fragos, and Nikos Sioulas, for many interesting and constructive discussions over several years on the topics addressed in this review.
|
2310.15269 | GradSim: Gradient-Based Language Grouping for Effective Multilingual
Training | Most languages of the world pose low-resource challenges to natural language
processing models. With multilingual training, knowledge can be shared among
languages. However, not all languages positively influence each other and it is
an open research question how to select the most suitable set of languages for
multilingual training and avoid negative interference among languages whose
characteristics or data distributions are not compatible. In this paper, we
propose GradSim, a language grouping method based on gradient similarity. Our
experiments on three diverse multilingual benchmark datasets show that it leads
to the largest performance gains compared to other similarity measures and it
is better correlated with cross-lingual model performance. As a result, we set
the new state of the art on AfriSenti, a benchmark dataset for sentiment
analysis on low-resource African languages. In our extensive analysis, we
further reveal that besides linguistic features, the topics of the datasets
play an important role for language grouping and that lower layers of
transformer models encode language-specific features while higher layers
capture task-specific information. | Mingyang Wang, Heike Adel, Lukas Lange, Jannik Strötgen, Hinrich Schütze | 2023-10-23T18:13:37Z | http://arxiv.org/abs/2310.15269v1 | # GradSim: Gradient-Based Language Grouping
###### Abstract
Most languages of the world pose low-resource challenges to natural language processing models. With multilingual training, knowledge can be shared among languages. However, not all languages positively influence each other and it is an open research question how to select the most suitable set of languages for multilingual training and avoid negative interference among languages whose characteristics or data distributions are not compatible. In this paper, we propose GradSim, a language grouping method based on gradient similarity. Our experiments on three diverse multilingual benchmark datasets show that it leads to the largest performance gains compared to other similarity measures and it is better correlated with cross-lingual model performance. As a result, we set the new state of the art on AfriSenti, a benchmark dataset for sentiment analysis on low-resource African languages. In our extensive analysis, we further reveal that besides linguistic features, the topics of the datasets play an important role for language grouping and that lower layers of transformer models encode language-specific features while higher layers capture task-specific information.
## 1 Introduction
Most natural language processing (NLP) research today still focuses on a small number of languages. Extending NLP models to further languages poses different challenges, i.a., little (annotated) data Hedderich et al. (2021). Multilingual training can help in those cases by sharing knowledge across languages. However, adding new languages to the multilingual training set may not necessarily lead to performance gains. In fact, certain languages might actually hurt the performance on downstream tasks in a specific target language Adelani et al. (2022); Snaebjarnarson et al. (2023), for instance, due to unrelatedness to the target language.
As a solution, previous work investigates different measures for language similarity and selects only languages similar to the target language for the multilingual training set i.a. Tan et al. (2019); Lin et al. (2019); Pires et al. (2019); Oncevay et al. (2020); Shaffer (2021); Snaebjarnarson et al. (2023). However, it is an open research question whether language similarity translates into performance gains of multilingual models. For multilingual training, other characteristics might play a role, such as topical shifts of the training data. As a result, it is still unclear how to select the set of languages that leads to the most effective multilingual training setup.
In this paper, we study multilingual fine-tuning of language models with a diverse set of training languages.1 In particular, we show that linguistics-based language similarities are only weakly correlated with cross-lingual transfer performance. Figure 1 illustrates a sample case in which neither language family information (indicated by node colors) nor similarity of language embeddings (indicated by proximity in the vector space) is helpful for finding languages that have a positive cross-lingual transfer score with the target language.
Figure 1: Exemplary transfer learning setup with African languages: The motivation for this work is that neither language family (indicated by node colors) nor typological distance (indicated by distance of languages in the plot) are consistent predictors of good performance when choosing a source language for cross-lingual transfer. Red cross: source language affecting performance negatively. Green tick: source language affecting performance positively.
Thus, prior information about the languages, such as their language families or typological features, alone is not enough for an effective multilingual training. Instead, similarity measures that capture additional information about the data and task beyond linguistics similarity may achieve better performance. Wang et al. (2020), for instance, show that gradient similarity across languages measured along the optimization trajectory correlates with language proximity and cross-lingual performance.
We draw inspiration from this observation. However, instead of projecting conflicting gradients throughout the training process, we propose to leverage the _gradient similarity_ to group languages with a branch-and-bound-like algorithm that _optimizes the overall similarity score_ of all languages. This approach has the following advantages: (i) It can be applied _without any prior knowledge_ of the languages or topics of the given datasets, (ii) it is _well correlated_ with downstream task performance of the multilingual model, (iii) it finds the best language groups from a _global perspective_, i.e., instead of selecting source languages independently of each other (which may create groups of mutually interfering languages), we form each group based on a criterion that evaluates the group as a whole.
In our experiments, we show the superior performance of our grouping method compared to various baseline approaches on three multilingual datasets with different tasks and set the new state of the art on AfriSenti, a sentiment analysis dataset in 12 low-resource African languages.
Furthermore, we extensively analyze our models with a topic analysis, a correlation-based analysis and an ablation study, revealing important insights, for instance that the topic distribution of the training data heavily affects multilingual training and that lower layers of transformer models encode language-specific features while higher layers capture task-specific information. This confirms results from prior work (i.a., Raganato and Tiedemann, 2018; Jawahar et al., 2019; Tenney et al., 2019; Kovaleva et al., 2019) from another (correlation-based) perspective.
The code base for GradSim is available online.2
Footnote 2: [https://github.com/boschresearch/gradsim](https://github.com/boschresearch/gradsim)
## 2 Related Work
**Multilingual and multi-task training.** A growing number of research projects investigate multilingual training to cover a variety of languages, including low-resource languages (Hu et al., 2020; Lange et al., 2020; Hedderich et al., 2021; FitzGerald et al., 2022). In the context of low-resource sentiment analysis, Wang et al. (2023) recently use the cross-lingual transfer score between pairs of languages to select source languages for multilingual training. Our approach differs from these works in that we investigate language interactions from a global optimization perspective.
Considering each language as a separate task, multilingual training can be treated as a multi-task learning (MTL) problem (Ruder, 2017). A line of existing work utilizes gradient-based techniques to improve multi-task learning (Chen et al., 2018; Sener and Koltun, 2018; Yu et al., 2020; Wang et al., 2020). They show that negative cosine similarity between gradients leads to negative interference for MTL optimization, and projecting out the conflicting gradients can improve the optimization dynamics. Our work follows this insightful observation. However, in contrast to their work, we propose to leverage multilingual gradients for language grouping to ensure that gradients are aligned in each language group.
**Language similarity measures.** In order to group languages for multilingual training or transfer learning, related work has proposed different ways to estimate the similarity between languages, e.g., leveraging the language family taxonomy (Tan et al., 2019; Shaffer, 2021; Chronopoulou et al., 2023; Snaebjarnarson et al., 2023) or representing languages as information-rich vectors based on their typological or conceptual features (Littell et al., 2017; Lin et al., 2019; Oncevay et al., 2020; Liu et al., 2023).
Another line of work measures language similarity based on embeddings from multilingual pretrained language models (mPLMs) (Raganato and Tiedemann, 2018; Lange et al., 2021b; Chang et al., 2022; Lin et al., 2023). Tan et al. (2019) and Shaffer (2021), for instance, perform language grouping for multilingual named entity recognition and neural machine translation based on embeddings. In contrast to these studies, we propose to use the gradient cosine similarity between languages as the similarity measure for language grouping.
This model-based similarity measure reflects how each language interacts in the optimization process, with no need of any prior knowledge of the languages.
## 3 Method
In this section, we describe our proposed language grouping approach and the general multilingual training in which we apply its results. Note that the gradient-based similarity estimation is purely model-based, thus, can be applied to other settings, e.g., multi-domain or multi-task problems, as well.
### Step I: Gradient Similarities
Due to the high discrepancy among languages, multilingual optimization often suffers from the conflicting gradient issue (Wang et al., 2020; Xu and Murray, 2022), i.e., gradients of different languages point in different directions. Previous works show that gradient similarity is correlated with model performance (Chen et al., 2018; Sener and Koltun, 2018; Wang et al., 2020; Yu et al., 2020). Inspired by this observation, we propose to use gradient similarity for grouping languages.
Given a set of languages \(L=\{l_{1},l_{2},\ldots,l_{N}\}\), we study the gradient similarities across languages by training a multilingual model jointly on all languages and measure the language gradients \(G=\{g_{1},g_{2},\ldots,g_{N}\}\) along the optimization process. To reduce computational costs, we average language gradients first at the epoch level and calculate the per-epoch gradient cosine similarity between languages. Then we average the gradient similarity over all epochs. Finally, we get a gradient similarity matrix \(S\in\mathbb{R}^{N\times N}\) across \(N\) languages, with \(s_{i,j}=\cos(g_{i},g_{j})=\frac{g_{i}\cdot g_{j}}{|g_{i}||g_{j}|}\).
Since it is very expensive to calculate the gradient similarity based on the gradients w.r.t. all parameters, we choose to only use the gradients of the classification layer of the model. An analysis of gradients of different layers and ablation studies can be found in Sections 5.2 and 5.3.
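A sketch of Step I in PyTorch is given below. It assumes one dataloader per language and a model whose classification head is exposed as `model.classifier` (both hypothetical names), and for readability visits the languages sequentially within each epoch rather than interleaving batches; the gradient bookkeeping, however, follows the description above: per-language classifier gradients are averaged over each epoch, pairwise cosine similarities are computed per epoch, and the similarity matrices are averaged over epochs.

```python
import torch
import torch.nn.functional as F

def gradient_similarity_matrix(model, loaders, loss_fn, optimizer, epochs):
    """Step I sketch: collect classification-layer gradients per language,
    average them per epoch, and average the per-epoch cosine similarities."""
    langs = list(loaders)
    per_epoch_sims = []
    for _ in range(epochs):
        epoch_grads = {}
        for lang in langs:
            total, n_batches = None, 0
            for inputs, labels in loaders[lang]:
                optimizer.zero_grad()
                loss = loss_fn(model(inputs), labels)
                loss.backward()
                # only the classification layer's gradients enter the similarity
                g = torch.cat([p.grad.detach().flatten()
                               for p in model.classifier.parameters()])
                total = g if total is None else total + g
                n_batches += 1
                optimizer.step()  # joint multilingual training continues while we measure
            epoch_grads[lang] = total / n_batches
        G = torch.stack([epoch_grads[l] for l in langs])                   # (N, D)
        per_epoch_sims.append(
            F.cosine_similarity(G.unsqueeze(1), G.unsqueeze(0), dim=-1))   # (N, N)
    return langs, torch.stack(per_epoch_sims).mean(dim=0)                  # S[i, j]
```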
### Step II: Language Grouping
Based on the pairwise similarity matrix \(S\) from Step I, we next determine the best grouping into a pre-defined number of \(K\) groups.
In particular, our goal is to find the \(K\) language groups which (i) cover all languages of the given language set \(L\), and (ii) maximize the overall similarity score of all languages, which is a reduction from the Set-Cover problem. We solve it using the branch-and-bound-like algorithm as in Standley et al. (2020) and Fifty et al. (2021).3 The algorithm evaluates different combinations of \(K\) language groups under the constraint that each language is included in at least one, but potentially multiple groups. We finally select the language grouping that leads to the highest overall similarity score.
Footnote 3: Alternatively, the binary integer program (BIP) solver could be used as in Zamir et al. (2018).
Given \(\Gamma=\{\gamma_{1},\ldots,\gamma_{K}\}\) as a potential grouping result, we define the overall similarity score for \(\Gamma\) as \(\sum_{i=1}^{N}\text{Sim}(l_{i}|\Gamma)\) where \(\text{Sim}(l_{i}|\Gamma)\) is the collective similarity score of language \(l_{i}\) in its language group \(\gamma_{j}\in\Gamma\). The collective similarity score of \(l_{i}\in\gamma_{j}\) is defined as the average of all pair-wise similarities between \(l_{i}\) and the other languages in \(\gamma_{j}\).
### Step III: Training and Inference
Given the language groups \(\Gamma\) from Step II, we train one multilingual model per group \(\gamma_{j}\in\Gamma\), using the training data from the respective languages. For inference, we select the appropriate multilingual model for each target language and apply it to the test data. If a target language \(l_{i}\) appears in more than one group, we select the group with the highest collective similarity score of \(l_{i}\) for inference.
## 4 Experiments
In this section, we describe our experimental settings as well as our results for three tasks.
### Tasks and Datasets
We experiment with the following three datasets covering different languages as well as text classification and sequence tagging tasks. (Dataset statistics are given in Tables 8 and 9 in Appendix A.)
**AfriSenti**(Muhammad et al., 2023, 2023): This shared task dataset provides a challenging testbed for sentiment analysis: Both the languages (12 African languages) and the text genre (Twitter) pose challenges to NLP models. To investigate multilingual training results, we focus on the multilingual subtask of the shared task (Subtask B), and report macro-weighted F1 scores following Muhammad et al. (2023).
**WikiAnn**(Pan et al., 2017): This dataset offers automatically extracted labels for named entity recognition (NER). Following Shaffer (2021), we select 15 languages for our experiments and use micro-averaged F1 as the evaluation metric.
**Universal Dependency (UD) treebank v1.2**(Nivre et al., 2016): We experiment with part-of-speech (POS) tagging using the 17 Universal POS labels. Following prior work Yasunaga et al. (2018); Lange et al. (2021), we use 27 languages from the dataset with 21 high-resource and 6 low-resource languages and report accuracy for evaluation.
### Training Details
For our experiments on AfriSenti, we use the pretrained AfroXLM-R large transformer Alabi et al. (2022), an XLM-R model adapted to African languages, as our base model. To measure language gradients in Step I, we use 25% of the training data in AfriSenti and set the batch size to 8 for computational efficiency. For multilingual training and inference (Step III), we use all training data and a batch size of 32. In both stages, we finetune the model with a learning rate of 1e-5 and set the maximum sequence length to 128. We set the number of language groups to \(K=4\) which equals the number of language families in the dataset. We further provide a comparison of different \(K\) in Section 5.3.
For the NER task, we follow the training setup used in Shaffer (2021) for a fair comparison. Specifically, we use XLM-R as our base model and finetune it for 3 epochs. We set the batch size to 20 with a learning rate of 2e-5 and a max sequence length of 300. Following Shaffer (2021), we set the number of language groups to \(K=4\).
For the POS tagging task, we use the XLM-R model as well and set the training epoch to 20. We use a batch size of 8, a learning rate of 2e-5 and a maximum sequence length of 128. Here, we specify \(K=6\) for language grouping, as 6 language families are covered by the 27 languages we study.
On all three datasets, we use the AdamW optimizer Loshchilov and Hutter (2017). The training was performed on Nvidia A100 GPUs.4 All reported results are averaged over 5 random seeds.
Footnote 4: All experiments ran on a carbon-neutral GPU cluster.
### Baselines
Besides a **monolingual model** (trained only on the target language) and a purely **multilingual model** (trained on _all_ available languages), we consider the following baselines for language grouping that have been presented by prior work:
**Language family.** We group languages based on their language family information and train one multilingual model per language family. Language family-based grouping is also studied by, i.a., Tan et al. (2019); Shaffer (2021); Chronopoulou et al. (2023); Snaebjarnarson et al. (2023).
**Typological similarity.** Languages can be represented by typological features, e.g., the syntax, phonology or inventory features. Using the _lang2vec_ tool and the URIEL knowledge base Littell et al. (2017), we retrieve language vectors and use the pairwise distances among them as the similarity measure of our algorithm. This is similar to Lin et al. (2019) and Oncevay et al. (2020).
**Embedding distance.** Multilingual pretrained language models (mPLMs) also encode language-specific information Raganato and Tiedemann (2018); Chang et al. (2022). Tan et al. (2019) and Shaffer (2021) use mPLM-based language embeddings to determine language similarities for language grouping. Following this idea, we compute sentence embeddings using the pretrained encoder from our base model and average sentence embeddings of the same language.
Figure 2: Overview of our proposed method GradSim for language grouping. (Step I) We train all languages in one multilingual model to measure the gradient similarities across languages. (Step II) We determine the best \(K\) language groups based on the similarity measure from the first step. (Step III) We train one language model on each language group and deploy it for inference.
Then, we use the embedding distance across languages as the similarity measure in Step I (denoted by Embedding distance (PLM)). As an alternative, we also consider embeddings from the language model fine-tuned on the task (denoted by Embedding distance (FT)).
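A sketch of how such language embeddings can be obtained is shown below, using mean pooling over token embeddings and averaging over a language's sentences; the checkpoint name is the presumed AfroXLM-R model and should be treated as an assumption, as should the pooling details, which the description above leaves open.

```python
import torch
from transformers import AutoTokenizer, AutoModel

MODEL_NAME = "Davlan/afro-xlmr-large"   # presumed AfroXLM-R checkpoint (assumption)

def language_embedding(texts, tokenizer, encoder):
    """Mean-pool token embeddings per sentence, then average over the language."""
    sent_embs = []
    with torch.no_grad():
        for text in texts:
            batch = tokenizer(text, return_tensors="pt",
                              truncation=True, max_length=128)
            hidden = encoder(**batch).last_hidden_state     # (1, seq, dim)
            mask = batch["attention_mask"].unsqueeze(-1)    # (1, seq, 1)
            sent_embs.append((hidden * mask).sum(1) / mask.sum(1))
    return torch.cat(sent_embs).mean(dim=0)                 # (dim,)

# usage (hypothetical per-language corpora):
# tok = AutoTokenizer.from_pretrained(MODEL_NAME)
# enc = AutoModel.from_pretrained(MODEL_NAME).eval()
# e_sw = language_embedding(sw_texts, tok, enc)
# e_am = language_embedding(am_texts, tok, enc)
# distance = 1 - torch.nn.functional.cosine_similarity(e_sw, e_am, dim=0)
```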
**Oracle upper bound.** As an upper bound, we group languages based on the post-hoc cross-lingual transfer performance. The cross-lingual transfer performance is often used for source language selection as in Adelani et al. (2022) and Wang et al. (2023). We consider this an oracle upper bound as it is a direct indication of how knowledge learned from one language affects the performance on another language. Note that this approach is computationally expensive as it requires \(N\times N\) transfer experiments for \(N\) languages, while our gradient-based approach only needs a single training run for collecting gradient information.
### Results
**Text classification.** Table 1 shows our experimental results on the AfriSenti dataset (per language and on average). While for a few languages a grouping based on one of our baseline approaches performs best (e.g., embedding distance for _am_ and _pcm_, or typological similarity for _sw_), GradSim performs best or second best for most languages and, as a result, best on average. Its average result comes very close to the oracle upper bound, which, in contrast to our approach, requires prior knowledge about cross-lingual transfer performance.
We also compare GradSim with the state-of-the-art method on AfriSenti Wang et al. (2023), which uses AfroXLM-R with task-adaptive pretraining (TAPT) Gururangan et al. (2020) and performs transfer learning after selecting the best source languages based on their cross-lingual transfer score. For a direct comparison, we also apply TAPT and use GradSim to group languages for multilingual training. As shown in Table 2, GradSim sets the new state of the art on AfriSenti. It is superior to the previous approach of Wang et al. (2023) that only considers the pairwise transfer scores, neglecting possible interactions of different source languages. Instead, GradSim maximizes the overall gradient similarities from a global perspective.
**Sequence tagging.** Table 3 provides our results for multilingual named entity recognition. We report the state-of-the-art results from Shaffer (2021) as baseline results. Our approach GradSim outperforms the prior state of the art on most high-resource languages and all low-resource languages, again leading to the best overall results.
Our results for POS tagging are provided in Table 4. GradSim outperforms multilingual and monolingual training without language grouping as well as language grouping based on other metrics. It performs best on average over the low-resource languages as well as on average over all languages.
Given the results on sequence tagging tasks, we find that low-resource languages benefit more from language grouping. For high-resource languages, additional training sources from other languages have a less prominent impact when enough language training data is available.
\begin{table}
\begin{tabular}{l c|c c c c c c c c c c c} \hline \hline
**Method** & **avg\({}^{*}\)** & **am\({}^{*}\)** & **dz** & **ha** & **ig\({}^{*}\)** & **kr\({}^{*}\)** & **ma\({}^{*}\)** & **pcm** & **pt\({}^{*}\)** & **sw\({}^{*}\)** & **ts** & **twi** & **yo\({}^{*}\)** \\ \hline Oracle upper bound & 71.46 & 69.87 & 69.45 & 80.15 & 78.48 & 70.72 & 52.41 & 68.67 & 71.85 & 61.59 & 55.42 & 64.79 & 75.52 \\ \hline Multilingual & 59.97 & 51.99 & 56.62 & 66.10 & 67.64 & 61.03 & 43.07 & 55.70 & 67.37 & 58.18 & 42.86 & 49.94 & 62.87 \\ Monolingual & 68.29 & 49.00 & 57.16 & **80.36** & **79.55** & 70.72 & 48.73 & 68.24 & 66.12 & 62.50 & 44.66 & 54.56 & **75.09** \\ \hline Language family & 66.19 & 67.84 & 68.54 & 78.88 & 68.16 & 61.10 & 51.10 & 67.32 & 64.88 & 58.38 & 47.04 & 54.60 & 64.36 \\ Typological similarity & 68.93 & 66.10 & **69.27** & 79.38 & 78.00 & 70.09 & 49.17 & 63.81 & 66.12 & **63.23** & 46.79 & 61.25 & 73.95 \\ Embedding dis. (PLM) & 68.81 & **72.09** & 67.20 & 71.57 & 74.49 & 71.89 & 51.49 & **69.09** & 67.32 & 62.50 & **52.01** & 60.88 & **75.09** \\ Embedding dis. (FT) & 69.62 & 59.69 & 67.36 & 79.78 & 78.71 & 69.99 & 50.01 & 68.46 & 66.43 & 62.62 & 49.20 & 64.01 & 75.07 \\ GradSim**(ours)** & **71.34** & 66.11 & 67.90 & 79.97 & **79.55** & **72.12** & **53.68** & 68.40 & **72.30** & 63.05 & 50.36 & 63.46 & **75.09** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Results on AfriSenti, a benchmark dataset for sentiment analysis on low-resource African languages. Bold shows best results, underline highlights second-best results per language and on average. * indicates the settings with statistically significant improvements (_p-value_\(<0.05\)) using GradSim compared to Embedding dis. (FT), the overall second-best system.
\begin{table}
\begin{tabular}{l|c} \hline \hline
**Method** & **Average F1** \\ \hline SOTA Single model & 74.08 \\ GradSim+TAPT Single model (**ours**) & **75.29** \\ \hline SOTA Ensemble & 75.06 \\ GradSim+TAPT Ensemble (**ours**) & **75.34** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Results on AfriSenti in comparison to the state of the art Wang et al. (2023). We apply task-adaptive pretraining (TAPT) and ensemble methods on top of GradSim for a fair comparison to the state of the art.
It highlights the value of multilingual learning with well-suited languages to enhance the performance of low-resource languages, providing a key strategy for advancing future low-resource NLP research.
**Significance tests.** We run permutation-based significance tests following Dror et al. (2018) with a significance level of 0.05 between GradSim and the respective second-best system on all three datasets. In Tables 1, 3 and 4, settings with statistically significant improvements when using GradSim are marked with *. The results show that GradSim is significantly better than the second-best system in 32 out of 37 single language settings where GradSim outperforms the second-best system across three datasets. In addition, its average performance across all languages is significantly better than the other systems on all three datasets.
## 5 Analysis
To analyze the behavior of the model, we perform the following analyses on the AfriSenti dataset: A qualitative analysis of the data in order to better understand differences coming from data peculiarities (Section 5.1), a correlation analysis to explain why some grouping methods work better than others (Section 5.2), and an ablation study to investigate the impact of our design choices (Section 5.3).
### Topic Analysis
Although data analysis is valuable for research progress, it is challenging for foreign languages. Therefore, we choose a semi-automatic approach involving machine translation and manual inspection for better understanding the input data of our models: For each language, we first extract the 50 most relevant keywords via a term frequency-inverse document frequency (TF-IDF) method. Then, we use the _pygoogletranslate_ API5 to translate the keywords into English and remove duplicate words and stop words. Table 6 provides exemplary results for three languages of the AfriSenti dataset.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline & **Multi.** & **Mono.** & **Family** & \begin{tabular}{c} **Embed.** \\ **(FT)** \\ \end{tabular} & \begin{tabular}{c} **GradSim** \\ **(ours)** \\ \end{tabular} \\ \hline \multicolumn{5}{l}{_high-resource_} \\ \hline \(\text{bg}^{*}\) & **99.42** & 99.34 & 99.21 & 99.38 & 99.40 \\ \(\text{cs}^{*}\) & 99.00 & 99.00 & 98.91 & 98.99 & **99.01** \\ \(\text{da}^{*}\) & 98.22 & **98.66** & 98.12 & 98.06 & 98.55 \\ \(\text{de}\) & 94.47 & **94.72** & 94.43 & 94.59 & 94.43 \\ \(\text{en}\) & 97.06 & **97.34** & 96.88 & 97.25 & 97.18 \\ \(\text{es}^{*}\) & **97.35** & 97.29 & 97.18 & 97.23 & 97.21 \\ \(\text{eu}\) & 95.95 & 96.01 & 96.01 & 96.04 & **96.09** \\ \(\text{fa}^{*}\) & 97.30 & 97.31 & 97.20 & 97.35 & **97.41** \\ \(\text{fa}^{*}\) & 97.51 & 97.64 & 97.61 & 97.51 & **97.70** \\ \(\text{fr}^{*}\) & 96.58 & 96.48 & 96.21 & 96.40 & **96.67** \\ \(\text{he}^{*}\) & 97.39 & 97.25 & 97.25 & 97.31 & **97.43** \\ \(\text{hi}\) & 97.56 & 97.62 & 97.49 & **97.64** & 97.55 \\ \(\text{hr}\) & 97.66 & **97.67** & 97.56 & 97.65 & 97.57 \\ \(\text{id}\) & 91.16 & **91.78** & **91.78** & 91.56 & 91.20 \\ \(\text{it}^{*}\) & 98.60 & **98.68** & 98.45 & 98.51 & 98.58 \\ \(\text{nl}^{*}\) & 93.76 & **93.94** & 93.67 & 93.80 & 92.38 \\ \(\text{no}^{*}\) & 98.88 & **99.93** & 98.91 & 98.95 & 99.01 \\ \(\text{pl}^{*}\) & **98.58** & 98.56 & 98.42 & 98.47 & 98.52 \\ \(\text{pr}^{*}\) & 98.49 & 98.51 & 98.43 & 98.44 & **98.54** \\ \(\text{sl}^{*}\) & 98.94 & **99.03** & 98.90 & 98.96 & 99.02 \\ \(\text{sv}\) & **98.86** & 98.71 & 98.77 & 98.81 & 98.81 \\ \hline \multicolumn{5}{l}{_avg_*} \\ \hline \multicolumn{5}{l}{_low-resource_} \\ \hline \(\text{el}^{*}\) & 98.56 & 98.41 & 98.20 & 98.57 & **98.59** \\ \(\text{et}^{*}\) & 95.34 & 94.76 & 95.46 & 95.27 & **95.78** \\ \(\text{ga}^{*}\) & 93.34 & 92.49 & 93.32 & 93.10 & **93.52** \\ \(\text{hu}\) & 96.82 & 96.87 & 97.01 & **97.14** & 96.94 \\ \(\text{ro}\) & **95.74** & 94.65 & **95.32** & **95.74** & 95.14 \\ \(\text{ta}^{*}\) & 85.63 & 84.55 & 84.55 & 87.16 & **88.32** \\ \(avg^{*}\) & 94.24 & 93.62 & 93.98 & 94.50 & **94.72** \\ \hline \multicolumn{5}{l}{_avg_**(all)**\({}^{*}\)} \\ \hline \hline \end{tabular} \begin{tabular}{l c c c c} \hline \hline & **Multi.** & **Mono.** & **Family** & \begin{tabular}{c} **Embed.** \\ **(prior)** \\ \end{tabular} &
\begin{tabular}{c} **GradSim** \\ **(ours)** \\ \end{tabular} \\ \hline \multicolumn{5}{l}{_high-resource_} \\ \hline \(\text{ar}^{*}\) & 86.65 & 85.25 & 84.92 & 85.25 & **88.02** \\ \(\text{he}\) & 84.21 & 84.51 & 82.47 & **84.83** & 84.06 \\ \(\text{da}^{*}\) & 90.00 & 87.57 & 98.64 & 90.49 & **91.65** \\ \(\text{de}\) & 84.42 & 82.42 & 84.18 & 85.73 & **87.27** \\ \(\text{en}\) & 81.97 & 77.91 & 81.28 & **83.37** & 83.31 \\ \(\text{es}^{*}\) & 98.59 & 82.01 & 88.87 & 89.90 & **90.85** \\ \(\text{fr}\) & 88.22 & 82.83 & 87.78 & **89.79** & 89.54 \\ \(\text{hi}^{*}\) & 87.30 & 84.04 & 85.51 & 87.17 & **88.25** \\ \(\text{it}\) & **92.27** & 66.03 & 88.60 & 90.52 & 90.72 \\ \(\text{ru}^{*}\) & 88.32 & 88.18 & 87.77 & 88.55 & **88.70** \\ \(\text{ko}\) & 85.97 & 86.54 & 84.66 & **86.91** & 85.92 \\ \(\text{ja}^{*}\) & 71.08 & 66.83 & 66.83 & 71.40 & **75.56** \\ \(\text{zh}\) & **79.36** & 73.66 & 73.66 & 79.12 & 75.49 \\ \(avg^{*}\) & 85.34 & 82.14 & 83.55 & 85.62 & **86.10** \\ \hline \multicolumn{5}{l}{_low-resource_} \\ \hline \(\text{sv}^{*}\) & 88.22 & 63.30 & 55.11 & 90.13 & **90.22** \\ \(\text{yo}^{*}\) & 77.24 & 7.74 & 21.81 & 85.33 & **86.22** \\ \(avg^{*}\) & 82.73 & 35.52 & 38.46 & 87.73 & **88.22** \\ \hline \multicolumn{5}{l}{_avg_**(all)**\({}^{*}\)} \\ \hline \hline \end{tabular}
\end{table}
Table 3: Results on WikiAnn, a NER benchmark, in micro F1. The numbers of the four baseline / previous state-of-the-art methods are taken from Shaffer (2021) and micro-averaged over the different classes. * indicates the settings with statistically significant improvements using GradSim compared to Embed. (prior), the second-best system. Further baseline results are given in Table 10 in Appendix B for space reasons.
The complete set of keywords for all languages is provided in Appendix D (see Table 15).
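A minimal sketch of the keyword extraction step is given below: each language's concatenated tweets are treated as one document, so the IDF term down-weights words shared across languages. This one-document-per-language choice and the default tokenization are assumptions, since the exact preprocessing is not specified above.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

def top_keywords(tweets_by_lang, k=50):
    """Top-k TF-IDF keywords per language, treating each language's
    concatenated tweets as a single document (assumption)."""
    langs = list(tweets_by_lang)
    docs = [" ".join(tweets_by_lang[lang]) for lang in langs]
    vectorizer = TfidfVectorizer()
    scores = vectorizer.fit_transform(docs)                 # (n_langs, vocab)
    vocab = np.array(vectorizer.get_feature_names_out())
    return {lang: vocab[np.argsort(scores[i].toarray().ravel())[::-1][:k]].tolist()
            for i, lang in enumerate(langs)}

# usage (hypothetical corpora); the resulting keywords would then be
# machine-translated to English and manually filtered, as described above:
# keywords = top_keywords({"sw": sw_tweets, "am": am_tweets, "ts": ts_tweets})
```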
While the keywords extracted for Swahili (_sw_) and Amharic (_am_) are mainly centered around political and administrative topics, e.g., national, minister, education, government, etc., the keywords for Xitsonga (_ts_) are more related to every-day life aspects. The multilingual model performance reveals that indeed Swahili and Amharic can effectively be trained together while Swahili and Xitsonga rather harm each other, even though Swahili and Xitsonga belong to the same language family and Swahili and Amharic do not. When looking at the language grouping results, GradSim indeed groups _sw_ and _am_ together (see Table 12 in Appendix D) and is thus able to capture their topical similarity, while a language family-based grouping would cluster _sw_ and _ts_ into the same group.
### Correlation Analysis
Table 5 provides the results of our correlation analysis that we perform on the AfriSenti dataset. In particular, we compute the Pearson correlation coefficient between the different grouping methods (similarity measures) that we study and different characteristics, such as model-specific characteristics (measured by cross-lingual transfer score), language-specific characteristics (measured by typological similarity) and topic-specific characteristics (measured by keyword embedding distance).
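Such correlations can be computed as sketched below, by comparing two language-similarity matrices entry-wise over the off-diagonal language pairs; whether all ordered pairs or only the upper triangle are used is not stated above, so the off-diagonal choice here is an assumption.

```python
import numpy as np
from scipy.stats import pearsonr

def matrix_correlation(S_a, S_b):
    """Pearson correlation between two language-similarity measures,
    taken over all off-diagonal language pairs (assumption, see text)."""
    mask = ~np.eye(S_a.shape[0], dtype=bool)
    return pearsonr(S_a[mask], S_b[mask])

# usage (hypothetical matrices from Step I and from transfer experiments):
# r, p = matrix_correlation(S_gradient, S_transfer)
```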
From the results, we can draw a number of interesting conclusions, namely:
(i) The transfer score is not correlated with language family information and only weakly correlated with embedding-based similarity measures often used in related work. For gradient similarity, we see considerably higher correlation values, supporting our proposal of using this similarity measure for language grouping.
(ii) There is a relatively weak correlation between the cross-lingual transfer score and the typological similarity, while language family and embedding-based similarity measures show a high correlation with typological language similarity. This indicates that these similarity measures capture the linguistics-based language information well, which, however, does not translate into better transfer performance. Similar to the oracle measure (transfer score), gradient similarity based on the classifier parameters is only weakly correlated with typological language similarity.
\begin{table}
\begin{tabular}{l|l} \hline \hline
**Language (family)** & **Keywords** \\ \hline Swahili & god, thank you, major, national, \\ (Niger-Congo) & minister, better, package, service, \\ & continue, dr, education, citizens, \\ & news, world, construction, people, \\ & region, police, state, president, father, army \\ \hline Amharic & flower, city, season, a matter, \\ (Afro-Asiatic) & government, discussion, december, press release, district, information, administration, public, government, the racist, man, poison \\ \hline Xitsonga & mozambique, listen, wake up, \\ (Niger-Congo) & awake, live, conform, sugar, home, \\ & lake, leave, speed, connect, come \\ \hline \hline \end{tabular}
\end{table}
Table 6: Exemplary keywords from AfriSenti tweets of different languages (reduced to a subset for space reasons, full set is provided in Appendix D (Table 15)).
\begin{table}
\begin{tabular}{l|c|c|c} \hline \hline \multirow{2}{*}{**Grouping methods**} & \multicolumn{3}{c}{**Pearson correlation coefficient**} \\ \cline{2-4} & \(\leftrightarrow\) **transfer score** & \(\leftrightarrow\) **typological similarity** & \(\leftrightarrow\) **topic similarity** \\ \hline \multicolumn{4}{l}{_Baseline_} \\ \hline Cross-Transfer Score (oracle) & 1 & 0.353 & 0.5079 \\ Language Family & 0.0869 & 0.5278 & 0.2680 \\ Typological similarity & 0.3530 & 1 & 0.0353 \\ Embedding distance (PLM) & 0.4029 & 0.7696 & -0.2383 \\ Embedding distance (FT) & 0.4667 & 0.7240 & -0.0252 \\ \hline \multicolumn{4}{l}{_Gradient similarity wrt. different layers (from deep to shallow)_} \\ \hline Classification layer & **0.6963** & 0.3944 & **0.4749** \\ Encoder layer 23 & 0.6485 & 0.6377 & 0.1486 \\ Encoder layer 21 & 0.5526 & 0.7811 & 0.1134 \\ Encoder layer 18 & 0.4462 & 0.8181 & -0.0601 \\ Encoder layer 15 & 0.4602 & 0.8329 & 0.1083 \\ Encoder layer 12 & 0.4532 & 0.8566 & -0.0731 \\ Encoder layer 6 & 0.4542 & 0.8586 & -0.0342 \\ Encoder layer 0 & 0.4526 & 0.8526 & 0.0721 \\ \hline \hline \end{tabular}
\end{table}
Table 5: Results of our correlation analysis on the AfriSenti dataset.
(iii) Based on the keywords extracted for our analysis in Section 5.1, we retrieve keyword embeddings from the pretrained model encoder and average them for each language. We then compare the similarities of the keyword-based language embeddings with our different similarity measures using Pearson correlation. We find that they are only weakly correlated with language family information and even weakly negatively correlated with embedding distances. However, the correlation with the cross-transfer score and our proposed gradient similarity is larger, indicating that the gradient similarity can indeed pick up the topic information of the data.
(iv) While higher layers are more strongly correlated with task performance, lower layers show a higher correlation with typological distance. This indicates that lower layers encode rather general language-specific information while higher layers capture task-related information.
### Ablation Study
**Gradients from different layers.** Table 7 shows an ablation study of our model. The main design choice of our approach is the position in the model at which to take the gradients. In our analysis, we compare model performance when using gradients from different layers. We see a clear trend that higher layers are better suited than lower layers. In particular, taking the gradients directly from the classification layer leads to the best results.
**Number of language groups.** For our experiments, we choose the number of language groups \(K\) to be the same as the number of language families covered in the datasets.6 However, \(K\) is a hyperparameter of our method. Therefore, we investigate the performance for \(K\in\{1\ldots 8\}\) in Figure 3. Choosing \(K=2\) can already improve the performance compared to purely multilingual training (\(K=1\)). Up to \(K=4\), the performance further improves; it then converges for larger \(K\).
Footnote 6: Except for WikiAnn, where we set \(K\) to the same number as prior work (Shaffer, 2021) for a fair comparison.
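Given such a pairwise similarity matrix, the grouping step itself can be sketched as below. The use of agglomerative clustering on the induced distances is an illustrative assumption; the excerpt above does not prescribe a particular clustering algorithm.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering

# sim: (L, L) array of pairwise gradient similarities in [-1, 1]
dist = 1.0 - sim                  # turn similarities into distances
np.fill_diagonal(dist, 0.0)

K = 4  # Figure 3 suggests performance saturates around K = 4
# scikit-learn >= 1.2; older versions use affinity= instead of metric=.
labels = AgglomerativeClustering(
    n_clusters=K, metric="precomputed", linkage="average"
).fit_predict(dist)

groups = {k: [langs[i] for i in np.where(labels == k)[0]] for k in range(K)}
# One multilingual model is then trained per language group.
```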
## 6 Discussion
In this section, we summarize our main findings.
Language similarity is not enough to determine transfer suitability.When sharing knowledge across languages, information about linguistics-based language similarity (e.g., whether the languages come from the same language family or how typologically similar they are) is not enough for optimal performance. This observation is in line with the findings by Tan et al. (2019), Shaffer (2021) and Malkin et al. (2022) that languages from the same family may still exhibit distinct linguistic features and, thus, language-family based grouping can enhance model performance only to a certain extent. In addition, we find that other aspects, such as the topical distribution of the data, also affect multilingual model performance and therefore need to be taken into account.
Gradient-based method does not require any prior knowledge.Our proposed gradient-based approach for grouping languages is a purely model-based approach and thus does not require any prior knowledge about the language, task or data. As a result, it can be successfully applied even when the data distribution (e.g., topical distribution) is unknown (e.g., because we are dealing with foreign languages). While our current work only presents results for language grouping for multilingual models, the method itself is more general and can be applied to other settings as well, such as multi-task learning or multi-domain setups.
\begin{table}
\begin{tabular}{l c} \hline \hline
**Gradient from layer** & **Task performance** \\ \hline
Classification layer & **71.34** \\
Encoder layer 23 & 69.95 \\
Encoder layer 21 & 69.43 \\
Encoder layer 18 & 69.21 \\
Encoder layer 15 & 68.91 \\
Encoder layer 12 & 69.19 \\
Encoder layer 6 & 69.19 \\
Encoder layer 0 & 68.22 \\ \hline \hline
\end{tabular}
\end{table}
Table 7: Ablation study of gradients from different layers on the AfriSenti dataset.
Figure 3: Ablation study of different numbers of groups on AfriSenti: average F1 w.r.t. the number of groups.
Lower layers capture language, upper layers task information.Adding to previous work on analyzing transformer-based pretrained language models [1, 19, 20], our correlation analysis shows that gradient similarities between languages computed from lower layers are more strongly correlated with language-specific distances, i.e., lower layers seem to encode language-specific information, while gradient similarities from upper layers are more strongly correlated with task-specific performance, i.e., upper layers tend to capture task-specific information.
## 7 Conclusion
In this paper, we addressed the challenging problem of grouping languages for effective multilingual training. We proposed a gradient-based grouping approach and showed in our experiments that it correlates better with cross-lingual transfer performance than language-family or language-embedding based grouping. In our analysis, we identified topical distribution differences as one potential challenge that can be addressed effectively by our approach. Furthermore, our correlation analysis confirmed results from prior work that lower layers of transformer-based pretrained models seem to encode language-specific features, while upper layers capture task-specific information. Our method shows superior performance compared to a variety of baseline methods for language grouping on three diverse datasets and, in particular, sets the new state of the art on a multilingual sentiment analysis benchmark dataset consisting of low-resource African languages.
## Limitations
One limitation of our work is the scope of evaluation. While we performed experiments on three diverse text classification and sequence tagging tasks, GradSim is generally applicable to a wide range of tasks and could thus be evaluated on even further tasks.
Besides, our experiments currently focus on multilingual settings and datasets. Experiments for multi-domain and multi-task settings are outside the scope of this work, however, an interesting direction for future work.
Finally, compared to the large number of languages in the world, the set of languages in our work is still limited and, thus, our results might not be representative for all languages of the world. However, we chose the datasets for our experiments with the aim of covering a broad variety of languages, including African languages which are typically under-explored in NLP research.
## Ethics Statement
Our work focuses on multilingual and low-resource settings. For instance, we investigate our models on African languages which are typically under-represented and under-explored in NLP research. Including them into NLP research is important from an ethical point of view.
|
2302.02504 | Motion-compensated MR CINE reconstruction with reconstruction-driven
motion estimation | In cardiac CINE, motion-compensated MR reconstruction (MCMR) is an effective
approach to address highly undersampled acquisitions by incorporating motion
information between frames. In this work, we propose a novel perspective for
addressing the MCMR problem and a more integrated and efficient solution to the
MCMR field. Contrary to state-of-the-art (SOTA) MCMR methods which break the
original problem into two sub-optimization problems, i.e. motion estimation and
reconstruction, we formulate this problem as a single entity with one single
optimization. Our approach is unique in that the motion estimation is directly
driven by the ultimate goal, reconstruction, but not by the canonical
motion-warping loss (similarity measurement between motion-warped images and
target images). We align the objectives of motion estimation and
reconstruction, eliminating the drawbacks of artifacts-affected motion
estimation and therefore error-propagated reconstruction. Further, we can
deliver high-quality reconstruction and realistic motion without applying any
regularization/smoothness loss terms, circumventing the non-trivial weighting
factor tuning. We evaluate our method on two datasets: 1) an in-house acquired
2D CINE dataset for the retrospective study and 2) the public OCMR cardiac
dataset for the prospective study. The conducted experiments indicate that the
proposed MCMR framework can deliver artifact-free motion estimation and
high-quality MR images even for imaging accelerations up to 20x, outperforming
SOTA non-MCMR and MCMR methods in both qualitative and quantitative evaluation
across all experiments. The code is available at
https://github.com/JZPeterPan/MCMR-Recon-Driven-Motion. | Jiazhen Pan, Wenqi Huang, Daniel Rueckert, Thomas Küstner, Kerstin Hammernik | 2023-02-05T22:51:27Z | http://arxiv.org/abs/2302.02504v2 | # Reconstruction-driven motion estimation for motion-compensated MR CINE imaging
###### Abstract
In cardiac CINE, motion-compensated MR reconstruction (MCMR) is an effective approach to address highly undersampled acquisitions by incorporating motion information between frames. In this work, we propose a deep learning-based framework to address the MCMR problem efficiently. Contrary to state-of-the-art (SOTA) MCMR methods which break the original problem into two sub-optimization problems, i.e. motion estimation and reconstruction, we formulate this problem as a single entity with one single optimization. We discard the canonical motion-warping loss (similarity measurement between motion-warped images and target images) to estimate the motion, but drive the motion estimation process directly by the final reconstruction performance. The higher reconstruction quality is achieved without using any smoothness loss terms and without iterative processing between motion estimation and reconstruction. Therefore, we avoid non-trivial loss weighting factors tuning and time-consuming iterative processing. Experiments on 43 in-house acquired 2D CINE datasets indicate that the proposed MCMR framework can deliver artifact-free motion estimation and high-quality MR images even for imaging accelerations up to 20x. The proposed framework is compared to SOTA non-MCMR and MCMR methods and outperforms these methods qualitatively and quantitatively in all applied metrics across all experiments with different acceleration rates.
motion-compensated reconstruction, Cardiac CINE reconstruction, deep learning, reconstruction-driven registration / motion estimation
## I Introduction
CINE cardiac magnetic resonance imaging (CMR) serves as a versatile tool for characterizing cardiac morphology and assessing cardiac function. Quantitative indicators such as volume and ejection fraction can be calculated from CMR and an evidence-based diagnosis of cardiovascular disease can be accomplished. A reconstruction with high spatial and temporal resolutions across the whole cardiac sequence is an indispensable prerequisite for CMR. In this context, a short scan time, ideally within a single breath-hold, is preferred to alleviate the patients' scan discomfort and prevent potential image artifacts due to patient motion. To this aim, only a limited amount of k-space (frequency domain) data can be collected for every temporal frame, violating the Nyquist-Shannon sampling theorem and resulting in aliasing artifacts in the image domain. In the past decade, Parallel Imaging [1, 2] and Compressed Sensing [3, 4] were introduced in CMR, enabling shorter scan time and improved reconstruction performance. However, reconstruction performance can be further improved if adequate spatial-temporal information is shared along the cardiac cycle. This information is linked by the cardiac motion, which bridges every single frame of the whole cardiac sequence and serves as the key to successful reconstruction. A straightforward way to leverage this motion information in CMR reconstruction is to use motion-compensated MR reconstruction (MCMR) [5] in which the cardiac motion has to be estimated. However, precise cardiac motion estimation remains a challenging problem due to the non-rigid nature of the cardiac motion, especially in the case of accelerated imaging where motion has to be estimated from undersampled data.
To circumvent the non-trivial tasks of cardiac motion estimation, different CMR reconstruction methods sidestep the motion estimation and aim to exploit spatio-temporal redundancies. The works of [6, 7] suggested disentangling the original reconstruction problem into a low-rank and a sparse
Fig. 1: The difference between the proposed MCMR framework (bottom) and the conventional MCMR work (top) is shown. The conventional approaches divide the original MCMR problem into two sub-optimization problems: motion estimation and reconstruction. Its motion estimation is optimized by minimizing the intermediate motion-warping loss (brightness similarity measurement between motion-warped images and target images) and if deep learning is used, the motion prediction back-propagation is only exerted on the motion estimation part. In contrast, we develop a deep learning-based framework that predicts the motion from the perspective of our ultimate goal: reconstruction. We discard using any intermediate motion-warping loss. The back-propagation is performed through the whole pipeline and reconstruction-driven motion estimation is established.
component and these two sub-optimizations are carried out jointly. However, the preservation of dynamic information crucially depends on the optimization of the sparse component and the implementation of soft thresholding can incur information loss. Moreover, deep learning reconstructions were proposed e.g. [8, 9] that unroll the dynamic MR optimization process with a spatio-temporal regularization. In this case, multiple unrolled gradient descent steps have to be executed, giving rise to the training difficulty of the network and processing time in both training and testing. Other methods [10, 11, 12] utilized the \(k-t\) domain to leverage the spatio-temporal redundancies to ameliorate the dynamic reconstruction. Whereas all these methods endeavor to extract the spatio-temporal correlation implicitly, there is no guarantee that the correlation of every cardiac frame is fully exploited. On the contrary, MCMR leverages the estimated cardiac motion to explicitly share cardiac spatio-temporal information.
A high-quality MCMR can be performed if the cardiac motion can be estimated precisely over the whole cardiac cycle. Therefore, the selection of a proper motion estimation/registration approach plays a decisive role in MCMR. Conventional registration methods based on B-spline [13, 14] or diffusion method [15] can be employed as motion estimators in MCMR. These methods can provide meaningful registration results but demand enormous computing time in the order of hours for a single CMR sequence. Furthermore, hyper-parameter tuning for these methods [13, 14] is also a non-trivial task, hindering their implementation in clinical practice. Lately, learning-based registration/motion estimation approaches have been introduced into medical imaging [16, 17, 18] and embodied in the application of cardiac motion estimation [19, 20, 21]. These methods accelerate the registration time from hours to seconds by leveraging a trained neural network during inference and mitigating hyper-parameter tuning. However, these cardiac registration methods are not designed for the MCMR context but are designed to minimize the brightness inconsistency of estimated motion-warped images and target images (motion-warping error). Yet in the context of accelerated imaging, the undersampled input images exhibit artifacts and intensity inconsistencies. The direct application of these general motion estimation/registration methods to accelerated imaging data can result in imprecise motion fields and can thus incur error propagation in MCMR. Qi _et al._[22] circumvented this problem by providing reference images in the training loss whilst feeding undersampled data as network inputs. Concurrently, a registration method designed for the MCMR context is proposed by Kustner _et al._[23] in which the registration is directly estimated from the k-space. All aforementioned methods conduct a pair-wise motion estimation and they have to be carried out multiple times in MCMR, in which for every single frame a registration from multiple other frames is required. To provide a more efficient and time-continuous registration, groupwise motion estimation has been studied [24, 25]. In group-wise registration, the spatial-temporal redundancy over multiple frames can be leveraged to facilitate the registration, especially when through-plane motion occurs in the context of 2D CMR. Furthermore, the temporal coherence over the cardiac cycle can be instilled during training by applying a temporal loss term [24].
After the choice of a proper motion estimation/registration method, there are multiple MCMR frameworks available to combine the motion estimator and reconstruction. The seminal work [5] of Batchelor _et al._ pioneered the MCMR concept in which the motion information is embedded as a general matrix into the MR forward model. This work formulated the MCMR problem with two individual stages: motion estimation and reconstruction. The motion estimation in the first stage and the reconstruction in the second stage are both carried out separately, while the pre-calculated motion from the first stage is regarded as a fixed matrix in the second-stage reconstruction [22, 26, 27]. Furthermore, MCMR can also be reformulated as a joint optimization problem in which an iterative optimization of image reconstruction and motion estimation are carried out alternatively. A potential synergy can be established: a more accurate motion estimation can provide a better reconstruction, and based on a less artifacts-affected image a better motion estimation can be accomplished. Odille _et al._ proposed a reconstruction method using sensor-based motion estimation e.g. respiratory belt or ECG signal [28, 29]. The need for external tracking hardware is relieved by adopting B-spline-based and optical flow-based motion estimation in this joint optimization context [30, 31]. More recently, variational methods [32] and dictionary learning [33] are also employed to solve this joint optimization problem for CMR reconstruction. However, all these methods demand a relatively long estimation time because of their iterative optimization nature. Therefore, deep-learning-based methods were proposed to speed up joint optimization. [25, 34] unrolled MCMR joint optimization with a group-wise motion estimation network and the mutual benefit of CMR reconstruction and motion estimation is demonstrated in their work.
However, the decomposition of the MCMR into two sub-optimization problems serves as a workaround to solve MCMR has two major drawbacks: First, the solution space of the full problem is restricted by the solution of the motion-estimation problem itself whose goal is to minimize the motion-warping loss between different cardiac frames. This goal is not necessarily aligned with the final reconstruction objective due to undersampled images' artifact-degradation and intensity-inconsistency amongst cardiac frames. Second, extra efforts have to be built in to cope with motion estimation in the case of accelerated imaging with undersampled data, e.g. extra pre-processing steps with intra-bin motion correction [26, 27], loss function tuning [22] or k-space motion estimation [35]. Although the estimation difficulty of the motion can be reduced if the alternating joint optimization is used, it requires multiple iterations of motion estimation and reconstruction to yield satisfactory reconstruction, prolonging the processing time. On the contrary, in this work we propose an MCMR framework that optimizes the complete MCMR framework together without breaking it into two sub-optimization problems.
Moreover, all aforementioned MCMR methods follow the suggestions of [5] which applied all temporal frames to reconstruct one single frame of the sequence so that all temporal redundancy can be exploited. We argue in this work that using a smaller amount of temporal frames to conduct the MCMR
can achieve a better result. This setting reduces the residual motion-warping error from other temporal frames while still leveraging enough redundant information.
In summary, the main contributions of our work are as follows: **(a)** We propose a deep learning-based approach, which efficiently solves the motion-compensated reconstruction and addresses the MCMR problem as a single entity. Our framework estimates motion from the perspective of CMR reconstruction, rather than motion estimation alone. We establish an efficient mechanism in which the motion estimation process is directly driven by the final reconstruction results (refer to Fig. 1) and without using iterative joint optimization of motion estimation and reconstruction. **(b)** We investigate the optimal number of temporal frames to use during the MCMR. We observe that using a smaller amount of frames to reconstruct the cardiac frames achieves better performance than using all frames of a sequence. We find a balance between the exploitation of sequence redundancy and the suppression of residual warping error, which can inspire all other MCMR methods. **(c)** We demonstrated the reconstruction of images from undersampling rates up to 20x with the optimization only depending on one final reconstruction loss term. The canonical motion-warping loss including smoothness terms that serve as an intermediate loss in MCMR is discarded in this work. Therefore, we avoid the non-trivial weighting factor tuning. We applied our method on 43 in-house acquired CMR CINE data and compared it to several canonical and SOTA methods. Our method outperforms the baselines with superior qualitative and quantitative results.
## II Problem Formulation
### _General MR Reconstruction_
Let \(x^{(t)}\in\mathbb{C}^{N}\) indicate the \(t\)-th complex-valued temporal frame of the dynamic CINE sequence \(\mathbf{x}=[x^{(1)},\ldots,x^{(T)}]\) stacked as a column vector and \(N\) denotes the amount of pixels in the 2D plane, i.e. \(N=N_{X}N_{Y}\) with \(X\), \(Y\) the height and width of the frame and \(T\) the number of temporal phases. \(y^{(t)}\in\mathbb{C}^{SN}\) from \(\mathbf{y}=[y^{(1)},\ldots,y^{(T)}]\) is the corresponding undersampled k-space data with \(S\) being the number of MR receiver coils. Regarding the CMR reconstruction task of a retrospectively gated CINE, the following inverse problem has to be solved:
\[\min_{x^{(t)}}\left\|\mathbf{A}^{(t)}x^{(t)}-y^{(t)}\right\|_{2}^{2},\ t=1, \ldots,T. \tag{1}\]
\(\mathbf{A}^{(t)}\) represents the MR forward multi-coil encoding operator with \(\mathbf{A}^{(t)}=\mathbf{D}^{(t)}\mathbf{FS}\), in which \(\mathbf{S}\in\mathbb{C}^{SN\times N}\) denotes the coil sensitivity maps, \(\mathbf{F}\in\mathbb{C}^{SN\times SN}\) is the forward Fourier encoding matrix, \(\mathbf{D}^{(t)}\in\mathbb{R}^{SN\times SN}\) is the undersampling mask diagonal matrix. Eq. (1) can be solved by using general conjugate-gradient SENSE (CG-SENSE) [36] reconstruction which is performed \(T\) times to reconstruct these \(T\) cardiac frames. However, this general MR reconstruction method optimizes every cardiac frame \(x^{(t)}\) separately regardless of the adequate temporal information across the cardiac sequence. Therefore, its reconstruction performance is limited with respect to the undersampling ratio. In this work, we use this general CG-SENSE as an initialization step (_Reconstruction Initialization_ in Fig. 2) for the following MCMR task.
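As a concrete illustration of the single-frame forward model \(\mathbf{A}^{(t)}=\mathbf{D}^{(t)}\mathbf{F}\mathbf{S}\) and its adjoint, the following NumPy sketch may be helpful; the array shapes and centered-FFT convention are assumptions for illustration, not the authors' code.

```python
import numpy as np

def forward(x, smaps, mask):
    """A x = D F S x for one frame.
    x: (H, W) complex image; smaps: (S, H, W) coil sensitivities;
    mask: (H, W) binary Cartesian sampling mask (the diagonal of D)."""
    coil_imgs = smaps * x[None]                                   # S x
    ksp = np.fft.fftshift(
        np.fft.fft2(np.fft.ifftshift(coil_imgs, axes=(-2, -1)),
                    norm="ortho"), axes=(-2, -1))                 # F S x
    return mask[None] * ksp                                       # D F S x

def adjoint(y, smaps, mask):
    """A^H y = S^H F^H D y, the adjoint used e.g. inside CG-SENSE."""
    coil_imgs = np.fft.fftshift(
        np.fft.ifft2(np.fft.ifftshift(mask[None] * y, axes=(-2, -1)),
                     norm="ortho"), axes=(-2, -1))
    return np.sum(np.conj(smaps) * coil_imgs, axis=0)
```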
### _Motion-compensated MR reconstruction with a varying number of input neighboring frames_
As mentioned above, leveraging the temporal information in the cardiac sequence can facilitate the CMR reconstruction. The spatial-temporal redundant information is bridged by the cardiac motion. Following the work of Batchelor _et al._[5], motion is embedded into the MR forward model and information from other temporal frames can be leveraged as complements:
\[\min_{x^{(t)}}\left\|\mathbf{A}^{(K)}\mathbf{U}^{(t\to K)}x^{(t)}- \mathbf{y}^{(K)}\right\|_{2}^{2},\ t=1,\ldots,T \tag{2}\]
where \(K=2k+1\) denotes the neighboring \(\pm k\) frames of the frame \(t\). The k-spaces \(\mathbf{y}^{(K)}=[y^{(t-k)},\ldots,y^{(t)},\ldots,y^{(t+k)}]\in\)
\(\mathbb{C}^{SNK}\) are used as complementary neighboring data to reconstruct the frame \(x^{(t)}\). We assume periodicity in the cardiac cycle, i.e. the previous frame to \(x^{(0)}\) is regarded as \(x^{(T)}\). \(\mathbf{U}^{(t\to K)}\in\mathbb{R}^{NK\times N}\) denotes the cardiac motion matrix and warps \(x^{(t)}\) to the \(K\) cardiac frames. By means of \(\mathbf{U}^{(t\to K)}\), the redundancy and correlation of the neighboring cardiac frames of \(x^{(t)}\) are instilled for the \(t\)-th frame reconstruction. It should be noted that our MCMR framework differs from the original MCMR framework [5] which applied all temporal frames \(K=T\) to conduct the reconstruction, while in our case we choose \(K<T\) as detailed in Section V-A. Analogously to \(\mathbf{A}^{(t)}\), \(\mathbf{A}^{(K)}=\mathbf{D}^{(K)}\mathbf{F}\mathbf{S}\in\mathbb{C}^{SNK\times NK}\) denotes the CMR forward model for these \(K\) frames.
## III Method
In this work, we propose a deep-learning-based framework to reconstruct the dynamic CINE images. This framework consists of two parts: a _Motion Estimation Block_ which estimates the cardiac motion and a _Motion-Compensated Reconstruction Block_ which carries out the motion-compensated reconstruction, as depicted in Fig. 2. In contrast to all the previously proposed MCMR works, our framework can be trained end-to-end and regards the motion estimation and reconstruction processes as a single entity instead of splitting them into two sub-tasks. Furthermore, unrolling the iterative procedure of motion estimation and reconstruction prolongs the processing time and renders itself inefficient. In this work we aim to estimate precise motion directly from the undersampled data by using a one-shot prediction with a motion estimator \(\mathcal{G}\) and then solve the inverse problem with an \(\ell_{2}\) regularizer using the initial sequence \(\mathbf{x}_{u}\) provided by the _Reconstruction Initialization_ block, which reads:
\[\hat{\mathbf{U}} =\mathcal{G}\left(\mathbf{x}_{u}\right) \tag{3a}\] \[\hat{\mathbf{x}} =\arg\min_{\mathbf{x}}\left\|\mathbf{A}\hat{\mathbf{U}}\mathbf{x }-\mathbf{y}^{(TK)}\right\|_{2}^{2}+\lambda\left\|\mathbf{x}-\mathbf{x}_{u} \right\|_{2}^{2}, \tag{3b}\]
where \(\lambda\) denotes the weighting factor of the applied \(\ell_{2}\) term. \(\hat{\mathbf{U}}\in\mathbb{R}^{NTK\times NT}\) denotes the estimated cardiac motion which is used inside the MCMR, and \(\hat{\mathbf{x}}\) denotes the final reconstructed image for all cardiac frames. \(\mathbf{y}^{(TK)}=[y^{(1-k)},\ldots,y^{(1+k)},\ldots\ldots,y^{(T-k)},\ldots,y^{(T+k)}]\) extends \(\mathbf{y}^{(K)}\) and denotes the complementary neighboring frames adopted to reconstruct every \(x^{(t)}\) of the cardiac sequence \(\mathbf{x}\).
### _Motion Estimation Block_
We utilize a learning-based motion estimation network \(\mathcal{G}\) with trainable parameters \(\theta\) to predict the non-rigid cardiac motion. The backbone of GRAFT [24] is applied to model \(\mathcal{G}\). GRAFT is a group-wise motion estimation network that takes the undersampled cardiac sequence \(\mathbf{x}_{u}\) as input and predicts the motion between the frames. Its inherent _Temporal Information Enhancement_ Block consists of convolutional layers which take the target frame along with its one previous and one subsequent cardiac frame as input and extract the spatial-temporal information from them. By means of that, the problems of through-plane motion and occlusion can be alleviated. Afterward, a _Feature Encoder_ is incorporated which processes the embedding from the _Temporal Information Enhancement_ Block and extracts the information from the image sequence. Subsequently, a 4D-Correlation layer is applied to compute the correlation over the 2D spatial plane and a Gated Recurrent Unit (GRU) is employed to conduct an iterative motion estimation. Finally, the motion is upsampled \(4\times\) to the original image size. This process is carried out \(K\) times, and in the end GRAFT produces a motion field \(\hat{\mathbf{U}}\) mapping from dimension \(NT\) to \(NTK\).
Usually, a warping similarity measurement \(\mathcal{L}_{w}\) is utilized to drive the learning of the motion estimation network: \(\mathcal{L}_{w}(\mathbf{x}^{(K)},\hat{\mathbf{U}}^{(t\to K)}x^{(t)})\), with \(\mathbf{x}^{(K)}=[x^{(t-k)},\ldots,x^{(t)},\ldots,x^{(t+k)}]\) the target frames from \(\mathbf{x}\) and \(\hat{\mathbf{U}}^{(t\to K)}x^{(t)}\) its corresponding warped estimation. However, \(\mathcal{L}_{w}\) is just an intermediate motion-warping loss function in the context of MCMR. As mentioned in Section I, the effectiveness of this loss is undermined as the undersampling rate increases (more aliasing and severe intensity inconsistency), and its goal diverges from that of improving the final reconstruction quality. Furthermore, the utilization of \(\mathcal{L}_{w}\) after the _Motion Estimation Block_ breaks the original MCMR optimization into two sub-tasks, introducing the drawbacks mentioned in Section I. In this work, we do not calculate \(\mathcal{L}_{w}\) at this intermediate position but forward the output motion \(\hat{\mathbf{U}}\) of \(\mathcal{G}\) to the subsequent _Motion-Compensated Reconstruction Block_. Since no network loss function has been applied yet, the motion prediction \(\hat{\mathbf{U}}\) with learnable parameters \(\theta\) is still pending and the complete forward chain of the applied deep learning model is yet to be established by the subsequent _Motion-Compensated Reconstruction Block_.
### _Motion-Compensated Reconstruction Block_
The _Motion-Compensated Reconstruction Block_ is a complex-valued operator which executes the CINE reconstruction and serves as the forward pass through which the network prediction \(\hat{\mathbf{U}}\) reaches the final loss function. This block solves Eq. (3b) by finding the stationary point of the normal equation:
\[\underbrace{\left(\hat{\mathbf{U}}^{H}\mathbf{A}^{H}\mathbf{A}\hat{\mathbf{U}}+\lambda\mathbf{I}\right)}_{\mathcal{V}}\hat{\mathbf{x}}=\underbrace{\left(\hat{\mathbf{U}}^{H}\mathbf{A}^{H}\mathbf{y}^{(TK)}+\lambda\mathbf{x}_{u}\right)}_{\mathbf{b}}. \tag{4}\]
Since the inverse of matrix \(\mathcal{V}\) is computationally prohibitive to calculate, we adopt Conjugate Gradient (CG) [37] to solve this problem in an iterative manner until the process converges. Therefore, we present the final output of the _Motion-Compensated Reconstruction Block_ in the basis of a set of conjugate vectors \(\mathcal{P}=\{\mathbf{p}_{0},...,\mathbf{p}_{I}\}\):
\[\hat{\mathbf{x}}=\sum_{i=0}^{I}\alpha_{i}\mathcal{V}\mathbf{p}_{i}. \tag{5}\]
Here, \((\mathbf{p}_{i})^{H}\mathcal{V}\mathbf{p}_{j}=0\) for all \(i\neq j\) and \(\alpha_{i}\) are the corresponding basis coefficients. In the following, we describe the conjugate gradient algorithm to make the end-to-end nature of our proposed approach explicit. Details of the applied CG are shown in Alg. 1. \(\mathbf{p}_{i}\) and \(\alpha_{i}\) can be obtained during the iteration
update of CG and \(I\) is its predefined total iteration number. All \(\alpha_{i}\) and \(\mathbf{p}_{i}\) are calculated based on \(\mathcal{V}\) while \(\mathcal{V}\) is computed based on the motion matrix \(\hat{\mathbf{U}}\). In contrast to conventional works, it is important to note that \(\hat{\mathbf{U}}\) here is not a static fixed matrix but still a pending variable from \(\mathcal{G}(\mathbf{x}_{u})\). Its trainable parameters \(\theta\) still wait for updates through back-propagation on a higher-level loss function for network training. To express this more clearly, we reformulate Eq. (5) with Eq. (4) as:
\[\hat{\mathbf{x}}=\sum_{i=0}^{I}\alpha_{i}\left(\mathcal{G}^{H}(\mathbf{x}_{u}) \mathbf{A}^{H}\mathbf{A}\mathcal{G}(\mathbf{x}_{u})+\lambda\mathbf{I}\right) \mathbf{p}_{i}. \tag{6}\]
Note that \(\alpha_{i}\) and \(\mathbf{p}_{i}\) also depend on the motion estimation network \(\mathcal{G}(\mathbf{x}_{u})\) (refer to Alg. 1) which in turn depends on the trainable parameters \(\theta\).
Finally, we define our loss function \(\mathcal{L}_{r}\) as the mean squared error between the reconstruction estimation \(\hat{\mathbf{x}}\) and the reference reconstruction target \(\mathbf{x}_{\text{ref}}\). Thus, the final learning-based optimization function can be represented as:
\[\mathcal{L}_{r}=\left\|\sum_{i=0}^{I}\alpha_{i}\left(\mathcal{G}^{H}(\mathbf{x }_{u})\mathbf{A}^{H}\mathbf{A}\mathcal{G}(\mathbf{x}_{u})+\lambda\mathbf{I} \right)\mathbf{p}_{i}-\mathbf{x}_{\text{ref}}\right\|_{2}^{2}. \tag{7}\]
Now, the complete deep-learning forward chain is established and \(\theta\) can be updated by gradient back-propagation. An end-to-end MCMR framework is cast without employing any intermediate motion-warping loss. In this respect, the motion estimation process is directly guided and driven by feedback from the final reconstruction performance but not by the motion estimation/registration. The goal of motion compensation is now aligned with the final reconstruction goal.
```
1:\(\mathbf{r}_{0}:=\mathbf{b}\), \(\mathbf{x}_{0}=\mathbf{0}\)\(\triangleright\)\(\mathbf{r}\) is the residual
2:\(\mathbf{p}_{0}:=\mathbf{r}_{0}\), \(i:=0\)
3:while\(i<I\)do
4:\(\alpha_{i}:=(\mathbf{r}_{i}^{H}\mathbf{r}_{i})/((\mathbf{p}_{i})^{H}\mathcal{V}\mathbf{p}_{i})\)
5:\(\mathbf{x}_{i+1}:=\mathbf{x}_{i}+\alpha_{i}\mathbf{p}_{i}\)
6:\(\mathbf{r}_{i+1}:=\mathbf{r}_{i}-\alpha_{i}\mathcal{V}\mathbf{p}_{i}\)
7:\(\beta_{i}:=((\mathbf{r}_{i+1})^{H}\mathbf{r}_{i+1})/((\mathbf{r}_{i})^{H}\mathbf{r}_{i})\)
8:\(\mathbf{p}_{i+1}:=\mathbf{r}_{i+1}+\beta_{i}\mathbf{p}_{i}\)
9:\(i:=i+1\)
10:endwhile
```
**Algorithm 1** Algorithm of Conjugate Gradient
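For reference, Alg. 1 translates directly into a few lines of NumPy. Here `apply_V` is a stand-in for the operator \(\mathcal{V}\) of Eq. (4); this is a sketch, not the authors' implementation.

```python
import numpy as np

def conjugate_gradient(apply_V, b, num_iters=10):
    # Solve V x = b for a Hermitian positive (semi-)definite operator V,
    # following Alg. 1 with x_0 = 0 (so r_0 = b).
    x = np.zeros_like(b)
    r = b.copy()
    p = r.copy()
    rr = np.vdot(r, r)                    # r^H r (vdot conjugates)
    for _ in range(num_iters):
        Vp = apply_V(p)
        alpha = rr / np.vdot(p, Vp)       # line 4
        x = x + alpha * p                 # line 5
        r = r - alpha * Vp                # line 6
        rr_new = np.vdot(r, r)
        beta = rr_new / rr                # line 7
        p = r + beta * p                  # line 8
        rr = rr_new
    return x
```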
## IV Experiments
### _Dataset and Implementation Details_
43 subjects (27 patients and 16 healthy volunteers) were scanned with a 2D cardiac CINE sequence. The data is acquired in-house on a 1.5T MRI scanner (Magnetom Aera, Siemens Healthineers) with an acquisition sequence of 2D balanced steady-state free precession (bSSFP) equipped with multi-channel body and spine coil resulting in 30, 34 and 38 MR receiver coil channels. A \(2\times\) GRAPPA acceleration generated the CINE data with an in-plane resolution of \(1.9\times 1.9\)mm\({}^{2}\), a slice thickness of 8mm, echo time (TE) of 1.06ms, and repetition time (TR) of 2.12ms. Retrospective gating is used to bin the data into 25 cardiac phases with a temporal resolution of 40ms. Matrix size varies from the smallest size 176 (frequency-encoding) \(\times\) 132 (phase-encoding) to the largest size 192 \(\times\) 192. A stack of 10 to 15 slices for each subject along the long axis was acquired from base to apex under multiple breath-holds (2 slices per breath-hold). Slices without clear cardiac anatomy were discarded, resulting in a total of 366 cardiac motion-resolved image sequences. Retrospective undersampling is performed by Cartesian VISTA [38] sampling with varying acceleration factors.
The proposed framework was implemented in PyTorch (v1.9.0) and trained on an NVIDIA A40 GPU. The AdamW [39] optimizer combined with a one-cycle learning rate scheduler (max. learning rate 0.0001) was used to optimize Eq. (7). The network parameters for the _Motion Estimation Block_ follow [24] and the hyper-parameters \(K\), \(\lambda\) and \(I\) are set to \(9\), \(0.01\) and \(10\) for training and testing. Regarding network training, we adopt either a fixed undersampling rate or a mixed training procedure in which \(R=8\), \(R=12\), \(R=16\) and \(R=20\) undersampled data are selected at random with equal probability (dubbed mixed \(R\) training). During inference, we test our approach on arbitrary undersampling rates. The undersampled raw k-space data is first reconstructed by the _Reconstruction Initialization_ block and then fed to the proposed network.
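A minimal sketch of one training step with the mixed \(R\) procedure is given below. The functions `sample_batch`, `cg_sense_init`, `motion_net` and `mc_recon` are placeholder names for the components described above (retrospective undersampling, the _Reconstruction Initialization_ block, \(\mathcal{G}\), and the differentiable CG solve of Eq. (3b)), not the authors' API.

```python
import random
import torch

num_steps = 10_000  # illustrative
opt = torch.optim.AdamW(motion_net.parameters())
sched = torch.optim.lr_scheduler.OneCycleLR(opt, max_lr=1e-4,
                                            total_steps=num_steps)

for step in range(num_steps):
    R = random.choice([8, 12, 16, 20])        # mixed-R training
    y, mask, smaps, x_ref = sample_batch(R)   # retrospective undersampling
    x_u = cg_sense_init(y, mask, smaps)       # Reconstruction Initialization
    U = motion_net(x_u)                       # Eq. (3a): one-shot motion
    x_hat = mc_recon(U, y, mask, smaps, x_u,  # Eq. (3b): CG solve, kept
                     lam=0.01, cg_iters=10)   # differentiable w.r.t. U
    loss = torch.mean(torch.abs(x_hat - x_ref) ** 2)   # Eq. (7)
    opt.zero_grad()
    loss.backward()                           # back-propagates through CG
    opt.step()
    sched.step()
```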
### _Ablation study_
We investigate the benefits of using Eq. (7) as the loss function in comparison to the widely used motion-warping loss \(\mathcal{L}_{w}\) (refer to III-A), which breaks the MCMR into two sub-tasks. To this end, we carry out four trainings: (1) training only with \(\mathcal{L}_{r}\) (Eq. (7)); (2) training only with \(\mathcal{L}_{w}\); and, only for this ablation study, training with a combined loss function \(\mathcal{L}=\alpha\mathcal{L}_{r}+\beta\mathcal{L}_{w}\), using (3) \(\alpha=100,\beta=1\) and (4) \(\alpha=10,\beta=1\). We use reference images instead of the undersampled images in \(\mathcal{L}_{w}\) during the network training as proposed by [22, 34] to mitigate being affected by aliasing artifacts. All four experiments are trained and tested on \(R=12\).
We further investigate the impact of using different amounts of neighboring frames \(K\) for the dynamic CINE reconstruction during training and testing. In the ideal case, the motion across the whole cardiac cycle can be estimated precisely, and therefore all \(T\) temporal frames should be used to exploit the temporal redundancy. However, the non-rigid contraction and expansion of the heart are challenging to estimate and, given the 2D acquisition nature, through-plane motion and occlusion (especially towards basal slices) can occur. Thus, the residual frame-to-frame warping error cannot be suppressed completely to zero even with SOTA motion estimators. The more neighboring frames considered, the larger the accumulated residual motion error can become. We therefore investigate the optimal number of neighboring frames to use for the CINE reconstruction. We run experiments using neighbouring \(k=\pm 1\) (\(K=3\)), \(\pm 2\), \(\pm 4\), \(\pm 6\), \(\pm 8\), \(\pm 12\) (\(K=25\)) frames with mixed \(R\) training and test on different acceleration rates.
Finally in order to test the method's generalizability, we implement a 5-fold cross-validation (35 subjects for training and 8 for testing). We repeat the experiments five times with the same parameter settings.
### _Baseline comparisons_
We compare our method with six baseline methods. Two SOTA MCMR methods are considered in which the cardiac motion is estimated explicitly prior to the reconstruction. One is GRAFT-Recon [24], which applies GRAFT to predict the cardiac motion by using \(\mathcal{L}_{w}\) loss and then conducts the follow-up reconstruction task separately. The second one is Unrolled-MCMR [34], which performs an iterative unrolled joint optimization of cardiac motion estimation and reconstruction but its motion is also calculated from \(\mathcal{L}_{w}\). Moreover, CG-SENSE [36], L+S [6], MoDL [40] and CTF-Net [12] are adopted as non-MCMR reconstruction methods for comparison. L+S solves the reconstruction by leveraging the decomposed low-rank and sparse matrix, MoDL uses an unrolled scheme with a dealiasing network and a data-consistency term, while CTF-Net tackles the problem by exploiting the \(k-t\) domain redundancy using recurrent networks.
### _Evaluation_
We apply Structural Similarity Index (SSIM) [41], Peak Signal-to-Noise Ratio (PSNR) and Normalized Mean Squared Error (NMSE) to evaluate the reconstruction performance quantitatively. Besides these three metrics, we also employ Learned Perceptual Image Patch Similarity (LPIPS) [42] which has been verified to be closer to human perception. Furthermore, we use a cardiac segmentation network [43] to obtain a bounding box around the heart to focus evaluation on the cardiac anatomy. An offset value of \(10\) pixels is set to extend the bounding box region. All metrics (SSIM, PSNR, NMSE and LPIPS) are evaluated within this heart region.
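A minimal sketch of the ROI-based evaluation is shown below; the bounding-box format is an assumption, and LPIPS is omitted since it requires a learned network (e.g. the `lpips` package).

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def heart_roi(img, bbox, offset=10):
    # bbox = (row0, row1, col0, col1); extend it by `offset` pixels.
    r0, r1, c0, c1 = bbox
    return img[max(r0 - offset, 0):r1 + offset,
               max(c0 - offset, 0):c1 + offset]

def evaluate(recon, ref, bbox):
    x = np.abs(heart_roi(recon, bbox))   # magnitude of complex images
    y = np.abs(heart_roi(ref, bbox))
    nmse = np.sum((x - y) ** 2) / np.sum(y ** 2)
    psnr = peak_signal_noise_ratio(y, x, data_range=y.max())
    ssim = structural_similarity(y, x, data_range=y.max())
    return {"NMSE": nmse, "PSNR": psnr, "SSIM": ssim}
```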
## V Results
### _Baseline comparisons_
We compare our method to six baseline methods (see IV-C). The deep learning approaches, including the proposed network, MoDL [40] and CTF-Net [12], are trained with the mixed \(R\) training procedure (see IV-A), with which these methods achieve their best performance. The mixed \(R\) training procedure is not applied to GRAFT-Recon [24] and Unrolled-MCMR [34], since including highly undersampled data gave rise to unstable training and poor reconstruction performance. In practice we found that GRAFT-Recon achieves its best inference results when using fixed \(R=8\) training compared to all other fixed \(R\) trainings. For Unrolled-MCMR, the fixed \(R=12\) training is the best training strategy. Thus, for GRAFT-Recon a fixed \(R=8\) training is conducted, while for Unrolled-MCMR training we only use \(R=12\) data. Besides, we also set their temporal neighborhood to \(K=9\) with \(\pm 4\) neighboring frames, whilst in their original work they employed all temporal frames, which can cause higher warping errors. After training, all six methods are tested on arbitrary undersampling rates. The quantitative performance evaluated by the metrics PSNR, NMSE, SSIM, and LPIPS is shown in Table IV.
The superior and consistent performance of the proposed method is shown across every single undersampling rate compared to all other baseline methods and regardless of the evaluation metric. It can be noted that learning-based methods e.g. GRAFT-Recon, Unrolled-MCMR, CTF-Net and MoDL outperform conventional methods like CG-SENSE and L+S. Moreover, Unrolled-MCMR consistently demonstrates the second-best performance because of its usage of reference images in the motion-warping loss function and its unrolled iterative optimization mechanism. The advantage of iterative optimization becomes more prominent for higher acceleration rates.
The qualitative comparison of two test subjects (healthy subject and patient) between the proposed network and the non-MCMR methods is illustrated in Fig. 4 for undersampling rates of \(R=8\) and \(R=16\). The corresponding error maps are displayed as well. The proposed network presents a consistent performance in both subjects with the highest PSNR score and lowest residual error. Temporal traces are in good agreement with the fully-sampled reference and cardiac dynamics were recovered by the proposed network. Diagnostic features like papillary muscles are restored clearly without blurring in both cases. Moreover, it is apparent that MoDL does not leverage any temporal information to reconstruct the cardiac frames. While CTF-Net and MoDL still show comparable results for \(R=8\), a severe performance drop for MoDL is perceived in \(R=16\). The same conclusion can also be drawn from Table IV. This indicates the importance of leveraging temporal redundancy during the reconstruction, especially in highly accelerated cases.
Further qualitative comparison of two test subjects (healthy subject and patient) between the proposed framework and other MCMR methods (GRAFT-Recon and Unrolled-MCMR) are demonstrated in Fig. 5. The proposed network outperforms the two compared MCMR methods in both \(R=12\) and \(R=20\). While the proposed framework is only trained with one loss term (without any smoothness terms), it predicts a more meaningful and dense motion field even for \(R=20\). The motion estimation from Unrolled-MCMR is sparse and non-smooth, in spite of the usage of smoothness terms during training. The GRAFT-Recon reveals inferior reconstruction due to the motion estimation being artifact-affected resulting in error propagation amongst frames, while the proposed method yields a reconstruction image without any aliasing in both cardiac region and background.
## VI Discussion
MCMR is a powerful and straightforward concept that has been demonstrated for the reconstruction of cardiac CINE [25, 33, 34, 45]. However, a wide range of MCMR implementations for CINE is precluded by two major unsolved challenges: high-speed MCMR processing and precise artifact-suppressed cardiac motion estimation. In this work, we proposed a learning-based MCMR framework for CINE imaging that copes with these two problems at once. The fast MCMR is achieved by leveraging the trained network to accelerate the estimation process in inference time, whilst the artifact-suppressed motion estimation is achieved using reconstruction-driven motion estimation. We treat the two sub-tasks as a single entity, in which the training loss is back-propagated end-to-end from the final reconstructed images to the motion estimation input.
We performed an ablation study in which the use of an intermediate warping similarity loss is compared to a final reconstruction loss. Results indicate that if the optimization is driven by the final reconstruction loss, not only is the reconstruction performance enhanced, but the motion prediction is also improved. Furthermore, we investigated the optimal number of neighboring frames in the cardiac cycle to be used for MCMR. The conducted experiments
Fig. 4: Qualitative comparison of the proposed method to non-MCMR methods including CG-SENSE [36], L+S [6], MoDL [40] and CTF-Net [12] in the \(R=8\) (patient with myocarditis) and \(R=16\) (healthy subject) accelerated acquisition. The respective PSNR values of the heart region are depicted in the image. Reference images, reconstructed images and their corresponding error maps are demonstrated. The spatial (\(x-y\)) images are depicted next to the temporal traces (\(y-t\)) through the middle of the left ventricle. The selected y-axis is marked with a blue line in the reference image.
Fig. 5: Qualitative comparison of the proposed network to CG-SENSE [36], GRAFT-Recon [24] and Unrolled-MCMR [34] in the \(R=12\) (left side, healthy subject) and \(R=20\) (right side, patient with myocarditis) accelerated acquisition. The respective PSNR values of the heart region are shown in the image. Reference images, reconstructed images, corresponding error maps and color-wheel-encoded [44] motion field visualization are shown. The spatial (\(x-y\)) images are depicted next to the temporal traces (\(y-t\)) through the left ventricle. The selected \(y\)-axis is marked with a blue line in the reference image.
demonstrate that the higher the acceleration rate, the more neighboring frames are preferable. There is a trade-off between the static reconstruction error which is incurred by the lack of redundant information, and the dynamic reconstruction error around the heart which is caused by residual warping errors from neighboring frames. It is important to note that this phenomenon is not only occurring in our proposed approach but is generic for any MCMR method. Based on these results, we set a fixed number of neighboring frames in this work. In the future, we plan to make this important hyper-parameter learnable so it can self-adapt to the optimal value for different application scenarios.
We further conduct a systematic analysis of the proposed approach compared to baseline methods. The consistent and superior performance of our proposed method is found throughout this work. We conclude from the experiments that learning-based methods usually outperform non-learning-based methods. Furthermore, methods that leverage the whole cardiac cycle e.g. CTF-Net usually outperform methods that leverage less temporal redundancy e.g. MoDL, especially in the high acceleration case. Furthermore, Unrolled-MCMR showed similar but inferior performance results to the proposed method. This is attributed to two major reasons. First, we carry out just a single but more effective optimization instead of applying alternating updates of motion fields and image reconstructions. It should be noted that our proposed method can also be extended as an iterative unrolled optimization but at the cost of prolonged training and test time. Second, we only have one simple reconstruction loss term \(\mathcal{L}_{r}\) in the final stage, while Unrolled-MCMR employs a conventional motion-warping loss \(\mathcal{L}_{w}\) together with two regularization terms, requiring non-trivial weighting factor tuning.
The superior results of the proposed method can be attributed to the artifact-suppressed motion estimation (refer to the motion fields in Fig. 5). The proposed _Motion-Compensated Reconstruction Block_ can be regarded as a transformation operator which extends the motion estimation procedure from image space to k-space. Although Eq. (7) presents a loss function that forces the framework to generate a reconstruction resembling the reference, it can also be interpreted as a warping loss function which warps a set of undersampled images by the estimated motion to the target images while ensuring consistency with the acquired k-space samples.
Furthermore, our proposed approach provides another perspective on solving the cardiac motion estimation/registration problem. Cardiac motion estimation/registration can not only be used inside the MCMR framework for reconstruction but can also be applied for cardiac feature tracking to evaluate myocardial strain and functional analysis [46, 47] or to facilitate cardiac segmentation tasks [48]. Our proposed method can be recast to a motion estimation/registration method with two major benefits compared to the conventional motion estimation/registration methods. First, we only need a single loss term (Eq. (7)) to generate smooth and realistic motion fields avoiding further hyper-parameter tuning. Second, we can predict high-quality cardiac motion directly from highly-undersampled MR data. It is also conceivable that we do not need visually appealing MR images for the extraction and quantification of clinical parameters (e.g. left ventricular function). A potential synergistic approach for jointly reconstructing, analyzing (e.g. segmentation or motion tracking) and interpreting the cardiac CINE imaging will be developed further based on this study.
In the end, we also acknowledge some limitations of our work. First, the motion estimation is based on the backbone of GRAFT. It conducts \(T\times K\) computations to reconstruct a cardiac cycle with \(T\) frames, which is suboptimal regarding estimation speed and memory usage. In future work, we will attempt to build a more efficient and lightweight group-wise motion estimator to accelerate the reconstruction process further. Moreover, our MCMR framework currently only learns the motion prediction and uses a single non-learnable regularizer. This has proven to be a simple yet effective approach in this work, but we have not yet integrated our motion estimates in learnable denoising regularizers [49, 40], which will be subject to future work. Finally, the proposed work is solely evaluated for retrospective undersampling. The test performance on a larger cohort including prospectively collected data will also be evaluated in future work.
## VII Conclusion
In this work, we proposed a learning-based MCMR framework for CINE imaging. We introduce a mechanism that solves the MCMR problem as a single entity and drives the motion estimation directly from the final reconstruction perspective. The training loss is back-propagated through the whole pipeline and the framework is optimized end-to-end without breaking into two sub-tasks. We find out that using a smaller neighboring frames number to conduct MCMR can achieve better results than using all sequence frames. Our method shows consistent performance throughout all conducted experiments and outperforms all baseline methods. We have confidence that the developed method for cardiac CINE imaging can also be generalized and applied to other reconstruction applications.
|
2305.01041 | Data-Parallel Algorithms for String Diagrams | We give parallel algorithms for string diagrams represented as structured
cospans of ACSets. Specifically, we give linear (sequential) and logarithmic
(parallel) time algorithms for composition, tensor product, construction of
diagrams from arbitrary $\Sigma$-terms, and application of functors to
diagrams. Our datastructure can represent morphisms of both the free symmetric
monoidal category over an arbitrary signature as well as those with a chosen
Special Frobenius structure. We show how this additional (hypergraph) structure
can be used to map diagrams to diagrams of optics. This leads to a case study
in which we define an algorithm for efficiently computing symbolic
representations of gradient-based learners based on reverse derivatives. The
work we present here is intended to be useful as a general purpose
datastructure. Implementation requires only integer arrays and well-known
algorithms, and is data-parallel by construction. We therefore expect it to be
applicable to a wide variety of settings, including embedded and parallel
hardware and low-level languages. | Paul Wilson, Fabio Zanasi | 2023-05-01T19:02:20Z | http://arxiv.org/abs/2305.01041v1 | # Data-Parallel Algorithms for String Diagrams
###### Abstract
We give parallel algorithms for string diagrams represented as _structured cospans of ACSets_. Specifically, we give _linear_ (sequential) and _logarithmic_ (parallel) time algorithms for composition, tensor product, construction of diagrams from arbitrary \(\Sigma\)-terms, and application of functors to diagrams.
Our datastructure can represent morphisms of both the free symmetric monoidal category over an arbitrary signature as well as those with a chosen Special Frobenius structure. We show how this additional (hypergraph) structure can be used to map diagrams to _diagrams of optics_. This leads to a case study in which we define an algorithm for efficiently computing symbolic representations of gradient-based learners based on reverse derivatives.
The work we present here is intended to be useful as a _general purpose_ datastructure. Implementation requires only _integer arrays_ and well-known algorithms, and is data-parallel by construction. We therefore expect it to be applicable to a wide variety of settings, including embedded and parallel hardware and low-level languages.
## 1 Introduction
String diagrams are a formal graphical syntax [23] for representing morphisms of monoidal categories which is now widely used (see for example [16, 17, 18, 5]). The purpose of this paper is to make string diagrams not just a convenient notation for algebraic reasoning, but also an efficient general-purpose tool in computing with graphical structures in a compositional manner. To that end, the datastructures and algorithms we define satisfy the following desiderata.
**Fast and data-parallel.**: Our algorithms are data-parallel by construction, and have _linear_ (sequential) and _logarithmic_ (parallel) time complexities.
**Minimal primitives.**: Our datastructures are defined in terms of simple integer arrays. Moreover, we assume only a small number of simple, well-known primitive operations (e.g., prefix sum). This makes it possible to implement our algorithms in a wide variety of settings, such as embedded and parallel (i.e., GPU) hardware.
**Simple to implement correctly.**: Key parts of our datastructure are defined in terms of the recent construction of _ACSets_[21]. Consequently, implementations are essentially the same as their categorical definitions, making it easier to ensure correctness.
A number of representations of string diagrams have been explored in the literature, such as the wiring diagrams of Catlab.jl [20] and the 'hypergraph adjacency representations' of [26]. Our goals most closely align with the latter: we aim to make string diagrams useful as a general-purpose 'scalable combinatorial syntax'. For example, we hope that our implementation serves as an alternative in cases where a programmer would currently use a tree or directed graph.
However, the primary motivating application for our work is in representing gradient-based learners as optics, as described in [11]. In particular, this motivates perhaps the most significant extension to [26]: our datastructures can 'natively' represent morphisms of the free symmetric monoidal category over a signature with a chosen Special Frobenius monoid. This equips categories with hypergraph structure, which we show can be used to simulate _diagrams of optics_. In turn, this allows for a large number of applications modelling 'bidirectional information flow' based on
optics such as [11, 25, 7]. In our specific example, it allows us to define an efficient algorithm for taking reverse derivatives [9] and modeling gradient-based learners in general.
Our main contributions are as follows:
* A representation of morphisms of the free symmetric monoidal category over a signature \(\Sigma\) as _structured cospans of ACSets_.
* Proof of the correspondence between this representation and the free symmetric monoidal category
* Data-parallel algorithms with _linear_ (sequential) and _logarithmic_ (parallel) time complexity for...
* Composition and tensor product of diagrams
* Construction of a diagram from an arbitrary \(\Sigma\)-term
* Application of functors to diagrams
* An algorithm for mapping diagrams to _diagrams of optics_ using hypergraph structure, and consequently an algorithm for taking reverse derivatives of diagrams in linear (sequential) and logarithmic (parallel) time.
The structure of the paper is as follows. In Section 2 we give necessary background, including string diagrams, presentations by generators and equations, structured cospans, and ACSets. We also recall the bipartite graph representation of hypergraphs introduced in [26], and give a detailed account of the representation of finite functions as arrays which is the foundation of our implementation. Our contributions begin in Section 3 where we show how the 'internal wiring' of diagrams can be represented using ACSets. This is built upon in Section 4, where we give the main definition of the paper: our datastructure for string diagrams as structured cospans of these wirings. In Section 5 we prove the correspondence between these structured cospans and the free symmetric monoidal category on a given signature plus a chosen Special Frobenius monoid. In Section 6, we translate a combinatorial condition first introduced in [4] to our datastructure, allowing for the representation of morphisms of the free symmetric monoidal category _without_ the additional Frobenius structure. Sections 7 and 8 together define an efficient algorithm for constructing diagrams from \(\Sigma\)-terms. Finally, in Section 9 we define an algorithm for applying functors to diagrams, and then in Section 10 show how it can be used to map diagrams to diagrams of optics, leading to an efficient algorithm for taking reverse derivatives. We conclude the paper in Section 11 with some directions for future work. A reference implementation for some of the algorithms described in the paper can be found at [https://yarrow.id](https://yarrow.id).
## 2 Background
We introduce the necessary background to describe our contributions. Section 2.1 recalls string diagrams, monoidal signatures, and the free symmetric monoidal category presented by a signature. In Section 2.2, we give the details of two isomorphic categories of finite functions, and give the computational complexities of basic algorithms for composition, coproduct, and tensor product. We also list the basic primitive array operations assumed by our implementation. In Section 2.3 we recall cospans and structured cospans. Finally, in Section 2.4 we discuss the combinatorial encoding of string diagrams as hypergraphs introduced in [4], the encoding of hypergraphs as bipartite graphs used in [26], and the definition of ACSets, in terms of which we will define our datastructure.
### String Diagrams, Monoidal Signatures, and Free\({}_{\Sigma}\)
String diagrams are a two-dimensional graphical syntax for representing morphisms of categories. Informally, this syntax consists of widgets placed on the page, connected with labeled wires. The example below has wire labels in capital letters, and is constructed from the widgets \(f\), \(g\), and \(h\), together with a copying generator.

[string diagram]
(1)
The choice of 'widgets' and 'wire labels' corresponds to the _generating morphisms_ and _generating objects_ of a _monoidal signature_.
**Definition 2.1** (Monoidal Signature).: _A monoidal signature \(\Sigma\) consists of_
* \(\Sigma_{0}\) _the_ _generating objects_
* \(\Sigma_{1}\) _the_ _generating morphisms_ _or_ _operations_
* \(\Sigma_{2}\) _the_ _equations_
* \(\tau:\Sigma_{1}\to(\Sigma_{0}^{*}\times\Sigma_{0}^{*})\) _the_ _typing relation_
**Remark 2.2**.: _Note that we include a typing relation in order to allow for 'polymorphic' generators. This means that a presentation of a symmetric monoidal category having, for example, a generating morphism \(f:A\to A\) for all \(A\) can be represented with just a single label in \(\Sigma_{1}\). We have already seen an example of where this is useful in (1): the copy generator is pictured as having both the type \(A\to A\otimes A\) and the type \(C\to C\otimes C\). Although this distinction is useful, we will frequently assume there is a chosen typing for a given generator, and therefore speak of 'the' type of a generator._
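To make Definition 2.1 concrete, a signature can be stored as plain data. The following is a minimal sketch of ours (the class and field names are not from the paper), representing the typing relation as a mapping from each operation to its set of admissible (source, target) type pairs:

```
from dataclasses import dataclass

@dataclass
class Signature:
    objects: set[str]                          # Sigma_0: generating objects
    operations: set[str]                       # Sigma_1: generating morphisms
    tau: dict[str, set[tuple[tuple, tuple]]]   # typing relation

# A polymorphic 'copy' generator with two chosen typings, as in Remark 2.2.
sigma = Signature(
    objects={"A", "B", "C"},
    operations={"copy"},
    tau={"copy": {(("A",), ("A", "A")), (("C",), ("C", "C"))}},
)
```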
Given a presentation (the basic widgets that define a category), we can now define \(\Sigma\)-terms: the 'syntax trees' representing diagrams built inductively by tensor and composition of generators.
**Definition 2.3**.: _Given a monoidal signature \(\Sigma\), a \(\Sigma\)**-term** is a binary tree whose leaves are labeled in \(\Sigma_{1}\cup\{\mathsf{id},\sigma\}\), and whose nodes are labeled either \(\otimes\) or \(\mathbin{\sharp}\)._
Intuitively, a \(\Sigma\)-term is built from basic building blocks, namely the \(\Sigma_{1}\)-operations and the 'structural' morphisms \(\mathsf{id}\) (identity) and \(\sigma\) (symmetry), composed sequentially (\(\mathbin{\sharp}\)) and in parallel (\(\otimes\)). We revisit our example string diagram from (1) as a \(\Sigma\)-term in the following example.
**Example 2.4**.: _Equation (1) can be represented as a \(\Sigma\)-term built from the generators \(f\), \(g\), \(h\) and the copy generator, combined with \(\mathsf{id}\) and \(\sigma\) using \(\otimes\) and \(\mathbin{\sharp}\)._
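A \(\Sigma\)-term, being a binary tree, admits a direct datatype representation. Here is a minimal sketch of ours (names hypothetical), with leaves labeled by operations or structural morphisms and nodes labeled by the combinator:

```
from dataclasses import dataclass
from typing import Union

@dataclass
class Leaf:
    label: str            # an element of Sigma_1, or "id", or "sigma"

@dataclass
class Node:
    combinator: str       # "tensor" or "compose"
    left: "Term"
    right: "Term"

Term = Union[Leaf, Node]

# (f ; g) tensored with id, a small example term
t = Node("tensor", Node("compose", Leaf("f"), Leaf("g")), Leaf("id"))
```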
Figure 1: Laws of strict symmetric monoidal categories.
#### 2.1.1 Special Frobenius Monoids and Hypergraph Categories
One monoidal signature particularly important to us is that of Special Frobenius Monoids.
**Definition 2.6** (Special Frobenius Monoids).: _The theory of Special Frobenius monoids is denoted **Frob**, and consists of generators_
(2)
_and equations_
(3)
A category in which each object is equipped with a Special Frobenius monoid compatible with the monoidal product is often called a hypergraph category [13] (also known as a well-supported compact closed category [8]).
**Definition 2.7** (Hypergraph Category).: _A **Hypergraph Category** is a symmetric monoidal category in which every object \(A\) is equipped with a Special Frobenius Monoid_
_satisfying the equations (3) and compatible with the tensor product, i.e. so that_
The hypergraph structure will play a special role: it will be used to represent the _wires_ of the string diagram. It will therefore be useful to refer to morphisms constructed exclusively from generators in **Frob**. Such morphisms are called _Frobenius spiders_; we define them now.
**Definition 2.8** (Frobenius Spider).: _A **Frobenius Spider** in a hypergraph category is any morphism built by tensor and composition of generators in **Frob**, i.e.,_
### Finite Functions and their Representations
Because our work is defined in terms of ACSets [21], the datastructures presented in this paper are ultimately all expressed in terms of the category \(\mathsf{FinOrd}\) of _finite sets and functions_. We will need two different (but isomorphic) 'encodings' of this category:
* \(\mathsf{Free}_{\mathsf{CMon}}\): a 'mathematician-friendly' encoding, freely generated by a signature and equations.
* \(\mathsf{FinFun}\): a 'programmer-friendly' encoding, defined in terms of _arrays_.
Even though both of these representations are well-studied (see e.g., [21]), it is useful to record them for future use.
#### 2.2.1 Finite Functions in Terms of Commutative Monoids
**Definition 2.9** (Commutative Monoids).: _Given a chosen set of generating objects \(\Sigma_{0}\), the theory of commutative monoids \(\mathbf{CMon}(\Sigma_{0})\) has generating objects \(\Sigma_{0}\), and for each \(A\in\Sigma_{0}\) the generating arrows_
(4)
_and equations_
(5)
The equations \(\mathbf{CMon}_{2}\) of the theory of commutative monoids are sufficient to deduce the _naturality_ equations (Propositions 2.10 and 2.11). It is important that these equations are _derivable_ (and not axioms) in order to relate finite functions to Special Frobenius Monoids in Section 2.1.1.
**Proposition 2.10**.: _For all morphisms \(f:A\to B\) in \(\mathbf{CMon}(\Sigma_{0})\),_
(6)
Proof.: Induction. It is clear that (6) holds for all generators. In the inductive step, assume (6) holds for morphisms \(f_{0}\) and \(f_{1}\); it is then straightforward to derive that (6) holds for \(f_{0}\otimes f_{1}\) and \(f_{0}\mathbin{\sharp}f_{1}\).

#### 2.2.2 Models of Computation
In the sequential RAM model of computation, a single operation takes a single timestep. For example, reading or writing to a memory location. However, there are a variety of models of _parallel_ computation [19]. In this paper, we will use two variants of the Parallel RAM (PRAM) model [19, p.11, p.15]. The first of these is the PRAM CREW (Concurrent Read / Exclusive Write) model, which we will simply write as PRAM. This assumes that, while many processors can read a memory location in parallel, conflicting writes by multiple processors to the same location are forbidden.
A slightly stronger model is the PRAM CRCW (Concurrent Read / Concurrent Write), which allows multiple processors to write to the same memory location in parallel. When conflicting writes occur, an arbitrary processor succeeds. We will need the full power of the PRAM CRCW only rarely. Unless specified explicitly, we hereafter only assume the PRAM CREW model.
The algorithms presented in this paper are all in terms of a small number of primitive integer array operations. We give a table of these with their sequential and PRAM CREW complexities in terms of the size of the input \(n\) below. Note that two operations (repeat and segmented range) have complexities in terms of their input _values_, and we use \(|s|\) to denote the size of the array \(s\).
| Primitive | Complexity (Sequential) | Complexity (PRAM) |
| --- | --- | --- |
| arange | \(O(n)\) | \(O(1)\) |
| zeros | \(O(n)\) | \(O(1)\) |
| sum | \(O(n)\) | \(O(\log n)\) |
| prefix sum | \(O(n)\) | \(O(\log n)\) |
| dense integer sort | \(O(n)\) | \(O(\log n)\) |
| concatenate | \(O(n)\) | \(O(\log n)\) |
| connected components | \(O(n)\) | \(O(\log n)\) (CRCW) |
| repeat(x, s) | \(O(\mathrm{sum}(s))\) | \(O(\log|s|)\) |
| segmented range(s) | \(O(\mathrm{sum}(s))\) | \(O(\log|s|)\) |
We include under the banner of sum and prefix sum other operations like max and all which can be implemented with parallel scans and folds. For a more in-depth explanation of these primitives, we direct the reader to our implementation, which can be found at [https://yarrow.id](https://yarrow.id). However, we give a brief overview now. The dense integer sort operation refers specifically to the subset of sorting algorithms operating on positive integer arrays of length \(n\) whose largest element is \(O(n)\). Such sorts can be computed in \(O(n)\) sequential time by counting sort, and in \(O(\log n)\) parallel time by radix sort. The concatenate operation simply copies multiple arrays to a single contiguous memory location. There are some subtleties to its use in the parallel case, which we discuss further in Section 9.
The repeat(x, s) operation takes two equal-length arrays, and outputs the array whose entries are those of \(x\), each repeated the number of times indicated by the corresponding entry of \(s\). So for example, repeat\((\langle a,b,c\rangle,\langle 0,1,2\rangle)=\langle b,c,c\rangle\). The arange primitive outputs a length-\(n\) array of indices \(\langle 0,1,\ldots,n-1\rangle\), and segmented range computes a concatenation of such arrays whose lengths are specified by the input argument \(s\). Note also that segmented range can in fact be expressed in terms of the other operations, and so is not required to be a primitive.
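To illustrate that last point, here is a minimal numpy-based sketch of ours (not the paper's implementation) deriving segmented range from arange, prefix sum, and repeat:

```
import numpy as np

def repeat(x, s):
    # repeat(<a,b,c>, <0,1,2>) = <b,c,c>
    return np.repeat(x, s)

def segmented_arange(s):
    # Concatenation of arange(s_i) for each segment length s_i,
    # e.g. segmented_arange([2, 3]) = [0, 1, 0, 1, 2].
    s = np.asarray(s)
    starts = np.cumsum(s) - s        # start offset of each segment (prefix sum)
    offsets = np.repeat(starts, s)   # start offset of each output element
    return np.arange(s.sum()) - offsets
```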
Finally, note that the time complexity of most operations is at most \(O(n)\) (sequential) and \(O(\log n)\) (PRAM CRCW). Most of the algorithms we present later will be in terms of a constant number of each of such operations, thus guaranteeing linear (sequential) and logarithmic (parallel) time complexity.
With our model of parallel computation chosen, we can now give basic complexity results for operations on finite functions defined as arrays.
#### 2.2.3 FinFun: Finite Functions as Arrays
In addition to the presentation in terms of the category \(\mathsf{Free}_{\mathsf{CMon}}\), mentioned in Section 2.2.1, finite functions may also be represented as _integer arrays_. This category, which we call FinFun, will be central to our implementation. Thus we describe it in detail, including complexity results for several operations.
**Definition 2.15** (FinFun).: _The category \(\mathsf{FinFun}\) has objects the natural numbers \(\mathbb{N}\). An arrow \(f:A\to B\) is an element \(f\in\overline{B}^{A}\). Explicitly, \(f\) is an array of values \(\langle x_{0},x_{1},\ldots,x_{A-1}\rangle\) where each \(x_{i}\in\overline{B}\)._
We can represent an arrow of \(\mathsf{FinFun}\) by its target (codomain) and a table (array) of elements of its outputs. The source (domain) of an arrow is the length of its element table.
```
class FiniteFunction:
    target: int
    table: array

    @property
    def source(self):
        return len(self.table)
```
**Proposition 2.16**.: \(\mathsf{FinFun}\) _forms a category with identities and composition as below._
\[\mathsf{id}:A\to A\qquad\qquad f\mathbin{\sharp}g:A\to C\] \[\mathsf{id}=\langle 0,1,\ldots,A-1\rangle\qquad\qquad(f\mathbin{\sharp}g)_{i}=g_{f_{i}}\]
Proof.: Composition is well-defined: \(g_{f_{i}}\) is always defined precisely because \(f(i)\in\overline{B}\). The identity law is satisfied because \(\mathsf{id}_{i}=i\), so for an arrow \(f:A\to B\) we have \((\mathsf{id}_{A}\mathbin{\,\sharp\,}f)_{i}=\mathsf{id}_{f_{i}}=f_{i}\) and \((f\mathbin{\,\sharp\,}\mathsf{id}_{B})_{i}=f_{\mathsf{id}_{i}}=f_{i}\) for all \(i\). Finally, observe that composition is associative: let \(A\stackrel{{ f}}{{\to}}B\stackrel{{ g}}{{\to}}C \stackrel{{ h}}{{\to}}D\) be arrows. Then we have \(((f\mathbin{\,\sharp\,}g)\mathbin{\,\sharp\,}h)_{i}=h_{(f\mathbin{\,\sharp \,}g)_{i}}=h_{g_{f_{i}}}=(g\mathbin{\,\sharp\,}h)_{f_{i}}=(f\mathbin{\,\sharp \,}(g\mathbin{\,\sharp\,}h))_{i}\)
The identity morphism and composition of morphisms can be implemented as follows.
```
@staticmethod
def identity(n: int):
    # The identity table [0, 1, ..., n-1]
    return arange(0, n)

@staticmethod
def compose(f, g):
    assert f.target == g.source
    return FiniteFunction(g.target, g.table[f.table])
```
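As an illustrative example of ours (where array denotes the underlying integer array type):

```
# f : 3 -> 2 and g : 2 -> 4, represented by their tables
f = FiniteFunction(target=2, table=array([0, 1, 1]))
g = FiniteFunction(target=4, table=array([3, 0]))

h = FiniteFunction.compose(f, g)
# h.target == 4 and h.table == [3, 0, 0], i.e. h(i) = g(f(i))
```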
**Proposition 2.17** (Complexity of composition in \(\mathsf{FinFun}\)).: _Let \(f:A\to B\) and \(g:B\to C\) be morphisms of \(\mathsf{FinFun}\). Computing the composite \(f\mathbin{\sharp}g\) has \(O(A)\) sequential and \(O(1)\) PRAM CREW time complexity._
Proof.: The composite \(f\mathbin{\sharp}g:A\to C\) is an integer array of \(A\) elements of the set \(\overline{C}\). For each \(i\in\overline{A}\), the element \((f\mathbin{\sharp}g)_{i}\) is computed by a single memory lookup, for a total of \(O(A)\) operations. In the parallel (PRAM CREW) setting, each lookup can be performed in parallel, giving \(O(1)\) time complexity.
#### 2.2.4 \(\mathsf{FinFun}\) as a strict monoidal category
The category of finite sets and functions has initial objects and coproducts, from which it can be made into a strict symmetric monoidal category.
**Proposition 2.18**.: \(0\) _is the initial object in \(\mathsf{FinFun}\)._
Proof.: Let \(B\) be an object in \(\mathsf{FinFun}\). Then there is a unique morphism \(?:0\to B\): the empty array \(\langle\rangle\).
In code, the initial map \(\mathtt{initial}:0\to B\) returns the empty array.
```
@classmethod
def initial(cls, B):
    return FiniteFunction(B, zeros(0))
```
**Proposition 2.19** (Coproducts in \(\mathsf{FinFun}\)).: _Let \(f:A_{0}\to B\) and \(g:A_{1}\to B\) be arrows of \(\mathsf{FinFun}\). The coproduct of objects \(A_{0}\) and \(A_{1}\) is given by addition \(A_{0}+A_{1}\), with injections defined as_
\[\iota_{0}:A_{0}\to A_{0}+A_{1}\qquad\qquad\iota_{1}:A_{1}\to A_{0}+A_{1}\] \[\iota_{0}=\langle 0,1,\ldots,A_{0}-1\rangle\qquad\qquad\iota_{1}=\langle A_{0},A_{0}+1,\ldots,A_{0}+A_{1}-1\rangle\]
_Then the coproduct of \(f\) and \(g\), denoted \(f+g:A_{0}+A_{1}\to B\), is given by array concatenation:_

\[f+g=\langle f_{0},f_{1},\ldots,f_{A_{0}-1},g_{0},g_{1},\ldots,g_{A_{1}-1}\rangle\]
Proof.: This choice of \(f+g\) commutes with the injections:
\[(\iota_{0}\mathbin{\sharp}(f+g))_{i}=(f+g)_{\iota_{0}(i)}=\langle f_{0},f_{1},\ldots,f_{A_{0}-1}\rangle_{i}=f_{i}\]

and

\[(\iota_{1}\mathbin{\sharp}(f+g))_{i}=(f+g)_{\iota_{1}(i)}=\langle g_{0},g_{1},\ldots,g_{A_{1}-1}\rangle_{i}=g_{i}\]

Moreover, this choice is unique: if even one entry of the array \(f+g\) differs from the above, then the corresponding diagram does not commute.
The coproduct \(f+g:A_{0}+A_{1}\to B\) of maps \(f:A_{0}\to B\) and \(g:A_{1}\to B\) is implemented as array concatenation.
```
def coproduct(f, g):
    assert f.target == g.target
    target = f.target
    table = concatenate(f.table, g.table)
    return FiniteFunction(target, table)
```
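The injections of Proposition 2.19 are not listed in the text; a sketch of ours consistent with it would be:

```
@staticmethod
def inj0(a, b):
    # iota_0 : a -> a + b, the identity table into a larger codomain
    return FiniteFunction(a + b, arange(0, a))

@staticmethod
def inj1(a, b):
    # iota_1 : b -> a + b, indices shifted by a
    return FiniteFunction(a + b, arange(a, a + b))
```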
Naturally, since \(\mathsf{FinOrd}\) is cocomplete and we claimed that \(\mathsf{FinOrd}\cong\mathsf{FinFun}\), we expect \(\mathsf{FinFun}\) to have coequalizers making it cocomplete as well. This is indeed the case; a well-known result is below.
**Proposition 2.20** (Coequalizers in \(\mathsf{FinFun}\)).: _Let \(f,g:A\to B\) be parallel arrows in \(\mathsf{FinFun}\), and let \(G\) be the graph with \(B\) vertices and edges \(\{(f(i),g(i))\mid i\in\overline{A}\}\). If \(Q\) is the number of connected components of \(G\), and \(q:B\to Q\) is the function labeling a vertex with its connected component, then \(q=\mathsf{c}(f,g)\) is the coequalizer of \(f\) and \(g\)._
Coequalizers can be computed directly using the connected components algorithm. For the purposes of implementation, we assume the existence of a primitive function connected_components : \(B^{A}\times B^{A}\to Q\times Q^{B}\), which computes connected components from a graph encoded as an adjacency list. That is, its two arguments are an array of edge sources and targets, respectively. Then we can implement coequalizers of finite functions as follows.
```
def coequalizer(f, g):
    assert f.source == g.source
    assert f.target == g.target
    Q, q = connected_components(f.table, g.table)
    return FiniteFunction(Q, q)
```
**Proposition 2.21** (Coequalizer Complexity).: _Computing the coequalizer of finite functions \(f,g:A\to B\) has \(O(A+B)\) sequential and \(O(\log(A+B))\) PRAM CRCW time complexity._
Proof.: Clearly the complexity of computing coequalizers is the same as computing connected components. In the sequential case, connected components can be labeled in \(O(V+E)\) time for a graph \(G=(V,E)\) (see e.g. [10, Chapter 22]), and \(O(\log V)\) time (see [19, p. 218] or [24]) in the parallel (PRAM CRCW) case. Since the graph \(G\) has \(B\) vertices and \(A\) edges, it then follows that computing the coequalizer of \(f,g:A\to B\) in \(\mathsf{FinFun}\) has \(O(A+B)\) sequential time complexity and \(O(\log(A+B))\) parallel (PRAM CRCW) time complexity.
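For reference, the assumed connected_components primitive could be realised sequentially by union-find, as in the following sketch of ours. Note that it takes the number of vertices explicitly, and that it is purely sequential, unlike the \(O(\log V)\) PRAM CRCW algorithm cited above:

```
def connected_components(sources, targets, n_vertices):
    # Label each vertex of the graph (n_vertices, zip(sources, targets))
    # with a dense component index in [0, Q).
    parent = list(range(n_vertices))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i

    for s, t in zip(sources, targets):
        parent[find(s)] = find(t)           # union the two components

    labels, q = {}, []
    for i in range(n_vertices):
        root = find(i)
        labels.setdefault(root, len(labels))
        q.append(labels[root])
    return len(labels), q
```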
Initial objects and coproducts give \(\mathsf{FinFun}\) the structure of a strict monoidal category. For morphisms \(f:A_{0}\to B_{0}\) and \(g:A_{1}\to B_{1}\), the tensor product \(f\otimes g:A_{0}\otimes A_{1}\to B_{0}\otimes B_{1}\) is given by \((f\mathbin{\sharp}\iota_{0})+(g\mathbin{\sharp}\iota_{1})\). However, tensor products can be written more directly.
**Proposition 2.22** (Tensor Product).: _Let \(f:A_{0}\to B_{0}\) and \(g:A_{1}\to B_{1}\) be morphisms in \(\mathsf{FinFun}\). The tensor product \(f\otimes g\) is the array \(\langle f_{0},f_{1},\ldots,f_{A_{0}-1},(B_{0}+g_{0}),(B_{0}+g_{1}),\ldots,(B_{0}+g_{A_{1}-1})\rangle\)._
Proof.: It is straightforward that \(f\mathbin{\sharp}\iota_{0}\) is an array with the same entries as \(f\), and that \(g\mathbin{\sharp}\iota_{1}\) is an array with entries \(\langle B_{0}+g_{0},B_{0}+g_{1},\ldots,B_{0}+g_{A_{1}-1}\rangle\). It is then immediate that the tensor product is the coproduct (concatenation) of these two arrays.
The tensor product of morphisms can be implemented as follows, with time complexity the same as for the coproduct.
```
def tensor(f, g):
    table = concatenate([f.table, g.table + f.target])
    return FiniteFunction(f.target + g.target, table)
```
**Proposition 2.23**.: \(\mathsf{FinFun}\) _is a strict symmetric monoidal category._
Proof.: Symmetry is given in the usual way with \(\sigma:=(\iota_{1}+\iota_{0})=\langle 1,0\rangle\), which is evidently self-inverse. Strictness follows because concatenation of arrays is strictly associative, and concatenation with the empty array is the identity.
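In code, the symmetry \(\sigma_{a,b}:a+b\to b+a\) generalising \(\iota_{1}+\iota_{0}\) above could be sketched as follows (our own helper, not a listing from the text):

```
@staticmethod
def twist(a, b):
    # sigma_{a,b} : a + b -> b + a.
    # The first a elements map to positions b..a+b-1,
    # the remaining b elements to positions 0..b-1.
    return FiniteFunction(a + b, concatenate(arange(b, a + b), arange(0, b)))
```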
One can easily show (see e.g., [21]) that there is an isomorphism \(\mathsf{FinFun}\cong\mathsf{FinOrd}\), which also respects the symmetric monoidal structure of the two categories.
### Cospans and Structured Cospans
Cospans (and more recently _structured_ cospans [1]) are now commonly used to model 'open' systems. In particular, cospans are used in the combinatorial representation of string diagrams introduced in [4], and illustrated in Section 2.4 below. We therefore recall these concepts now.
**Definition 2.24**.: _A **cospan** in a category \(\mathscr{C}\) is a pair of morphisms \(X\xrightarrow{s}W\xleftarrow{t}Y\). We call the maps \(s\) and \(t\) the **legs** of the cospan, the objects \(X\) and \(Y\) the **feet**, and \(W\) the **apex**._
_Structured_ cospans are a recently introduced [1] double-categorical framework for open systems. The work we present here can be interpreted as a re-examination of the combinatorial characterisation of string diagrams in [4] through the lens of this framework.
**Definition 2.25** (from [1]).: _Let \(\mathsf{L}:\mathscr{C}\to\mathscr{D}\) be a functor, which we call the **structuring functor**. A **structured cospan** is a cospan in \(\mathscr{D}\) of the form \(\mathsf{L}(A)\xrightarrow{s}W\xleftarrow{t}\mathsf{L}(B)\)._
We will not require the double-categorical structure of structured cospans here. However, by [1, Corollary 3.11] structured cospans form a symmetric monoidal category when \(\mathsf{L}\) is left-adjoint and \(\mathscr{C}\) and \(\mathscr{D}\) have finite colimits. Composition in this category is by pushout, and tensor product is the pointwise tensor product of the 'legs' of the cospan.
**Remark 2.26**.: _Closely related, but not used in this paper, is the construction of decorated cospans [12]. The relationship between decorated and structured cospans is examined in [2]._
### Hypergraphs and Bipartite Graphs for String Diagram Representation
The authors of [4] provide a combinatorial characterisation of string diagrams as cospans of hypergraphs. Although we will not require the technical details in this paper, we now give a quick sketch of the basic idea, which is useful to keep in mind for later developments. A string diagram
(left) and its hypergraph representation (right) are depicted below.
In this representation, _hyperedges_ (depicted as white squares) are labeled with generators, and _nodes_ (depicted as \(\blacksquare\)) represent the wires of the string diagram. Importantly, the hyperedges in this representation have _ordered lists_ of nodes as their sources and targets. Being a cospan, the representation actually consists of three distinct hypergraphs. The hypergraph in the middle (on grey background) encodes the internal structure of the string diagram. The outermost hypergraphs (on blue background) are discrete, i.e., they contain only nodes. The two morphisms from the outermost hypergraphs to the middle one, whose assignments are indicated with dotted arrows, indicate which nodes constitute the left and the right interface of the hypergraph. This corresponds to the 'dangling wires' of the string diagrams, which allow sequential composition on the left and on the right with other string diagrams.
As illustrated in [4], this representation is known to be an isomorphism of categories when the string diagrams come from a hypergraph category, i.e., when there is a Special Frobenius structure on each object. When considering string diagrams in a symmetric monoidal category, without extra structure, the representation is an isomorphism only if we restrict to so-called _monogamous_ cospans. Monogamicity is a requirement on the way interfaces, nodes, and hyperedges interact with each other; we refer to [4] for a full definition, and give an equivalent one in Section 6.
However, in order to work with such hypergraphs on a computer, one must choose how their data should be represented. For example, one might choose to represent each hyperedge as a pair of ordered integer lists. However, in order to define parallel algorithms suitable for e.g., GPUs, we will need a 'flat' array representation. The issue of how to _efficiently_ encode these hypergraphs is addressed in [26], where the authors show that encoding (monogamous) hypergraphs as bipartite graphs leads to an efficient representation as sparse matrices. However, [26] is limited to the _monogamous_ case, and so is only suitable for representing string diagrams of symmetric monoidal categories without the additional Special Frobenius structure. Further, [26] only accounts for categories with a single generating object, and the representation described does not explicitly represent diagrams as cospans.
Our work generalises the approach of [26] to the case of string diagrams equipped with a chosen Special Frobenius structure. This addition allows us to define, in Section 10, an efficient algorithm for taking reverse derivatives of large diagrams. Returning to our running example, the addition of this structure allows for the representation of string diagrams such as the one below left as cospans of bipartite multigraphs (below right).
### ACSets
One of our main contributions is to encode the hypergraph structure of [4] in a way that allows for _parallel_ algorithms on the datastructure. This is possible thanks to the construction of ACSets, first described in [21]. Informally, ACSets are a class of categories which represent 'array-based data with attributes'. For example, a graph with node and edge labels has data (the adjacency information of the graph) and attributes (the labels). We now informally recall the ACSets of [21], beginning with the definition of a _schema_.
**Definition 2.27** (Schema [21], Informal).: _A **schema** is a finitely presented (small) category \(S\) which is 'bipartite': every object of \(S\) belongs to exactly one of two classes, \(S_{0}\) or \(S_{1}\)._
Intuitively, a schema specifies the relationships between data and attributes of a category. Concretely, the objects in \(S_{0}\) (the data) will map to finite sets, and those in \(S_{1}\) (the attributes) to some chosen typing.
**Definition 2.28** (ACSets [21]).: _Given a schema \(S\) and a typing map \(K:S_{1}\to\mathsf{Set}\), the category \(\mathsf{ACSet}^{S}_{K}\) has objects functors \(F:S\to\mathsf{Set}\) which restrict to \(F|_{S_{1}}=K\) and arrows \(\alpha:F_{0}\to F_{1}\) the natural transformations where \(\alpha|_{S_{1}}=\mathsf{id}\)._
The _typing map_ \(K\) assigns types to the attributes of the data. For example, a graph with vertices labeled in \(\mathbb{N}\) might be modeled as an ACSet whose schema has three objects: \(S_{0}=\{V,E\}\) and \(S_{1}=\{L\}\), and whose typing map is \(K(L)=\mathbb{N}\). The requirement that arrows \(\alpha\) must have \(\alpha|_{S_{1}}=\mathsf{id}\) ensures that morphisms of ACSets cannot arbitrarily modify attribute data.
In this paper, we will only need to consider _finite_ ACSets: those for which the objects \(F\) restrict to functors \(F|_{S_{0}}:S_{0}\to\mathsf{FinFun}\). Notice that this means that our datastructure consists of dense integer arrays: this is critical for making our implementation suitable for parallel hardware such as GPUs. In the next section, we will see two examples of ACSets which form the basis of our datastructure.
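To make the labeled-graph example concrete, such a finite ACSet amounts to a handful of integer arrays. A sketch of ours (field names hypothetical), reusing the FiniteFunction class of Section 2.2.3:

```
from dataclasses import dataclass

@dataclass
class LabeledGraph:
    # Finite ACSet for the schema S0 = {V, E}, S1 = {L}, with K(L) = N.
    # The finite sets F(V) and F(E) are implicit in the array lengths.
    src: "FiniteFunction"     # E -> V: source vertex of each edge
    tgt: "FiniteFunction"     # E -> V: target vertex of each edge
    label: "FiniteFunction"   # V -> N: natural-number label of each vertex
```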
## 3 Representing Diagram Wirings with ACSets
We can now begin to define our datastructure for representing string diagrams as structured cospans. In this section, we define two categories, Wires and BipartiteMultigraph, which model the 'internal wiring' of a string diagram. These categories are related by an adjunction \(\mathsf{L}:\mathsf{Wires}\to\mathsf{BipartiteMultigraph}\) which will serve as the 'structuring functor' when we come to define string diagrams as structured cospans \(\mathsf{L}(A)\xrightarrow{s}G\xleftarrow{t}\mathsf{L}(B)\) in Section 4. The objects of Wires will serve as the feet of the cospan and are analogous to the discrete hypergraphs of [4].
To make the role of the two categories more clear, we begin with examples. Pictured below left is a string diagram, and below right the bipartite multigraph corresponding its 'internal wiring'.
The bipartite structure means that there are two kinds of node. The \(\blacksquare\)-nodes represent the _wires_ of the string diagram, and are labeled with the generating objects of \(\Sigma_{0}\). Meanwhile, the \(\circ\)-nodes represent operations, and are labeled in \(\Sigma_{1}\).
Edges of the graph are labeled with a natural number: an edge \(\blacksquare\to\circ\) labeled \(i\) denotes that a given wire connects to the \(i^{\text{th}}\) source port of an operation. Similarly, an edge \(\circ\to\blacksquare\) labeled \(i\) denotes that a wire connects to the \(i^{\text{th}}\) _target_ port of an operation.
Notice, however, that one piece of information is missing: namely, it is not specified which of the \(\blacksquare\)-nodes correspond to the _boundary_ of the string diagram; that is, which are the left and right 'dangling
wires'. This is the purpose of the category \(\mathsf{Wires}\), whose objects are thought of as labeled finite sets, and whose morphisms are 'label preserving maps'. Pictured below is an L-structured cospan.
The center grey box depicts the apex of the cospan: a bipartite multigraph. At the edges are two objects of \(\mathsf{Wires}\) in blue boxes. Dashed arrows between them correspond to the legs of the structured cospan, and can be thought of as morphisms of \(\mathsf{Wires}\).
To formalise this construction, we now give the precise details of the categories \(\mathsf{Wires}\) and \(\mathsf{BipartiteMultigraph}\) as \(\mathsf{ACSets}\), as well as the structuring functor \(\mathsf{L}\).
### Wires: The Category of Multi-Sorted Finite Functions
We now describe \(\mathsf{Wires}\), the category of 'multi-sorted' finite sets and functions. As with \(\mathsf{FinOrd}\), these functions are presented by \(\mathsf{CMon}(\Sigma_{0})\). The only difference is that the set of generating objects \(\Sigma_{0}\) is now arbitrary, instead of being restricted to a single object. For example, given generating objects \(A\) and \(B\), we can construct morphisms like the one below.
The _cospans_ of such morphisms (below right) correspond to string diagrams with generators of \(\mathbf{Frob}\) like the one below left.
Note carefully that the legs of the cospan are now also _label preserving_. That is, the source and target of each dashed arrow have the same label. This property arises by defining \(\mathsf{Wires}\) as a category of \(\mathsf{ACSets}\) where labels are attributes.
**Definition 3.1**.: _Let \(\Sigma\) be an arbitrary monoidal signature. The category of 'labeled wires' is denoted \(\mathsf{Wires}_{\Sigma}\) and defined as the category \(\mathsf{ACSet}^{\mathcal{W}}_{K_{\Sigma}}\) whose schema \(\mathcal{W}\) is defined as follows:_
\[\mathcal{W}:=\;W\xrightarrow{\;w_{n}\;}\mathsf{Ob},\qquad\text{with typing map }K_{\Sigma}(\mathsf{Ob})=\Sigma_{0}.\]
**Example 3.3**.: _Let \(F\) and \(G\) be objects of \(\mathsf{Wires}_{\Sigma}\). Concretely, \(F\) consists of a set \(F(W)\) and a morphism \(F(w_{n}):F(W)\to\Sigma_{0}\). Similarly, \(G\) is a set \(G(W)\) and morphism \(G(w_{n}):G(W)\to\Sigma_{0}\). A label preserving map \(\alpha\) between \(F\) and \(G\) is a natural transformation with a single non-identity component, \(\alpha_{W}\). An example of a label-preserving map for \(F(W)=3\) and \(G(W)=3\) is given below._
_Note that the finite function \(\alpha_{W}\) can be depicted formally as the string diagram below, where we have labeled incoming wires according to \(F(w_{n})\) and outgoing wires according to \(G(w_{n})\)._
_This example illustrates how morphisms of \(\mathsf{Wires}\) indeed correspond to the theory of commutative monoids over an arbitrary set of generating objects \(\Sigma_{0}\)._
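Concretely, representing the objects \(F\) and \(G\) by their labelings as finite functions, naturality of \(\alpha\) amounts to \(F(w_{n})=\alpha_{W}\mathbin{\sharp}G(w_{n})\), which can be checked directly on tables. A sketch of ours:

```
def is_label_preserving(alpha, wn_F, wn_G):
    # alpha : F(W) -> G(W); wn_F : F(W) -> Sigma_0; wn_G : G(W) -> Sigma_0.
    # Naturality requires wn_F = alpha ; wn_G.
    composite = FiniteFunction.compose(alpha, wn_G)
    return list(composite.table) == list(wn_F.table)
```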
We can alternatively think of the objects of \(\mathsf{Wires}\) as being discrete vertex-labeled graphs. When we come to define the structuring functor \(\mathsf{L}\), an object \(A\in\mathsf{Wires}\) will map to the _discrete_ bipartite multigraph \(\mathsf{L}(A)\in\mathsf{BipartiteMultigraph}\) having only \(\blacksquare\)-nodes.
In fact, morphisms in \(\mathsf{Wires}\) will form the 'Frobenius Half-Spiders' in an arbitrary Hypergraph category.
**Proposition 3.4**.: _Let \(\mathscr{C}\) be a hypergraph category presented by a signature \(\Sigma\). There is a unique strict symmetric monoidal identity-on-objects functor \(\mathsf{S}:\mathsf{Wires}_{\Sigma}\to\mathscr{C}\) which maps the monoid structure of \(\mathbf{CMon}\) to the monoid structure of \(\mathbf{Frob}\)._
Proof.: Recall that \(\mathsf{FinOrd}\) is presented by \(\mathbf{CMon}\), and the morphisms of \(\mathsf{Wires}\) are natural transformations with a single non-identity component in \(\mathsf{FinOrd}\). It follows that the morphisms of \(\mathsf{Wires}_{\Sigma}\) are also presented by \(\mathbf{CMon}\) with the set of generating objects \(\Sigma_{0}\). Thus, the action of \(\mathsf{S}\) is fixed on all generators, composition, and tensor products, and is unique.
We call such morphisms 'Half-Spiders'.
**Definition 3.5** (Half-Spider).: _Let \(\mathscr{C}\) be a hypergraph category presented by a signature \(\Sigma\). A Frobenius Half-Spider in \(\mathscr{C}\) is a morphism in the image of \(\mathsf{S}:\mathsf{Wires}_{\Sigma}\to\mathscr{C}\). We will sometimes write \(\mathsf{S}(f)\) for an arrow \(f\in\mathsf{FinFun}\) to mean \(\mathsf{S}(\mathcal{W}(f,l))\) when \(l\) is clear from context._
Alternatively, half-spiders are those morphisms in a hypergraph category built by tensor and composition from identities, symmetries, and the monoid generators of **Frob**. In addition, recall that all hypergraph categories have a dagger [13, Section 1.3.3]. Consequently, there is a second functor mapping morphisms of \(\mathsf{Free}_{\mathbf{CMon}}\) to the _comonoid_ structure of a hypergraph category: \(f\mapsto\mathsf{S}(f)^{\dagger}\). Thus, these 'dagger half-spiders' are morphisms built from identities, symmetries, and the comonoid generators of **Frob**.
Putting these two functors together characterises the Frobenius spiders in a hypergraph category. Namely, Frobenius spiders will be those morphisms of the form \(\mathsf{S}(f)\mathbin{\sharp}\mathsf{S}(g)^{\dagger}\). This will be stated more formally in Proposition 4.7.
We will now describe bipartite multigraphs, accounting for the additional structure required to represent the _operations_ in a string diagram.
### Bipartite Multigraphs
Bipartite Multigraphs can be regarded as objects of Wires decorated with additional data. More formally, they are given by the following \(\mathsf{ACSet}\).
**Definition 3.6**.: _A **bipartite multigraph** is an object of the category \(\mathsf{BipartiteMultigraph}:=\mathsf{ACSet}^{\mathcal{S}}_{K}\), where \(\mathcal{S}\) is the schema defined below_
_and the typing map \(K:\mathcal{S}_{1}\to\mathsf{Set}\) is defined as_
\[K(\mathsf{Ob}):=\Sigma_{0}\qquad\qquad K(\mathsf{Sig}):=\Sigma_{1}\qquad \qquad K(\mathsf{Port}):=\mathbb{N}\]
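Although the schema diagram is not reproduced above, Definition 3.8 below names its maps \(w_{i},x_{i},\mathsf{port}_{i}\) (and their output counterparts), together with the labelings \(w_{n}\) and \(x_{n}\). A sketch of ours of the resulting array representation:

```
from dataclasses import dataclass

@dataclass
class BipartiteMultigraph:
    # Input edges E_i connect wires to the source ports of operations.
    wi: "FiniteFunction"    # w_i : E_i -> W       wire of each input edge
    xi: "FiniteFunction"    # x_i : E_i -> X       operation of each input edge
    pi: "FiniteFunction"    # port_i : E_i -> N    source port index
    # Output edges E_o connect the target ports of operations to wires.
    wo: "FiniteFunction"    # w_o : E_o -> W
    xo: "FiniteFunction"    # x_o : E_o -> X
    po: "FiniteFunction"    # port_o : E_o -> N
    # Attributes.
    wn: "FiniteFunction"    # w_n : W -> Sigma_0   wire labels
    xn: "FiniteFunction"    # x_n : X -> Sigma_1   operation labels
```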
Not every bipartite multigraph represents a valid 'internal wiring' of a string diagram. Note, for example, that the schema above does not rule out a bipartite multigraph in which a generator has two edges with the same 'port label' from a \(\blacksquare\)-node to a \(\circ\)-node.
**Example 3.7**.: _The following bipartite multigraph is 'ill-formed' for two reasons._
_First, there are two \(\blacksquare\to\circ\) edges labeled \(0\). This conflicts with the interpretation of edge labels as defining which 'port' of a generator is connected to a wire. Secondly, there is no output edge \(\circ\to\blacksquare\) labeled \(0\): we need every output port of \(g\) to be accounted for._
In order to rule out objects like the above, we will need to speak of 'well-formed' bipartite multigraphs.
**Definition 3.8** (Well-Formed Bipartite Multigraph).: _A bipartite multigraph \(G\) is **well-formed with respect to a signature**\(\Sigma\) if there are chosen typings \(a,b:G(X)\to\Sigma_{0}^{*}\) such that for each \(x\in G(X)\), \((a(x),b(x))\in\tau(\ell(x))\), where \(\ell(x)\in\Sigma_{1}\) is the label of \(x\); both \(\langle x_{i},\mathsf{port}_{i}\rangle\) and \(\langle x_{o},\mathsf{port}_{o}\rangle\) are mono; and_
\[\forall e\in E_{i}\quad w_{n}(w_{i}(e))=a(x_{i}(e))_{\mathsf{port}_{i}(e)} \qquad\qquad\forall e\in E_{o}\quad w_{n}(w_{o}(e))=b(x_{o}(e))_{\mathsf{port} _{o}(e)}\]
**Remark 3.9**.: _A consequence of Definition 3.8 is that in a well-formed bipartite multigraph \(G\), \(G(E_{i})\) and \(G(E_{o})\) equal the total arity and coarity of the generators \(G(X)\), respectively._
**Remark 3.10**.: _The above definition of well-formedness specifically allows for polymorphic generators. More precisely, since an operation \(g\in\Sigma_{1}\) has a typing relation, we may have for example a bipartite multigraph like the following._
_Notice that there are two \(\circ\)-nodes labeled \(g\) with different types: one \(g:A\to I\), and the other \(g:B\otimes C\to D\). Such a diagram is considered well-formed as long as both such types exist in the typing relation._
When defining algorithms on structured cospans of well-formed bipartite multigraphs, we will need the following preservation theorem. This will guarantee that composites of structured cospans built from well-formed diagrams are also well-formed.
**Proposition 3.11** (Preservation of Well-Formedness).: _If a morphism of bipartite multigraphs \(\alpha:F_{0}\to F_{1}\) has all components permutations except for \(\alpha_{W}\), and \(F_{0}\) is well-formed, then \(F_{1}\) is well-formed._
Proof.: Observe that the number of edges, generators, and their attribute data is preserved. Moreover, the attributes of \(\blacksquare\)-nodes are also preserved by naturality, so \(F_{1}\) is well-formed.
Finally, before defining string diagrams as structured cospans, we first formalise the relationship between the categories \(\mathsf{Wires}\) and \(\mathsf{BipartiteMultigraph}\) as an adjunction. Observe that \(\mathsf{Wires}\) is a subcategory of \(\mathsf{BipartiteMultigraph}\), and therefore embeds into it.
**Definition 3.12**.: _Denote by \(\mathsf{L}:\mathsf{Wires}\to\mathsf{BipartiteMultigraph}\) the (identity-on-objects) inclusion functor._
As observed in [21, p.18], \(\mathsf{L}\) is left-adjoint to the forgetful functor.
**Proposition 3.13** (\(\mathsf{L}\) is left-adjoint).: _Let \(\mathsf{R}:\mathsf{BipartiteMultigraph}\to\mathsf{Wires}\) be the forgetful functor mapping a bipartite multigraph to its set of wires and their labels. Then \(\mathsf{L}\) is left-adjoint to \(\mathsf{R}\)._
Proof.: Let \(\alpha:\mathsf{L}(A)\to G\) be an arrow in \(\mathsf{BipartiteMultigraph}\), and let \(\epsilon_{G}:\mathsf{L}(\mathsf{R}(G))\to G\) be the natural transformation whose components are defined as follows.
\[(\epsilon_{G})_{Y}=\begin{cases}\mathsf{id}_{G(W)}&\text{if }Y=W\\?&\text{otherwise}\end{cases}\]
where \(?\) denotes the unique initial map out of \(0\). Then there is a unique \(f\) such that \(\mathsf{L}(f)\mathbin{\sharp}\epsilon_{G}=\alpha\). Since composition of morphisms in \(\mathsf{BipartiteMultigraph}\) is pointwise, we must have that \((\mathsf{L}(f)\mathbin{\sharp}\epsilon_{G})_{Y}=\alpha_{Y}\) for each component \(Y\). We therefore must have \(\mathsf{L}(f)_{W}=\alpha_{W}\) and \(\mathsf{L}(f)_{Y}=?:0\to 0\) otherwise, and so \(f=\mathcal{W}(\alpha_{W},G(w_{n}))\) by naturality.
The left-adjointness of \(\mathsf{L}\) says that any morphism of bipartite multigraphs of the form \(\alpha:\mathsf{L}(A)\to G\) is completely determined by its \(W\) component \(\alpha_{W}:\mathsf{L}(A)(W)\to G(W)\). When we define string diagrams in Section 4, this fact will allow us to define composition in terms of coequalizers of \(\mathsf{FinFun}\). In addition, the bipartite multigraphs in the image of \(\mathsf{L}\) are particularly important in the next section. We therefore introduce the following notation.
**Definition 3.14** (Discrete Bipartite Multigraph).: _Given a labeling \(w:A\to\Sigma_{0}\), the **discrete bipartite multigraph** for \(w\) is denoted \(\mathsf{D}(w):=\mathsf{L}(\mathcal{W}(w))\). Given a bipartite multigraph \(G\), we overload the same notation and write \(\mathsf{D}(G):=\mathsf{L}(\mathsf{R}(G))\)._
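In code, the discrete construction populates only the wires and their labels; a minimal sketch, assuming a FiniteFunction.initial constructor for the unique map out of the empty set:

```
@classmethod
def discrete(cls, wn: FiniteFunction):
    # A discrete bipartite multigraph has no edges and no generators:
    # E_i = E_o = X = 0, so every edge/generator map is an initial map.
    e = FiniteFunction.initial(wn.source)  # assumed: the unique map 0 -> W
    z = FiniteFunction.initial(0)          # the unique map 0 -> 0
    # (all of xi, xo, pi, po, xn are empty; their targets are immaterial here)
    return cls(wi=e, wo=e, xi=z, xo=z, pi=z, po=z, wn=wn, xn=z)
```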
With these definitions in hand, we can finally define string diagrams as structured cospans.
## 4 String Diagrams as Structured Cospans
We can now define our category of string diagrams as structured cospans.
**Definition 4.1** (Diagram).: _Fix a monoidal signature \(\Sigma\). A **diagram** over \(\Sigma\) is an \(\mathsf{L}\)-structured cospan \(\mathsf{L}(A)\xrightarrow{s}G\stackrel{{ t}}{{\leftarrow}} \mathsf{L}(B)\) where \(G\) is well-formed with respect to \(\Sigma\). We say the **type** of such a diagram is \(A\to B\)._
Note that diagrams form a symmetric monoidal category as described in [1, Corollary 3.11]. This follows because \(\mathsf{L}\) is left-adjoint and therefore preserves colimits.
**Definition 4.2**.: _The symmetric monoidal category of diagrams is denoted \(\mathsf{Diagram}_{\Sigma+\mathsf{Frob}}\) and defined to be the subcategory of \({}_{\mathsf{L}}\mathsf{Csp}(\mathsf{BipartiteMultigraph})\) defined in [1, Corollary 3.11] whose objects are those of \(\mathsf{Wires}\), and whose arrows are isomorphism classes of diagrams._
In order to define \(\mathsf{Diagram}_{\Sigma+\mathsf{Frob}}\) as a _sub_category, it is important to verify that composition and tensor product preserve well-formedness of diagrams. This follows from Proposition 3.11, and we give a proof in Corollary 4.19.
Composition of structured cospans is by pushout, and tensor product is pointwise; data-parallel algorithms for both will shortly be given in Propositions 4.16 and 4.11, respectively. In fact, the remainder of this section is dedicated to showing (1) how to represent diagrams efficiently; (2) how to construct 'primitive' diagrams like identity and symmetry; and (3) data-parallel algorithms for tensor and composition. In addition, the aim of this section is to serve as a guide to implementing the algorithms described. We therefore include pseudo-Python code listings throughout.
### Representing Diagrams Efficiently
The naive representation of Definition 4.1 would require a full bipartite multigraph for each of the feet of the cospan as well as a complete morphism of bipartite multigraphs for the legs. However, in order to represent large diagrams it is important to be economical with data. We therefore exploit that \(\mathsf{L}\) is left-adjoint in order to show that much of this data is redundant. This will also simplify the definition of several algorithms in the rest of the paper.
**Proposition 4.3**.: _There is a bijective correspondence between diagrams and triples \((s,t,G)\) where \(A\xrightarrow{s}G(W)\xleftarrow{t}B\) is a cospan in \(\mathsf{FinFun}\)._
Proof.: We first show that any diagram uniquely defines such a triple. Let \(\mathsf{L}(A)\xrightarrow{\sigma}G\xleftarrow{\tau}\mathsf{L}(B)\) be a diagram. Since \(\mathsf{L}\) is left-adjoint (Proposition 3.13), there are unique morphisms \(s,t\) of \(\mathsf{Wires}\) such that \(\mathsf{L}(s)\,\sharp\,\epsilon_{G}=\sigma\) and \(\mathsf{L}(t)\,\sharp\,\epsilon_{G}=\tau\). By virtue of being morphisms of \(\mathsf{Wires}\), \(s\) and \(t\) are natural transformations with a single non-identity component defined on \(W\). We may therefore think of them as morphisms of \(\mathsf{FinFun}\).

In the reverse direction, suppose that \((s,t,G)\) is a triple where \(A\xrightarrow{s}G(W)\xleftarrow{t}B\) is a cospan in \(\mathsf{FinFun}\). This defines a diagram \(\mathsf{L}(A)\xrightarrow{\sigma}G\xleftarrow{\tau}\mathsf{L}(B)\) in the following way. We must take \(\sigma=\mathsf{L}(\mathcal{W}(s,G(w_{n})))\,\sharp\,\epsilon_{G}\) and \(\tau=\mathsf{L}(\mathcal{W}(t,G(w_{n})))\,\sharp\,\epsilon_{G}\).
This correspondence means that diagrams can be written in code as the following Diagram class.
```
class Diagram:
    s: FiniteFunction
    t: FiniteFunction
    G: BipartiteMultigraph
```
We will now use this alternate representation to define primitive diagrams and algorithms for tensor and composition. Note that in the remainder of this section we take the 2-categorical perspective, and consider specific diagrams (i.e., 1-cells) rather than isomorphism classes of diagrams. This is more natural from the perspective of the programmer working with diagrams, since one typically works with a representative of an isomorphism class instead of the class itself.
### Primitive Diagrams
**Proposition 4.4**.: _Given an object \(\mathcal{W}(w)\in\mathsf{Wires}\), the **identity diagram** is represented by the triple \((\mathsf{id},\mathsf{id},\mathsf{D}(w))\), where \(\mathsf{D}(w)\) is the discrete bipartite multigraph (Definition 3.14)._
Proof.: For an object \(A=\mathcal{W}(w)\) of \(\mathsf{Wires}\), the identity structured cospan is \(\mathsf{L}(A)\stackrel{{\mathsf{id}}}{{\rightarrow}}\mathsf{L}(A) \stackrel{{\mathsf{id}}}{{\leftarrow}}\mathsf{L}(A)\). We must therefore have \(s=t=\mathsf{id}\), and \(\mathsf{L}(A)=\mathsf{D}(\mathsf{L}(A))\) by definition.
In code, the identity diagram is constructed as follows.
```
@classmethod
def identity(cls, wn: FiniteFunction):
    s = FiniteFunction.identity(wn.source)
    t = FiniteFunction.identity(wn.source)
    G = BipartiteMultigraph.discrete(wn)
    return Diagram(s, t, G)
```
The symmetry of \(\mathsf{Diagram}_{\Sigma+\mathbf{Frob}}\) is given as follows.
**Proposition 4.5** (Symmetry/Twist Diagram).: _Let \(a:A\to\Sigma_{0}\) and \(b:B\to\Sigma_{0}\) be labelings, then the **symmetry diagram** is given by \((\mathsf{id}_{A+B},\sigma_{A,B},\mathsf{D}(a+b))\)._
Proof.: The symmetry structured cospan is given by \(\mathsf{L}(A+B)\xrightarrow{\mathsf{id}_{A+B}}\mathsf{L}(A+B)\xleftarrow{ \sigma_{A,B}}\mathsf{L}(B+A)\), and so the result follows as in Proposition4.4.
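In code this translates directly; a sketch, assuming a FiniteFunction.twist constructor (not among the listings above) for the block-swap permutation \(\sigma_{A,B}\):

```
@classmethod
def twist(cls, a: FiniteFunction, b: FiniteFunction):
    # the symmetry diagram (id_{A+B}, sigma_{A,B}, D(a+b)) of Proposition 4.5
    s = FiniteFunction.identity(a.source + b.source)
    t = FiniteFunction.twist(a.source, b.source)  # assumed: swaps the A and B blocks
    G = BipartiteMultigraph.discrete(a + b)
    return Diagram(s, t, G)
```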
\(\mathsf{Diagram}_{\Sigma+\mathbf{Frob}}\) is also a hypergraph category, which means that every object is equipped with a Special Frobenius monoid. This is the reason we denote the category \(\mathsf{Diagram}_{\Sigma+\mathbf{Frob}}\). In Section 6, we will see how the combinatorial condition of _monogamicity_ introduced in [4] allows us to define the category \(\mathsf{Diagram}_{\Sigma}\) without this additional structure.
**Proposition 4.6**.: \(\mathsf{Diagram}_{\Sigma+\mathbf{Frob}}\) _is a hypergraph category._

Proof.: \(\mathsf{L}\) is left-adjoint and therefore preserves colimits. \(\mathsf{Diagram}_{\Sigma+\mathbf{Frob}}\) is then a hypergraph category by [1, Theorem 3.12].
The Frobenius spiders (Definition 2.8) in \(\mathsf{Diagram}_{\Sigma+\mathbf{Frob}}\) can be explicitly characterised as those cospans whose apexes are discrete bipartite multigraphs, i.e. contain no generators. That is, diagrams of the form \((s,t,\mathsf{L}(A))\).
**Proposition 4.7**.: _A diagram \(d=(s,t,G)\) is a Frobenius spider iff \(G=\mathsf{L}(A)\) for some \(A\)._
Proof.: Deferred to Appendix C.
In code, we provide a constructor for Frobenius spiders as follows.
```
def spider(s, t, w):
    G = BipartiteMultigraph.discrete(w)
    return Diagram(s, t, G)
```
Being a hypergraph category, \(\mathsf{Diagram}_{\Sigma+\mathbf{Frob}}\) is equipped with a dagger in the following way.
**Proposition 4.8**.: _Hypergraph categories have a dagger, obtained by bending a morphism's wires around using the Frobenius cups and caps._
Proof.: See [13, Section 1.3.3].
The dagger of diagrams has a particularly convenient form: it swaps the legs of the cospan. That is, \((s,t,G)^{\dagger}=(t,s,G)\). We give a proof of this fact in Appendix C, but note here that it leads to the following simple implementation.
```
def dagger(self):
    return Diagram(self.t, self.s, self.G)
```
The final 'primitive' diagram we need to define is the _singleton_, representing a single generator of \(\Sigma_{1}\).
**Definition 4.9** (Singleton Diagram).: _Let \(x:1\to\Sigma_{1}\) be an operation, and \(a:A\to\Sigma_{0}\) and \(b:B\to\Sigma_{0}\) a typing of \(x\). The **singleton** diagram of \(x\) is denoted \(\mathsf{singleton}(a,b,x):=(\iota_{0},\iota_{1},G)\) where_
\[G(W)=A+B\qquad G(E_{i})=A\qquad G(E_{o})=B\qquad G(X)=1\]
\[G(w_{i})=\iota_{0}\qquad G(w_{o})=\iota_{1}\qquad G(x_{i})=!_{A}\qquad G(x_{o})=!_{B}\]
\[G(\mathsf{port}_{i})=\mathsf{id}_{A}\qquad G(\mathsf{port}_{o})=\mathsf{id}_{B} \qquad G(w_{n})=a+b\qquad G(x_{n})=x\]
_We will sometimes write just \(\mathsf{singleton}(x)\) when a chosen typing is assumed._
**Example 4.10**.: _Given a generator \(f:A\to B\otimes C\) represented as the element \(x=\langle f\rangle\), and typing \(a=\langle A\rangle\) and \(b=\langle B,C\rangle\), the singleton diagram \(\mathsf{singleton}(x)\) is depicted below._
This definition translates directly to code as follows.
```
@classmethod
def singleton(cls, a: FiniteFunction, b: FiniteFunction, xn: FiniteFunction):
    F = FiniteFunction  # abbreviation used below
    G = BipartiteMultigraph(
        wi = F.inj0(a.source, b.source),
        wo = F.inj1(a.source, b.source),
        xi = F.terminal(a.source),
        xo = F.terminal(b.source),
        wn = a + b,
        pi = F.identity(a.source),
        po = F.identity(b.source),
        xn = xn)
    return Diagram(
        s = F.inj0(a.source, b.source),
        t = F.inj1(a.source, b.source),
        G = G)
```
### Tensor Product of Diagrams
**Proposition 4.11** (Tensor Product of Diagrams).: _The tensor product of diagrams \(c_{0}:=(s_{0},t_{0},G_{0})\) and \(c_{1}:=(s_{1},t_{1},G_{1})\) is the diagram \(c_{0}\otimes c_{1}:=(s_{0}\otimes s_{1},t_{0}\otimes t_{1},G_{0}\otimes G_{1})\)_
Proof.: Let \(\mathsf{L}(A_{0})\xrightarrow{\sigma_{0}}G_{0}\xleftarrow{\tau_{0}}\mathsf{L}(B_{0})\) and \(\mathsf{L}(A_{1})\xrightarrow{\sigma_{1}}G_{1}\xleftarrow{\tau_{1}}\mathsf{L}(B_{1})\) be the respective structured cospans corresponding to \(c_{0}\) and \(c_{1}\) by Proposition 4.3. Then the tensor product \(c_{0}\otimes c_{1}\) is the following structured cospan.

\[\mathsf{L}(A_{0}\otimes A_{1})\xrightarrow{\sigma_{0}\otimes\sigma_{1}}G_{0}\otimes G_{1}\xleftarrow{\tau_{0}\otimes\tau_{1}}\mathsf{L}(B_{0}\otimes B_{1})\]

Since colimits are pointwise, we have \((\sigma_{0}\otimes\sigma_{1})_{W}=s_{0}\otimes s_{1}\) and \((\tau_{0}\otimes\tau_{1})_{W}=t_{0}\otimes t_{1}\), and so the representation of the tensor product \(c_{0}\otimes c_{1}\) is given by \((s_{0}\otimes s_{1},t_{0}\otimes t_{1},G_{0}\otimes G_{1})\).
Once again, this yields a straightforward implementation:
```
def tensor(c0, c1):
    return Diagram(
        s = c0.s @ c1.s,
        t = c0.t @ c1.t,
        G = c0.G @ c1.G)
```
The time complexity of the tensor product is linear (sequential) and constant (parallel) in the size of the diagram components.
**Proposition 4.12**.: _Let \(d_{i}=(s_{i},t_{i},G_{i})\) be diagrams of type \(A_{i}\to B_{i}\) for \(i\in\{0,1\}\). Computing the tensor product \((s,t,G)=d_{0}\otimes d_{1}\) has sequential time complexity_
\[O(A_{0}(W)+A_{1}(W))+O(G(W))+O(G(E_{i}))+O(G(E_{o}))+O(G(X))+O(B_{0}(W)+B_{1}(W))\]
_and PRAM time complexity \(O(1)\)._
Proof.: Computing the coproduct of bipartite multigraphs has sequential time complexity \(O(G(W))+O(G(E_{i}))+O(G(E_{o}))+O(G(X))\) since one must essentially just concatenate arrays. The PRAM time complexity of the same coproduct is \(O(1)\). Then computing \(s_{0}+s_{1}\) is \(O(A_{0}(W)+A_{1}(W))\) sequential and \(O(1)\) PRAM time complexity, and similarly for \(t_{0}+t_{1}\).
It will later be convenient to have an efficient method of tensoring a large number of operations. In particular, simply repeating the binary tensor operation has poor complexity: quadratic in the sequential case and linear in the parallel case. We therefore give the following proposition, which provides a simpler closed form for the tensor product of \(n\) operations.
**Proposition 4.13**.: _Let \(x:N\to\Sigma_{1}\) be a list of \(N\) operations, and for each operation \(x(i)\) let \((a(i),b(i))\in\tau(x(i))\) be a chosen typing so that \(a,b:N\to\Sigma_{0}^{*}\). Let \(|a(i)|\) and \(|b(i)|\) denote the arity and coarity of \(x(i)\) respectively. Then the \(N\)-fold tensor product of operations \(x\) is isomorphic to the diagram \((\iota_{0},\iota_{1},G)\), where_
\[G(W)=K_{i}+K_{o}\qquad G(E_{i})=K_{i}\qquad G(E_{o})=K_{o}\qquad G(X)=N\]
_and_
\[G(w_{i})=\iota_{0}\qquad G(w_{o})=\iota_{1}\qquad G(x_{i})=\bigotimes_{i\in N }!_{|a(i)|}\qquad G(x_{o})=\bigotimes_{i\in N}!_{|b(i)|} \tag{8}\]
\[G(\mathsf{port}_{i})=\left(\sum_{i\in N}\left(\iota_{0}:|a(i)|\to K_{i}\right)\right)\sharp\iota\qquad\qquad G(\mathsf{port}_{o})=\left(\sum_{i\in N}\left(\iota_{0}:|b(i)|\to K_{o}\right)\right)\sharp\iota\]
\[G(w_{n})=\sum_{i\in N}(j\mapsto a(i)_{j})+\sum_{i\in N}(j\mapsto b(i)_{j}) \qquad\qquad G(x_{n})=x\]
_where:_
* \(K_{i}=\sum_{i\in N}|a(i)|\) _is the total arity of all generators_
* \(K_{o}=\sum_{i\in N}|b(i)|\) _is the total coarity of all generators_
* \(!_{k}:k\to 1\) _is the unique terminal map._
* \(\iota_{0}:A\to A+B\) _is the first injection of the coproduct in_ \(\mathsf{FinFun}\)__
* \(\iota\) _denotes the canonical inclusion of a finite set into_ \(\mathbb{N}\)__
Proof.: Induction. The empty (0-fold) and singleton (1-fold) diagrams are already in the form above, so it remains to check the inductive step.
Let \(c=(\iota_{0},\iota_{1},G)\) be the \(N\)-fold tensor of singleton diagrams \(x:N\to\Sigma_{1}\). By inductive hypothesis, \(c\cong\bigotimes_{i\in N}\mathsf{singleton}(x(i))\). Now let \(s=(\iota_{0},\iota_{1},H)\) be a singleton diagram with the chosen typing \((a^{\prime}:1\to\Sigma_{0}^{*},b^{\prime}:1\to\Sigma_{0}^{*})\). It suffices to construct an isomorphism of structured cospans \(\alpha:c\otimes s\leftrightarrow d\) where \(d=(\iota_{0},\iota_{1},J)\) is in the required form.
Defining \(\alpha\) is straightforward. Set \(\alpha_{W}\) to be the evident interleaving permutation (given in the original by a string diagram), with all other components identities. Then one can compute that the image of \(w_{i}\) and \(w_{o}\) are the injections, and further that \(\alpha\) is an isomorphism of structured cospans. For example,

\[\alpha_{E_{o}}\,\sharp\,(G(w_{o})+H(w_{o}))\,\sharp\,\alpha_{W}=\iota_{1}\]

(the intermediate string-diagrammatic steps are omitted here),
with the other cases holding similarly.
Finally, we must have by naturality that \(J(w_{n})=\alpha_{W}^{-1}\,\sharp\,G(w_{n})\otimes H(w_{n})\), and so it remains to verify that this morphism is in the desired form.
Observe that both \(G(w_{n})\) and \(H(w_{n})\) are coproducts of maps: by definition \(H(w_{n})=a^{\prime}+b^{\prime}\) and \(G(w_{n})=g_{a}+g_{b}\), where \(g_{a}=\sum_{i\in N}(j\mapsto a(i)_{j})\) and \(g_{b}=\sum_{i\in N}(j\mapsto b(i)_{j})\) for some \(a,b:N\to\Sigma_{0}^{*}\). Then calculate as follows using the associativity and commutativity axioms.
We then have that \(g_{a}+a^{\prime}=\sum_{i\in N}(j\mapsto a(i)_{j})+(j\mapsto a^{\prime}(0)_{j})\) and \(g_{b}+b^{\prime}=\sum_{i\in N}(j\mapsto b(i)_{j})+(j\mapsto b^{\prime}(0)_{j})\), and thus \(J(w_{n})=\sum_{i\in N+1}(j\mapsto(a+a^{\prime})(i)_{j})+\sum_{i\in N+1}(j \mapsto(b+b^{\prime})(i)_{j})\), as required.
The \(N\)-fold tensor product of operations can be computed in time linear (sequential) and logarithmic (parallel) in the size of the resulting diagram, as witnessed by the following proposition.
**Proposition 4.14**.: _Let \(x:N\to\Sigma_{1}\) be operations, and \(d=(\iota_{0},\iota_{1},G)\) their \(N\)-fold tensor product as defined in Proposition 4.13. The sequential time complexity of computing \(d\) is \(O(G(W))+O(G(E_{i}))+O(G(E_{o}))+O(G(X))\), and the PRAM CREW time complexity is \(O(\log G(X))\)._
Proof.: Computing \(\iota_{0}\) and \(\iota_{1}\) is \(O(G(E_{i}))\) and \(O(G(E_{o}))\) respectively in the sequential case, and constant in the parallel case. Then to compute \(G\) in the sequential case is essentially just concatenation of arrays, and so the complexity is linear with respect to the size of each component, i.e. \(O(G(W))+O(G(E_{i}))+O(G(E_{o}))+O(G(X))\). Note that the \(G(x_{n})\) component does not need to be computed, since it is given.
In the parallel case, assuming the data for typings \(a\) and \(b\) is provided as a segmented array, one can compute components using the repeat and segmented arrange functions, giving \(O(\log N)=O(\log G(X))\) time complexity.
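To make the closed form of Proposition 4.13 concrete, here is a plain-Python sketch that builds the component arrays of \(G\) directly; the FiniteFunction wrappers and segmented-array machinery are elided, and the function name is illustrative:

```
def tensor_operations(x, a, b):
    # x: list of N operation labels; a[i], b[i]: input/output types of x[i].
    Ki = sum(len(ai) for ai in a)  # total arity
    Ko = sum(len(bi) for bi in b)  # total coarity
    # Boundary maps of the diagram are the injections into W = Ki + Ko.
    return dict(
        W=Ki + Ko, Ei=Ki, Eo=Ko, X=len(x),
        wi=list(range(Ki)),                            # G(w_i) = inj0
        wo=[Ki + j for j in range(Ko)],                # G(w_o) = inj1
        xi=[i for i, ai in enumerate(a) for _ in ai],  # repeat i, arity-many times
        xo=[i for i, bi in enumerate(b) for _ in bi],
        pi=[j for ai in a for j in range(len(ai))],    # ports 0..|a(i)|-1 per block
        po=[j for bi in b for j in range(len(bi))],
        wn=[t for ai in a for t in ai] + [t for bi in b for t in bi],
        xn=list(x))
```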
### Composition of Diagrams
Before giving the algorithm for composition of diagrams, we first illustrate it with an example. Composition in \(\mathtt{Diagram}_{\Sigma+\mathbf{Frob}}\) is by pushout, as computed via coequalizer. Concretely, when composing diagrams \(d_{0}\,\sharp\,d_{1}\), the basic idea is to first take their tensor product, and then coequalize the wires corresponding to the shared boundary.
**Example 4.15**.: _Let \(d_{0}=(s_{0},t_{0},G_{0})\) and \(d_{1}=(s_{1},t_{1},G_{1})\) be the diagrams illustrated below as cospans._
_The process of composition is illustrated with the following steps. We first tensor \(d_{0}\) and \(d_{1}\) (left)
_and then identify the \(\blacksquare\)-nodes connected by the red arrow between them (right)._
_Finally, we quotient together all connected components in the graph of \(\blacksquare\)-nodes, and take the source map of \(d_{0}\), and the target of \(d_{1}\)._
_This amounts to coequalizing the maps \(t_{0}\,\sharp\,\iota_{0}\) and \(s_{1}\,\sharp\,\iota_{1}\)._
Let us now describe this process formally.
**Proposition 4.16** (Composition of Diagrams).: _Let \(d_{0}:=(s_{0},t_{0},G_{0}):A_{0}\to A_{1}\) and \(d_{1}:=(s_{1},t_{1},G_{1}):A_{1}\to A_{2}\) be diagrams, and denote by \(q:=\mathsf{c}(t_{0}\,\sharp\,\iota_{0},\;s_{1}\,\sharp\,\iota_{1})\) the coequalizer, so that \(q:G_{0}(W)+G_{1}(W)\to Q\). Then the composition \(d_{0}\,\sharp\,d_{1}=(s,t,G)\) is given by_

\[s=s_{0}\,\sharp\,\iota_{0}\,\sharp\,q\qquad\qquad t=t_{1}\,\sharp\,\iota_{1}\,\sharp\,q\]

_where \(\alpha:G_{0}+G_{1}\to G\) is the \(\mathsf{ACSet}\) coequalizer whose \(W\) component is \(q\), and all other components are \(\mathsf{id}\), so that \(G\) is the functor with the following data_

\[G(W)=Q\qquad G(E_{i})=G_{0}(E_{i})+G_{1}(E_{i})\qquad G(E_{o})=G_{0}(E_{o})+G_{1}(E_{o})\qquad G(X)=G_{0}(X)+G_{1}(X)\]
\[G(w_{i})=(G_{0}(w_{i})+G_{1}(w_{i}))\,\sharp\,q\qquad G(w_{o})=(G_{0}(w_{o})+G_{1}(w_{o}))\,\sharp\,q\qquad G(f)=G_{0}(f)+G_{1}(f)\ \text{otherwise} \tag{9}\]
_and \(G(w_{n})=u\) is the canonical morphism of the coequalizer \(q\), i.e. the unique function defined so that \(u(q(i))=(G_{0}(w_{n})+G_{1}(w_{n}))(i)\)._
Proof.: Let \(\mathsf{L}(A_{0})\xrightarrow{\sigma_{0}}G_{0}\xleftarrow{\tau_{0}}\mathsf{L} (A_{1})\) and \(\mathsf{L}(A_{1})\xrightarrow{\sigma_{1}}G_{1}\xleftarrow{\tau_{1}}\mathsf{L} (A_{2})\) be the respective structured cospans corresponding to \(d_{0}\) and \(d_{1}\) by Proposition 4.3. Composition of cospans \(d:=d_{0}\,\sharp\,d_{1}\) is given by pushout, which we compute by coequalizer \(q=\mathsf{c}(\tau_{0}\,\sharp\,\iota_{0},\sigma_{1}\,\sharp\,\iota_{1})\) as below.
Note that this makes \(G=G_{0}+_{\mathsf{L}(A_{1})}G_{1}\). Recall that composition and colimits in \(\mathsf{ACSets}\) are pointwise [21, Corollary 4], so we can calculate \(\sigma_{W}=(\sigma_{0}\,\sharp\,\iota_{0}\,\sharp\,\alpha)_{W}=s_{0}\,\sharp\,\iota_{0}\,\sharp\,q\). Moreover, since \(\mathsf{L}\) is left adjoint, we can factor \(\sigma=\mathsf{L}(s)\,\sharp\,\epsilon_{G}\), and since \((\epsilon_{G})_{W}=\mathsf{id}\), we must have \(s=s_{0}\,\sharp\,\iota_{0}\,\sharp\,q\). By a similar argument, we have that \(\tau=\mathsf{L}(t)\,\sharp\,\epsilon_{G}\) with \(t=t_{1}\,\sharp\,\iota_{1}\,\sharp\,q\). We also see that \(\alpha\) is indeed the coequalizer:
\[\alpha_{W}=\mathsf{c}(\tau_{0}\,\sharp\,\iota_{0},\sigma_{1}\, \sharp\,\iota_{1})_{W}=\mathsf{c}(t_{0}\,\sharp\,\iota_{0},s_{1}\,\sharp\, \iota_{1})=q\] \[\alpha_{Y}=\mathsf{c}(\tau_{0}\,\sharp\,\iota_{0},\sigma_{1}\, \sharp\,\iota_{1})_{Y}=\mathsf{id}\]
The former holds because colimits of \(\mathsf{ACSets}\) are pointwise, and the latter because for each \(Y\neq W\), both \((\tau_{0}\,\sharp\,\iota_{0})_{Y}\) and \((\sigma_{1}\,\sharp\,\iota_{1})_{Y}\) must equal the initial map by uniqueness, and the coequalizer of initial maps is the identity.
Finally, we must verify that \(u\) is indeed the canonical morphism of the coequalizer (and is thus well-defined). For this to hold, we must have that \(q^{\prime}=G_{0}(w_{n})+G_{1}(w_{n})\) defines a co-fork:
This equation holds by the counit law and because \(s\) and \(t\) are the \(W\) components of label-preserving natural transformations \(\sigma\) and \(\tau\), and so \(t_{0}\,\sharp\,G_{0}(w_{n})=A_{1}(w_{n})=s_{1}\,\sharp\,G_{1}(w_{n})\). It then follows by the universal property that there exists a unique morphism \(u:G(W)\to\Sigma_{0}\) such that \(q\,\sharp\,u=G_{0}(w_{n})+G_{1}(w_{n})\). Since \(u\) is unique, we are therefore justified in defining it as the function \(u(q(i)):=q^{\prime}(i)\). More directly, if \(q(i)=q(j)\), then by the existence of a unique \(u\) we must have \(q^{\prime}(i)=q^{\prime}(j)\), and so \(u\) is well-defined.
**Remark 4.17**.: _The above proposition says that in the case of composition, the coequalizer (and therefore pushout) of diagrams can be computed purely by considering their wirings. That is, there is always a choice of coequalizer which only quotients wire nodes of the bipartite graph, leaving all other data unchanged._
Code for computing the composite of diagrams is straightforward:
```
def compose(f, g):
    assert f.type[1] == g.type[0]
    h = f @ g
    q = f.t.inject0(g.G.W).coequalizer(g.s.inject1(f.G.W))
    return Diagram(
        s = f.s.inject0(g.G.W) >> q,
        t = g.t.inject1(f.G.W) >> q,
        G = h.G.coequalize_wires(q))
```
We omit the implementation of coequalize_wires above, which is computed as in (9). However, we now give an explicit, data-parallel implementation for computing the universal morphism \(u\).
```
def universal(q: FiniteFunction, f: FiniteFunction):
    u = zeros(q.target)   # preallocate the table of u
    u[q.table] = f.table  # fill the table of u so that q ; u = f
    return FiniteFunction(f.target, u)
```
Note that in the above listing, we assume the PRAM CRCW model to achieve \(O(1)\) parallel time complexity. Specifically, the assignment u[q.table]=f.table will allow for multiple conflicting writes to the same memory location, with an arbitrary write succeeding. However, note that since both \(q\) and \(f\) must be label-preserving maps, all 'conflicts' _must_ write the same value, ensuring correctness of the implementation.
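For reference, the coequalizer method used in compose above can be computed sequentially as connected components of \(\blacksquare\)-nodes, matching the description in Example 4.15; a union-find sketch (not the data-parallel version, and the plain-list representation is an assumption):

```
def coequalizer(f, g, target):
    # f, g: lists of equal length with values in range(target).
    # Returns the table of q : target -> Q with q(f(a)) = q(g(a)) for all a.
    parent = list(range(target))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i
    for u, v in zip(f, g):
        parent[find(u)] = find(v)
    reps, q = {}, []
    for i in range(target):               # normalise components to 0..Q-1
        q.append(reps.setdefault(find(i), len(reps)))
    return q
```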
As expected, the complexity of composition is linear (sequential) and logarithmic (parallel) in the size of the diagram.
**Proposition 4.18**.: _Let \(d_{i}=\mathsf{L}(A_{i})\xrightarrow{s_{i}}G_{i}\xleftarrow{t_{i}}\mathsf{L}(A_{i+1})\) be diagrams for \(i\in\{0,1\}\). Let \((s,t,G)=d_{0}\otimes d_{1}\); then computing the composite \(d_{0}\mathbin{\sharp}d_{1}\) has sequential time complexity_

\[O(A_{0}(W))+O(A_{1}(W))+O(A_{2}(W))+O(G(W))+O(G(E_{i}))+O(G(E_{o}))+O(G(X))\]

_and PRAM CRCW time complexity \(O(\log A_{1}(W)+\log G(W))\)._
Proof.: Time complexity is that of the tensor product, plus the additional cost of computing coequalizers and applying them to \(G\). In the sequential case, this additional cost is \(O(A_{1}(W))+O(G(W))\). In the PRAM CRCW case, it is \(O(\log A_{1}(W))+O(\log G(W))\).
Finally, we verify that the tensor product and composite of diagrams with well-formed apexes are also well-formed. This ensures that \(\mathsf{Diagram}_{\Sigma+\mathbf{Frob}}\) indeed forms a (sub)category.
**Corollary 4.19**.: _If diagrams \(d_{0}\) and \(d_{1}\) have well-formed apexes, then so do tensor products \(d_{0}\otimes d_{1}\) and composites \(d_{0}\mathbin{\sharp}d_{1}\)._
Proof.: The case of tensor products is immediate since the edge data of both \(d_{0}\) and \(d_{1}\) is contained in \(d_{0}\otimes d_{1}\). In the case of composites \(d_{0}\mathbin{\sharp}d_{1}\), observe that the coequalizer \(\alpha:G_{0}+G_{1}\to G\) has all components except \(W\) equal to the identity. Thus by Proposition 3.11, we can conclude that \(G\) is well-formed as well.
## 5 Diagrams as Combinatorial Syntax
We now construct an explicit isomorphism \(\mathsf{Diagram}_{\Sigma+\mathbf{Frob}}\cong\mathsf{Free}_{\Sigma+\mathbf{Frob}}\). This will justify our claim that the arrows of \(\mathsf{Diagram}_{\Sigma+\mathbf{Frob}}\) are string diagrams over the signature \(\Sigma+\mathbf{Frob}\). Concretely, it will guarantee that equivalence classes of \((\Sigma+\mathbf{Frob})\)-terms are the same as equivalence classes of diagrams. With this isomorphism, we can then think of diagrams as a structure for representing syntax _combinatorially_. In Section 6, we will extend this to an isomorphism \(\mathsf{Diagram}_{\Sigma}\cong\mathsf{Free}_{\Sigma}\) by translating to our setting the notion of _monogamicity_ originally due to [4].
Our approach to constructing the isomorphism is as follows:
* Define the functor \(\mathcal{S}\) mapping \(\Sigma\)-terms to diagrams.
* Define the 'Frobenius Decomposition' of a diagram
* Define a functor \(\mathcal{D}\) using the Frobenius decomposition mapping diagrams to \(\Sigma\)-terms
* Prove that \(\mathcal{S}\) and \(\mathcal{D}\) form an isomorphism
Of particular note are the 'Frobenius decompositions' which we will shortly define. Such decompositions are not only useful for constructing the isomorphism, but also in applying _functors_ to diagrams. We describe an algorithm based on this decomposition in Section 9.
Mapping \(\Sigma\)-terms to diagrams is straightforward. We simply map generators \(\Sigma_{1}\) to their corresponding singleton diagrams (Definition 4.9), and build the diagram inductively. More formally, we can construct the following functor.
**Definition 5.1** (\(\mathcal{S}\)).: \(\mathcal{S}:\mathsf{Free}_{\Sigma+\mathbf{Frob}}\to\mathsf{Diagram}_{\Sigma+ \mathbf{Frob}}\) _is the identity-on-objects strict symmetric monoidal hypergraph functor defined inductively on operations \(g\in\Sigma_{1}\) as \(\mathsf{singleton}(g)\)._
Notice that \(\mathcal{S}\) has poor computational complexity. Although individual operations of tensor and composition are linear in the size of the resulting diagram, the inductively-defined functor above has \(O(n^{2})\) complexity. This is because we repeatedly 'append' a small diagram to an accumulator, which is analogous to constructing a length-\(n\) array by repeated appending. We will fix this problem in Section 7. Before that, we first construct an inverse to the functor \(\mathcal{S}\); a functor mapping from diagrams to \(\Sigma\)-terms.
### Diagrams to \(\Sigma\)-terms
We now define the functor \(\mathcal{D}:\mathsf{Diagram}_{\Sigma+\mathbf{Frob}}\to\mathsf{Free}_{\Sigma+\mathbf{Frob}}\), mapping diagrams to \(\Sigma\)-terms. To do so, we first show how each diagram can be factorised into what we call a 'Frobenius Decomposition'. This decomposition relies on the _hypergraph category_ structure of \(\mathsf{Diagram}_{\Sigma+\mathbf{Frob}}\) as witnessed by [1, Theorem 3.12] via Proposition 4.6.
The main idea of the Frobenius decomposition is to separate the structure of wires and generators in a string diagram. To illustrate this, we begin with an example.
**Example 5.2**.: _Suppose we have a morphism \(f\,{\sharp}\,g\) in a hypergraph category._
_A **frobenius decomposition** of this morphism is a diagram like the one pictured below._
_The idea is to picture all the wires in the diagram at the top as a 'bus' whose width is the number of internal wires in the diagram. Wires can then be connected to operations \(g\) by the maps \(e_{s}\) and \(e_{t}\), which respectively map wires to inputs and outputs of operations. Similarly, the \(s\) and \(t\) maps specify which of the 'internal' wires appear on the left and right boundaries, respectively._
_Note however that this decomposition is not unique. For example, we may permute the order of generators in the center to obtain an isomorphic diagram._
**Remark 5.3**.: _The purpose of Frobenius decompositions is to separate the elements of a diagram into frobenius spiders and operations. This makes it easier to define a hypergraph functor, whose action on Frobenius spiders is fixed, and so it will suffice to define its action on the \(g\) part of the decomposition._
In fact, every morphism in an arbitrary Hypergraph Category has a Frobenius decomposition, which we state formally in the following proposition.
**Proposition 5.4** (Frobenius Decomposition).: _Let \(f:A\to B\) be a morphism in a hypergraph category. Then there is a factorisation of \(f\) called the **Frobenius Decomposition** of \(f\) with the following form._
_where \(\sigma\), \(\tau\), \(e_{s}\), and \(e_{t}\) are Frobenius spiders, and \(g\) is an \(n\)-fold tensoring of generators._
Proof.: Induction. See Appendix B.1 for the full proof.
Since \(\mathsf{Diagram}_{\Sigma+\mathbf{Frob}}\) is a hypergraph category by [1, Theorem 3.12], every morphism must have a Frobenius decomposition. In order to define the functor \(\mathcal{D}\), it will be useful to give a _specific_ Frobenius decomposition in terms of which the functor will be defined. This will also serve to show that \(\mathsf{Diagram}_{\Sigma+\mathbf{Frob}}\) is _presented_ by the generators of \(\Sigma+\mathbf{Frob}\).
**Proposition 5.5**.: _Let \(d=\mathsf{L}(A)\xrightarrow{\mathsf{L}(s)\,\sharp\,\epsilon_{G}}G\xleftarrow{\mathsf{L}(t)\,\sharp\,\epsilon_{G}}\mathsf{L}(B)\) be a diagram over the signature \(\Sigma\). Then the following diagram is a Frobenius decomposition of \(d\)_
(10)
_where:_
* \(W=\mathcal{W}(w_{n})\)_,_ \(E_{i}=\mathcal{W}(G(w_{i}\,\sharp\,w_{n}))\)_, and_ \(E_{o}=\mathcal{W}(G(w_{o}\,\sharp\,w_{n}))\)__
* \(f_{i}=\mathcal{W}(G(w_{i}),G(w_{n}))\) _and_ \(f_{o}=\mathcal{W}(G(w_{o}),G(w_{n}))\)__
* \(p=(\mathsf{sort}_{\langle G(x_{i}),G(\mathsf{port}_{i})\rangle},\mathsf{id},\mathsf{L}(E_{i}))\) _and_ \(q=(\mathsf{sort}_{\langle G(x_{o}),G(\mathsf{port}_{o})\rangle},\mathsf{id},\mathsf{L}(E_{o}))\) _(where_ \(\mathsf{sort}\) _is defined in Proposition_ A.4_)_
* \(g=\bigotimes_{i\in X}\mathsf{singleton}(x_{n}(i))\) _is the diagram formed by the_ \(X\)_-fold tensor product of singleton diagrams_
Proof.: We defer the full proof to Appendix B.1, but provide a sketch here. It is clear that the composite pictured in (10) is of the correct form: \(\mathsf{S}(s)\) and \(\mathsf{S}(t)^{\dagger}\) correspond directly to \(s\) and \(t\) in Proposition 5.4, and the composites \(\mathsf{S}(G(w_{i}))^{\dagger}\,\sharp\,p\) and \(q\,\sharp\,\mathsf{S}(G(w_{o}))\) to \(e_{s}^{\dagger}\) and \(e_{t}\). It therefore remains to show that this decomposition is indeed equal to \((s,t,G)\). This follows because there is a choice of coequalizer such that \(p\,\sharp\,g\,\sharp\,q\) is isomorphic to a tensoring \(g^{\prime}=(\iota_{0},\iota_{1},G^{\prime})\) of the generators in \(G\). Then replacing \(p\,\sharp\,g\,\sharp\,q\) with \(g^{\prime}\), one can again choose coequalizers so that the full composite is equal to \((s,t,G)\).
**Remark 5.6**.: _Note that the sorting permutations \(\mathsf{sort}_{\langle x_{i},\mathsf{port}_{i}\rangle}\) and \(\mathsf{sort}_{\langle x_{o},\mathsf{port}_{o}\rangle}\) (Proposition A.4) are unique because their'sorting keys' are monomorphic, which follows because \(G\) is well-formed. In addition, instead of actually computing the sorting permutations using a sorting algorithm, one can compute them through arithmetic in linear (sequential) and constant (parallel) time._
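For instance (a sketch under the assumptions of Remark 5.6, with plain lists standing in for finite functions), the rank of input edge \(e\) under the key \(\langle x_{i},\mathsf{port}_{i}\rangle\) is \(\mathsf{offset}[x_{i}(e)]+\mathsf{port}_{i}(e)\), where \(\mathsf{offset}\) is the exclusive prefix sum of generator arities:

```
from itertools import accumulate

def sort_by_generator_and_port(xi, pi, arity):
    # xi[e]: generator of edge e; pi[e]: its port; arity[g]: arity of g.
    # Well-formedness makes the key (xi, pi) mono with pi[e] < arity[xi[e]],
    # so the ranks below form a permutation of range(len(xi)).
    offset = [0] + list(accumulate(arity))[:-1]  # exclusive prefix sum
    return [offset[x] + p for x, p in zip(xi, pi)]
```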
Frobenius Decompositions allow us to define a functor from diagrams to \(\Sigma\)-terms. We simply need to map each component of the decomposition to a representative \(\Sigma\)-term.
**Proposition 5.7**.: _There is a strict symmetric identity-on-objects monoidal hypergraph functor \(\mathcal{D}:\mathsf{Diagram}_{\Sigma+\mathbf{Frob}}\to\mathsf{Free}_{\Sigma+ \mathbf{Frob}}\) which maps arrows to their frobenius decompositions._
Proof.: In Appendix B.1 we show that \(\mathcal{D}\) is well-defined (Proposition B.3), a functor (Proposition B.4), and strict monoidal (Proposition B.5). Finally, it preserves the symmetry and Special Frobenius monoid structure by definition, since Frobenius spiders are mapped to the corresponding spiders in \(\mathsf{Free}_{\Sigma+\mathbf{Frob}}\).
### Isomorphism
We can now show that \(\mathcal{S}\) and \(\mathcal{D}\) form an isomorphism. We use the 'Change of Basis' lemma from [26, B.1], which gives conditions under which two categories are isomorphic. We begin by proving one of these conditions.
**Proposition 5.8**.: _Diagrams are generated by the operations of \(\Sigma+\mathbf{Frob}\). That is, every diagram can be constructed by tensor and composition of \(\mathsf{id}\), \(\sigma\), and generators of \(\Sigma+\mathbf{Frob}\)._
Proof.: By Proposition 5.5, every diagram has a Frobenius decomposition. Since each element of the decomposition is either a Frobenius spider or a tensoring of operations, the full diagram can be constructed by tensor and composition of generators.
We can now give the main result.
**Proposition 5.9**.: \(\mathcal{S}\) _and \(\mathcal{D}\) form an isomorphism._
Proof.: The change of basis lemma from [26, B.1] says that if the following conditions hold then \(\mathcal{S}\) and \(\mathcal{D}\) form an isomorphism.
* \(\mathcal{S}\) and \(\mathcal{D}\) are strict symmetric monoidal identity-on-objects hypergraph functors (Definition 5.1, Proposition B.5)
* \(\mathsf{Free}_{\Sigma+\mathbf{Frob}}\) is generated by the operations of \(\Sigma+\mathbf{Frob}\) (by definition)
* \(\mathsf{Diagram}_{\Sigma+\mathbf{Frob}}\) is generated by singleton diagrams of operations in \(\Sigma+\mathbf{Frob}\) (Proposition 5.8)
* \(\mathcal{D}(\mathcal{S}(g))\cong g\) for \(g\in\Sigma_{1}\).
It therefore suffices to show that \(\mathcal{D}\circ\mathcal{S}\) is the identity on generators \(g\in\Sigma_{1}\) up to isomorphism. This is straightforward to derive: let \(\mathtt{singleton}(g):A\to B\) be an arbitrary generator; then the result follows by applying the unit and counit axioms of special Frobenius monoids as below.
\[\mathcal{D}(\mathcal{S}(g))\cong g\]

(The string-diagrammatic calculation witnessing this, which applies the (co)unit axioms to remove the spiders around \(g\), is omitted here.)
## 6 Symmetric Monoidal Case
We now examine the case of symmetric monoidal categories. Specifically, we will conclude that there is a subcategory \(\mathsf{Diagram}_{\Sigma}\) of \(\mathsf{Diagram}_{\Sigma+\mathbf{Frob}}\) which is isomorphic to \(\mathsf{Free}_{\Sigma}\).
Our approach is based on that of [4], who characterise string diagrams over \(\Sigma+\mathbf{Frob}\) in terms of hypergraphs. By requiring an additional combinatorial condition, 'monogamous acyclicity', on their hypergraphs, they recover string diagrams over \(\Sigma\). That is, _monogamous acyclic_ hypergraphs correspond to string diagrams _without the additional hypergraph structure_ of \(\mathbf{Frob}\).
Since the datastructure we describe in Definition 4.1 is essentially an _encoding_ of the hypergraphs in [4], we simply translate this condition to our setting now.
**Definition 6.1** (Monogamicity [4]).: _A diagram \(d=\mathsf{L}(A)\xrightarrow{s}G\stackrel{{ t}}{{\leftarrow}} \mathsf{L}(B)\) is **monogamous** when \(s\) and \(t\) are mono, and the following hold for all \(\blacksquare\)-nodes \(i\)._
\[\mathsf{indegree}(i)=\begin{cases}0&\text{if }\exists j.s(j)=i\\ 1&\text{otherwise}\end{cases}\qquad\qquad\qquad\mathsf{outdegree}(i)=\begin{cases} 0&\text{if }\exists j.t(j)=i\\ 1&\text{otherwise}\end{cases}\]
_where we write \(\mathsf{indegree}(i)=|G(w_{o})^{-1}[i]|\) and \(\mathsf{outdegree}(i)=|G(w_{i})^{-1}[i]|\): output edges \(\circ\to\blacksquare\) point into a wire, while input edges \(\blacksquare\to\circ\) point out of it._
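A direct sequential check of these conditions might look as follows (a sketch: the maps are taken as plain integer lists, and the acyclicity check of Definition 6.2 is separate):

```
def is_monogamous(s, t, wi, wo, num_wires):
    # s, t: boundary maps into range(num_wires);
    # wi, wo: wire endpoints of input (wire -> gen) and output (gen -> wire) edges.
    if len(set(s)) != len(s) or len(set(t)) != len(t):
        return False                       # s and t must be mono
    indeg, outdeg = [0] * num_wires, [0] * num_wires
    for w in wo: indeg[w] += 1             # output edges point into wires
    for w in wi: outdeg[w] += 1            # input edges point out of wires
    src, tgt = set(s), set(t)
    return all(indeg[i] == (0 if i in src else 1) and
               outdeg[i] == (0 if i in tgt else 1)
               for i in range(num_wires))
```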
**Definition 6.2** (Acyclicity).: _A bipartite multigraph is acyclic when its underlying graph is acyclic._
Let us now verify that these conditions indeed characterise morphisms of \(\mathsf{Free}_{\Sigma}\). We begin with a lemma which says that the monogamous acyclic 'Frobenius spiders' are simply permutations.
**Proposition 6.3**.: _If \(d=(s,t,\mathsf{L}(Y))\) is a monogamous acyclic Frobenius spider then \(s\) and \(t\) are permutations._
Proof.: By Definition 6.1, \(s\) and \(t\) are monomorphisms (i.e., injective). Then observe that \(\mathsf{L}(Y)\) is a discrete graph, so each \(\blacksquare\)-node must have in- and out-degree \(0\). For this to hold, it must be that each node is in the image of both \(s\) and \(t\), so they are surjective. It then follows immediately that \(s\) and \(t\) are bijections.
We now check that morphisms of \(\mathsf{Free}_{\Sigma}\) map under \(\mathcal{S}\) to monogamous acyclic diagrams.

**Proposition 6.4**.: _If \(f\) is a morphism of \(\mathsf{Free}_{\Sigma}\), then \(\mathcal{S}(f)\) is monogamous acyclic._
Proof.: Induction.
**Base Case: Generators**: By Proposition 6.3, the generators \(\mathsf{id}\) and \(\sigma\) are monogamous acyclic. It is then straightforward to verify that singleton diagrams (Definition 4.9) are monogamous acyclic, and so we have for each \(g\in\Sigma_{1}\) that \(\mathcal{S}(g)\) is monogamous acyclic.
**Inductive Step: Tensor Product**: Let \(d_{i}=(s_{i},t_{i},G_{i})\) be monogamous acyclic diagrams for \(i\in\{0,1\}\). Then \((s,t,G)\coloneqq d_{0}\otimes d_{1}=(s_{0}\otimes s_{1},t_{0}\otimes t_{1},G_ {0}\otimes G_{1})\) by definition. \(G\) is acyclic because it is the disjoint union of acyclic graphs, and \(s\) and \(t\) are mono because \(s_{i}\) and \(t_{i}\) are. It then remains to check the in/outdegree conditions. We verify the former and omit the latter for brevity.
Suppose \(i\) is a \(\blacksquare\)-node in \(G_{0}\). Then if \(\mathsf{indegree}(i)=1\), by assumption \(i\) is not in the image of \(s_{0}\), and thus not in the image of \(s_{0}\otimes s_{1}\). Moreover, if \(\mathsf{indegree}(i)=0\), then there is some \(j\) with \(s_{0}(j)=i\), and then \((s_{0}\otimes s_{1})(j)=i\).
Now suppose \(i\) is a \(\blacksquare\)-node in \(G_{1}\), and write \(m:=G_{0}(W)\). If \(\mathsf{indegree}(i)=1\) then by assumption \(i\) is not in the image of \(s_{1}\), and \(m+i\) is not in the image of \(s_{0}\otimes s_{1}\). Further, if \(\mathsf{indegree}(i)=0\), then there is some \(j\) with \(s_{1}(j)=i\), and then \((s_{0}\otimes s_{1})(a_{0}+j)=m+i\), where \(a_{0}\) is the size of the source of \(s_{0}\).
**Inductive Step: Composition**: Let \(d_{i}=\mathsf{L}(A_{i})\xrightarrow{\mathsf{L}(s_{i})\,\sharp\,\epsilon_{G_{i}}}G_{i}\xleftarrow{\mathsf{L}(t_{i})\,\sharp\,\epsilon_{G_{i}}}\mathsf{L}(A_{i+1})\) be diagrams for \(i\in\{0,1\}\). Their composite is given in terms of the coequalizer \(q=\mathsf{c}(t_{0}\,\sharp\,\iota_{0},s_{1}\,\sharp\,\iota_{1})\). Since \(t_{0}\,\sharp\,\iota_{0}\) and \(s_{1}\,\sharp\,\iota_{1}\) are mono, if \(q(i)=q(j)\) for \(i\neq j\), then there is some unique \(a\in A_{1}\) with \((t_{0}\,\sharp\,\iota_{0})(a)=i\) and \((s_{1}\,\sharp\,\iota_{1})(a)=j\) (or vice-versa). But then \(\mathsf{outdegree}(i)=0\) and \(\mathsf{indegree}(j)=0\) because \(d_{0}\) and \(d_{1}\) are monogamous acyclic, so \(\mathsf{indegree}(q(i))=\mathsf{indegree}(i)\) and \(\mathsf{outdegree}(q(i))=\mathsf{outdegree}(j)\), and the quotiented graph is monogamous as well.
Finally, note that acyclicity is preserved for the same reason: if \(G_{0}\) and \(G_{1}\) are acyclic, then only \(\blacksquare\)-nodes in \(G_{1}\) are reachable from \(q(i)\), and thus no cycles are possible.
We now check the reverse: that all monogamous acyclic diagrams correspond to morphisms of \(\mathsf{Free}_{\Sigma}\). This fact is first proven in [4], and is thus somewhat expected. Moreover, our argument is essentially the same as [26, Proposition B.3], so we only sketch a proof here.
**Proposition 6.5**.: _If \(d\) is a monogamous acyclic diagram, then \(\mathcal{D}(d)\) is isomorphic to a morphism of \(\mathsf{Free}_{\Sigma}\)._
Proof.: Let \(d=(s,t,G)\) be a diagram. Using the acyclicity property, \(d\) can be decomposed into the form \(s\,\sharp\,\mathcal{S}(g\otimes\mathsf{id})\,\sharp\,d^{\prime}\), peeling a single generator \(g\) off the front. Then, since \(s\) is mono, the result holds by induction on the number of generators in \(d\).
By these two lemmas, there exists a category of monogamous acyclic diagrams whose morphisms are isomorphic to those of the free symmetric monoidal category over the same signature.
**Proposition 6.6**.: _There is a subcategory \(\mathsf{Diagram}_{\Sigma}\) of \(\mathsf{Diagram}_{\Sigma+\mathbf{Frob}}\) whose morphisms are monogamous acyclic diagrams, and \(\mathsf{Diagram}_{\Sigma}\cong\mathsf{Free}_{\Sigma}\)._
Proof.: The property of monogamous acyclicity is closed under tensor and composition as guaranteed by Propositions 6.4 and 6.5. Moreover, the isomorphism \(\mathsf{Diagram}_{\Sigma}\cong\mathsf{Free}_{\Sigma}\) is given by the functors \(\mathcal{S}\) and \(\mathcal{D}\) restricted to monogamous acyclic diagrams.
**Remark 6.7**.: _Note that the functor \(\mathcal{D}\) does not directly give a \(\Sigma\)-term, but 'only' a \((\Sigma+\mathbf{Frob})\)-term equivalent to one. If one wishes to recover a bona fide \(\Sigma\)-term from a diagram, it is straightforward to use a decomposition based on, e.g., topological sort, as described in [26, Proposition B.13]._
## 7 A Faster Functor
In Section 4, we gave algorithms for tensor and composition of diagrams which had \(O(n)\) sequential and \(O(\log n)\) PRAM complexity. This allowed us to define a functor \(\mathcal{S}:\mathsf{Free}_{\Sigma+\mathbf{Frob}}\to\mathsf{Diagram}_{\Sigma+\mathbf{Frob}}\). However, the complexity of applying this functor naively is \(O(n^{2})\).
We now give a faster functor \(\mathcal{F}:\mathsf{Free}_{\Sigma+\mathbf{Frob}}\to\mathsf{Diagram}_{\Sigma+ \mathbf{Frob}}\) which computes the resulting diagram directly. Importantly, the algorithm presented here will have linear (sequential) and logarithmic (PRAM CREW) time complexity in the size of the resulting diagram. Given a \(\Sigma\)-term \(t\), the basic idea is to tensor all the generators of \(t\) and then wire them all together in a single step. We illustrate this process with an example.
**Example 7.1**.: _Consider the \(\Sigma\)-term below (rendered as a tree) and its corresponding string diagram._
_The idea of this section is to build the string diagram on the right by first tensoring all the diagrams \(d_{0}\otimes d_{1}\otimes d_{2}\), and then 'wiring up' the diagram in a single step. More visually, we can picture this as the string diagram below, with the target of \(d_{0}\) 'wired up' to the source of \(d_{1}\)._
### A Faster Functor
In order to 'wire up a diagram in a single step', we must specify which \(\blacksquare\)-nodes are to be quotiented together, and which will belong on the boundaries. This specification comes in the form of a 4-tuple, which we call the 'wiring maps'.
**Definition 7.2** (Wiring Maps).: _Let \(e\) be a binary tree whose \(n\) leaves are the diagrams \(d_{i}=(s_{i},t_{i},G_{i})\), and whose internal nodes are labeled either \(\sharp\) or \(\otimes\). The four **wiring maps** of \(e\) have types_
\[e_{s}:A_{S}\to G(W)\qquad e_{t}:A_{T}\to G(W)\qquad e_{s}^{\prime}:A_{I}\to G(W) \qquad e_{t}^{\prime}:A_{I}\to G(W) \tag{11}\]
_where \(G=\bigotimes\limits_{i\in\overline{n}}G_{i}\), and \(A_{S}\), \(A_{T}\), and \(A_{I}\) are computed recursively as follows. When \(e\) is a binary tree consisting of a single leaf \(e=\mathsf{L}(A)\xrightarrow{\mathsf{L}(s)\,\sharp\,\epsilon_{G}}G\xleftarrow{\mathsf{L}(t)\,\sharp\,\epsilon_{G}}\mathsf{L}(B)\), the wiring maps are the \(\mathsf{Wires}\) morphisms pictured in the original: the boundary maps are \(s\) and \(t\), and the internal maps are empty._
_The wiring maps of a tree whose root is labeled \(\otimes\) or \(\sharp\), with subtrees \(l\) and \(r\), are built from the wiring maps of the subtrees; the original defines them by string diagrams, which are omitted here. Roughly, for an \(\otimes\)-labeled root the maps are tensored, while for a \(\sharp\)-labeled root the targets of \(l\) and the sources of \(r\) are appended to the internal maps \(e_{s}^{\prime}\) and \(e_{t}^{\prime}\)._
**Example 7.3**.: _Recall the example \(\Sigma\)-term and its corresponding string diagram below._
_Writing \(s_{i}\) and \(t_{i}\) for the source and target maps of \(d_{i}\), we have_
_Notice that coequalizing 'internal wiring maps' \(e^{\prime}_{s}\) and \(e^{\prime}_{t}\) will identify the targets of \(d_{0}\) with the sources of \(d_{1}\)._
Using the wiring maps, we can now define the 'fast' functor \(\mathcal{F}\).
**Definition 7.4**.: \(\mathcal{F}:\mathsf{Free}_{\Sigma+\mathsf{Frob}}\to\mathsf{Diagram}_{\Sigma+ \mathsf{Frob}}\) _is the identity-on-objects functor defined on arrows (\(\Sigma\)-terms) as follows. Let \(e\) be the binary tree formed by replacing the leaves of a \(\Sigma\)-term with the corresponding singleton diagrams, and let_
\[s:A_{S}\to G(W)\qquad t:A_{T}\to G(W)\qquad s^{\prime}:A_{I}\to G(W)\qquad t^{ \prime}:A_{I}\to G(W)\]
_be the wiring maps of \(e\). Define \(q:=\mathsf{c}(t^{\prime},s^{\prime})\) to be the coequalizer of the internal wiring maps._
(14)
_As in Proposition 4.16, \(q\) lifts to a coequalizer \(\alpha\) in \(\mathsf{BipartiteMultigraph}\) whose \(W\) component is \(q\) and all other components are \(\mathsf{id}\). \(\mathcal{F}(e)\) is then the diagram \((s\,\sharp\,q,\;t\,\sharp\,q,\;\mathsf{target}(\alpha))\)._
**Remark 7.5**.: _Notice that not only does \(\mathcal{F}\) define a functor from \(\mathsf{Free}_{\Sigma+\mathsf{Frob}}\), but it can also be used to arbitrarily 'wire up' existing diagrams, since it can be applied to any tree whose leaves are diagrams, not just singleton diagrams._
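As a sketch, Definition 7.4 translates to code along the following lines; the name wire_up is illustrative, and the reduce-based tensoring would in practice be replaced by the fast \(N\)-fold tensor of Proposition 4.13:

```
from functools import reduce

def wire_up(diagrams, s, t, s_internal, t_internal):
    # Tensor all the leaf diagrams, then coequalize the internal wiring
    # maps in a single step, as in Definition 7.4.
    d = reduce(lambda x, y: x @ y, diagrams)
    q = t_internal.coequalizer(s_internal)   # q := c(t', s')
    return Diagram(s >> q, t >> q, d.G.coequalize_wires(q))
```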
In Proposition 7.7 we will verify that \(\mathcal{F}\) and \(\mathcal{S}\) give isomorphic diagrams when applied to the same expression tree. This will also serve to verify the well-definedness of Definition 7.4. A key lemma in this proof will be the functoriality of \(\mathcal{F}\), which we now prove.
**Proposition 7.6** (Functoriality of \(\mathcal{F}\)).: \(\mathcal{F}(\mathsf{id})\cong\mathsf{id}\)_, \(\mathcal{F}(l\otimes r)\cong\mathcal{F}(l)\otimes\mathcal{F}(r)\), and \(\mathcal{F}(l\,\sharp\,r)\cong\mathcal{F}(l)\,\sharp\,\mathcal{F}(r)\)._

Proof.: We treat the case of composition, which requires exhibiting an isomorphism of cospans between \(\mathcal{F}(l\,\sharp\,r)\), with apex \(Q\) as in (14), and \(\mathcal{F}(l)\,\sharp\,\mathcal{F}(r)\), with apex \(Q^{\prime}\) as below.
(15)
The proof is in four parts: (1) construction of the 'forward' map \(u:Q\to Q^{\prime}\), (2) the 'reverse' map \(u^{\prime}:Q^{\prime}\to Q\), (3) showing they form an isomorphism, and (4) verifying this extends to a cospan isomorphism.
**(1) Forward**: Let us first construct the unique morphism \(u:Q\to Q^{\prime}\) by showing the existence of a map \(x:G(W)\to Q^{\prime}\) which coequalizes \(s^{\prime}\) and \(t^{\prime}\).
By definition, \(G(W)=G_{l}(W)\otimes G_{r}(W)\), and so we can define \(x:G(W)\to Q^{\prime}\) from the subtree coequalizers \(q_{l}\) and \(q_{r}\) and the boundary coequalizer of \(\mathcal{F}(l)\,\sharp\,\mathcal{F}(r)\) (the original defines \(x\) by a string diagram, omitted here), and so we can calculate that

\[t^{\prime}\,\sharp\,x=s^{\prime}\,\sharp\,x\]

(the string-diagrammatic steps of this calculation are omitted).
Thus since \(x\) coequalizes \(t^{\prime}\) and \(s^{\prime}\), by the universal property of coequalizers there is a unique morphism \(u:Q\to Q^{\prime}\) such that \(q\,\sharp\,u=x\).
**(2) Reverse**: We now construct the reverse morphism \(u^{\prime}:Q^{\prime}\to Q\). First define maps \(y_{l}:G_{l}(W)\to Q\) and \(y_{r}:G_{r}(W)\to Q\) as follows.
\[y_{l}:=\iota_{0}\,\sharp\,q\qquad\qquad y_{r}:=\iota_{1}\,\sharp\,q\]

Now calculate that \(y_{l}\) coequalizes \(l^{\prime}_{t}\) and \(l^{\prime}_{s}\):

\[l^{\prime}_{t}\,\sharp\,y_{l}=l^{\prime}_{s}\,\sharp\,y_{l}\]

(the string-diagrammatic steps are omitted). A similar calculation shows that \(r^{\prime}_{t}\,\sharp\,y_{r}=r^{\prime}_{s}\,\sharp\,y_{r}\), and so by the universal property there must exist unique morphisms \(u_{l}:Q_{l}\to Q\) and \(u_{r}:Q_{r}\to Q\) such that \(y_{l}=q_{l}\,\sharp\,u_{l}\) and \(y_{r}=q_{r}\,\sharp\,u_{r}\).
Using \(u_{l}\) and \(u_{r}\), we can define a map \(y\) out of \(Q^{\prime}\) (given in the original by a string diagram, omitted here), which by the universal property induces the reverse morphism \(u^{\prime}:Q^{\prime}\to Q\).

**(3) Isomorphism**: One then calculates (string-diagrammatic steps omitted) that \(u_{l}\,\sharp\,u\) and \(u_{r}\,\sharp\,u\) agree with the corresponding legs of the cospan (15), as desired; a symmetric argument applies to \(u^{\prime}\), completing the proof that \(u\) and \(u^{\prime}\) form an isomorphism.
**(4) Cospan Isomorphism**: We now show that \(u\) extends to an isomorphism of cospans. The following calculation equates the cospan source legs of (14) and (15) via the universal map \(u\). [diagrammatic calculation omitted]

A similar calculation shows the reverse direction, and symmetric arguments apply to the target maps, completing the proof.
We now prove the isomorphism of \(\mathcal{F}\) and \(\mathcal{S}\).
**Proposition 7.7**.: _Let \(e\) be a \(\Sigma\)-term. Then there is an isomorphism of diagrams \(\mathcal{F}(e)\cong\mathcal{S}(e)\)._
Proof.: We proceed by induction. In the base case when \(e\in\{\mathrm{id},\sigma\}\cup\Sigma_{1}\) we have \(\mathcal{F}(e)=\mathcal{S}(e)\). For the inductive step, assume that there are isomorphisms \(u_{l}:\mathcal{F}(l)\cong\mathcal{S}(l)\) and \(u_{r}:\mathcal{F}(r)\cong\mathcal{S}(r)\). We must now show that \(\mathcal{F}(e)\cong\mathcal{S}(e)\) for the cases \(e=l\,\sharp\,r\) and \(e=l\otimes r\). In the former case, observe that
\[\mathcal{F}(l\,\sharp\,r)\;=\;\mathcal{F}(l)\,\sharp\,\mathcal{F}(r)\;\cong\;\mathcal{S}(l)\,\sharp\,\mathcal{S}(r)\;=\;\mathcal{S}(l\,\sharp\,r)\]
via the isomorphisms \(u_{l}\) and \(u_{r}\); the case \(e=l\otimes r\) is analogous. [remaining diagrammatic calculations omitted]
**Definition 8.1** (Left and Right \(\sharp\)-Ancestors).: _In a binary tree \(e\), a node \(i\) has a **right \(\sharp\)-ancestor** if it is in the left subtree of any node \(j\) labeled \(\sharp\). Respectively, \(i\) has a **left \(\sharp\)-ancestor** if it is in the right subtree of any node \(j\) labeled \(\sharp\)._
Computing the left/right ancestors of a node determines how to 'wire up' the diagram. We illustrate this with the following example.
**Example 8.2**.: _Pictured below is a \(\Sigma\)-expression rendered as a tree, with the closest left (resp. right) \(\sharp\)-ancestor of each node listed below, with \(\cdot\) indicating a node has no left (resp. right) \(\sharp\)-ancestor. [tree figure and ancestor table omitted]_
_Diagrams (leaves) \(d_{0}\) and \(d_{1}\) are 'bound' by node \(1\), indicating that the outputs of \(d_{0}\) will be 'connected' to the inputs of \(d_{1}\)._
**Definition 8.3** (Ancestor Maps).: _Let \(e\) be a binary tree of \(n\) leaves and \(n-1\) (non-leaf) nodes. The **left and right ancestor maps** of \(e\) are the following functions._
\[a_{L}:n\to n\]
\[a_{L}(i):=\begin{cases}0&\text{if $i$ has no left $\sharp$-ancestors}\\ j+1&\text{for the largest $j$ such that $j$ is a left $\sharp$-ancestor of $i$}\end{cases}\]
\[a_{R}:n\to n\]
\[a_{R}(i):=\begin{cases}j&\text{for the smallest $j$ such that $j$ is a right $\sharp$-ancestor of $i$}\\ n-1&\text{if $i$ has no right $\sharp$-ancestors}\end{cases}\]
**Remark 8.4**.: _The left ancestor map \(a_{L}\) returns the index of the 'closest' ancestor for which a given node is in the right subtree- meaning that the ancestor is 'left' of the node in an inorder traversal._
We will show how to efficiently compute the ancestor maps in Section 8.2.
**Proposition 8.5** (Wiring Maps Alternative Definition).: _Let \(e\) be a tree with \(n\) leaves labeled by the diagrams \(d_{i}:A_{i}\to B_{i}\) for \(i\in\overline{n}\). The wiring maps of \(e\) can be equivalently defined as follows._
\[e_{s}\;=\;\iota_{0}^{S,S^{\prime}}\,\sharp\,p_{s}\,\sharp\,\bigotimes_{i\in\overline{n}}s_{i}\qquad\qquad e_{t}\;=\;\iota_{1}^{T^{\prime},T}\,\sharp\,p_{t}\,\sharp\,\bigotimes_{i\in\overline{n}}t_{i}\tag{16}\]
_Where \(p_{s}\) and \(p_{t}\) are the natural transformations defined by the stable sorting permutations \(\mathsf{sort}_{\langle a_{L},\mathsf{id}\rangle}\) and \(\mathsf{sort}_{\langle a_{R},\mathsf{id}\rangle}\), respectively2, and the objects \(S\), \(S^{\prime}\), \(T^{\prime}\), and \(T\) are defined as follows._
Footnote 2: We define ‘stable’ sorts in Definition A.5, and omit subscripts of these natural transformations to reduce notational burden.
\[S=\bigotimes_{\{i\in\overline{n}|a_{L}(i)=0\}}A_{i}\qquad\qquad S^{\prime}= \bigotimes_{\{i\in\overline{n}|a_{L}(i)>0\}}A_{i}\]
\[T=\bigotimes_{\{i\in\overline{n}|a_{R}(i)=n-1\}}B_{i}\qquad\qquad T^{\prime}= \bigotimes_{\{i\in\overline{n}|a_{R}(i)<n-1\}}B_{i}\]
**Remark 8.6**.: _The objects \(S\) and \(S^{\prime}\) refer to unbound and bound source maps, respectively. The source map of a diagram is 'unbound' if it won't be quotiented with other wires in the construction of the diagram. Notice that the objects in the tensor \(S\) are precisely those source objects for which there is no left \(\sharp\)-ancestor. Thus, these source maps will constitute the left dangling wires of the resulting diagram. Meanwhile, \(S^{\prime}\) represents the source maps which _do_ have a left \(\sharp\)-ancestor, and are therefore part of a composition: these will indicate the \(\blacksquare\)-nodes in the graph which need to be quotiented in the result. Similarly, \(T\) and \(T^{\prime}\) are the unbound and bound target maps, respectively._
Proof.: Induction.
**Base Case: Single Leaf**: Let \(e\) be a tree consisting of a single leaf, a diagram \(d=(s,t,G)\) of type \(A\to B\). Then \(a_{L}=\mathsf{id}\), so \(p_{s}=\mathsf{id}\), and \(a_{R}=\mathsf{id}\) so \(p_{t}=\mathsf{id}\). Moreover, \(S=A\), \(S^{\prime}=I\), \(T=B\), and \(T^{\prime}=I\), and one can calculate as follows.
\[e_{s}=\iota_{0}^{A,I}\,\sharp\,\mathsf{id}\,\sharp\,s=s\qquad\qquad e_{t}=\iota_{1}^{I,B}\,\sharp\,\mathsf{id}\,\sharp\,t=t\]

**Inductive Case**: Now suppose the statement holds for the subtrees \(l\) and \(r\) of \(e\). Expressing \(a_{L}\) (and symmetrically \(a_{R}\)) in terms of the ancestor maps of \(l\) and \(r\), we have \(a_{L}=\cdots\) [diagrammatic calculation omitted],
completing the proof.
**Proposition 8.7**.: _Let \(e\) be a binary tree of \(n\) nodes, and let \(a_{L}\) and \(a_{R}\) denote the ancestor maps of \(e\). Computing the wiring maps of \(e\) takes \(O(n)\) sequential and \(O(\log n)\) parallel time._
Proof.: Assume the ancestor maps \(a_{L}\) and \(a_{R}\) are provided. Notice that \(p_{s}\) and \(p_{t}\) are obtained by a stable _integer_ sort of the array data of \(a_{L}\) and \(a_{R}\). This has \(O(n)\) sequential and \(O(\log n)\) PRAM time complexity: one can use e.g., counting sort in the sequential case and radix sort in the parallel case. The result then follows by the linear (sequential) and \(O(1)\) (parallel) time complexity of finite function composition.
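To illustrate the sorting step on concrete data: the permutation \(p_{s}\) is just a stable argsort of the ancestor keys. Below is a minimal numpy sketch, with hypothetical keys not drawn from any example in the text; numpy's `kind="stable"` plays the role of \(\mathsf{sort}_{\langle a_{L},\mathsf{id}\rangle}\), breaking ties by original position.

```python
import numpy as np

# Stable sort of leaves by their left-ancestor keys a_L (keys are
# hypothetical). Ties are broken by original leaf position.
a_L = np.array([0, 1, 0, 2])
p_s = np.argsort(a_L, kind="stable")
print(p_s)  # [0 2 1 3]: the two unbound leaves come first, in order
```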
### Computing left and right matching ancestors
In Section 8.1, we reduced the problem of computing the wiring maps (Definition 7.2) to a simple function on trees: computing the 'ancestor maps' (Definition 8.3). Although in the sequential case there is a straightforward top-down algorithm, it is not easily parallelizable.
In this section, we give a data-parallel algorithm for computing the closest ancestor matching a predicate for which a node is in either the left or right subtree. This is a generalisation of the problem of computing the ancestor maps. Our algorithm has \(O(\log n)\) PRAM CREW time complexity, which we prove in Corollary 8.16.
It is first necessary to represent trees in a 'parallel friendly' way. A number of suitable array-based representations of trees exist (e.g., [10, p. 128]) having constant-time operations for computing parent and child indices of a given node. Thus, instead of fixing a specific array-based representation of a tree, we simply assume that the following finite functions can be computed in linear (sequential) and constant (parallel) time.
**Definition 8.8**.: _Let \(e\) be a binary tree of size \(n=2m-1\) (i.e., \(e\) has \(m\) leaves and \(m-1\) internal nodes). Denote by \(\mathtt{parent}:n\to n+1\) the finite function mapping a node to its parent, or \(n\) if none exists._

_We write \(\mathtt{isRight}:n\to 2\) for the predicate function returning \(1\) if a node is the right child of its parent, and \(0\) otherwise. Similarly, write \(\mathtt{isLeft}:n\to 2\) for the predicate returning \(1\) if a node is the left child of its parent._
**Example 8.9**.: _The actions of parent, isRight, and isLeft are illustrated with the following example, in which the nodes of the tree are labeled \(0\ldots 4\)._
\[\begin{array}{ccccc}\text{node}&0&1&2&3&4\\ \text{parent}&1&3&1&5&3\\ \text{isRight}&0&0&1&0&1\\ \mathtt{isLeft}&1&1&0&0&0\end{array}\]
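These functions are easy to derive in one pass from, e.g., an encoding by child arrays. The sketch below assumes one such hypothetical layout (`left`/`right` child arrays with sentinel \(n\) for 'no child') and reproduces the table of Example 8.9; the loop is written sequentially for clarity, though each entry is independent.

```python
import numpy as np

n = 5
left  = np.array([5, 0, 5, 1, 5])   # left[i]  = left child of i, or n
right = np.array([5, 2, 5, 4, 5])   # right[i] = right child of i, or n

parent   = np.full(n, n)            # parent(i); sentinel n for the root
is_right = np.zeros(n, dtype=int)
is_left  = np.zeros(n, dtype=int)
for i in range(n):
    if left[i] != n:
        parent[left[i]] = i
        is_left[left[i]] = 1
    if right[i] != n:
        parent[right[i]] = i
        is_right[right[i]] = 1

print(parent)    # [1 3 1 5 3]
print(is_right)  # [0 0 1 0 1]
print(is_left)   # [1 1 0 0 0]
```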
Recall that the purpose of the ancestor maps (Definition 8.3) is to compute the closest ancestor labeled \(\sharp\) for each leaf in the tree. The algorithm presented here solves a slightly more general problem: computing the closest ancestor matching a predicate \(P\) for which a node is in the left or right subtree. Inductive definitions of these functions are given below.
**Definition 8.10** (Left/Right \(P\)-Ancestor Function).: _Let \(e\) be a binary tree of size \(n\), and let \(P:n\to 2\) be a predicate on nodes. The **Left \(P\)-Ancestor** function is denoted \(\mathtt{ancestor}_{L}\), and defined below._
\[\mathtt{ancestor}_{L}:n\to n+1\]
\[\mathtt{ancestor}_{L}(i)=\begin{cases}n&\mathtt{root}(i)\\ \mathtt{parent}(i)&P(\mathtt{parent}(i))\wedge\mathtt{isRight}(i)\\ \mathtt{ancestor}_{L}(\mathtt{parent}(i))&\text{otherwise}\end{cases}\]
_Similarly, the **Right \(P\)-Ancestor** function is given by_
\[\mathtt{ancestor}_{R}:n\to n+1\]
\[\mathtt{ancestor}_{R}(i)=\begin{cases}n&\mathtt{root}(i)\\ \mathtt{parent}(i)&P(\mathtt{parent}(i))\wedge\mathtt{isLeft}(i)\\ \mathtt{ancestor}_{R}(\mathtt{parent}(i))&\text{otherwise}\end{cases}\]
The following example illustrates the action of the \(\mathtt{ancestor}_{L}\) map.
**Example 8.11**.: _Pictured below is a tree in which each node \(i\) is drawn in black if \(P(i)=1\) and white otherwise, and the value of \(\mathtt{ancestor}_{L}(i)\) is shown for each node._
\[\begin{array}{ccccc}\text{node}&0&1&2&3&4\\ P&0&1&0&0&0\\ \mathtt{ancestor}_{L}&5&5&1&5&5\end{array}\]
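A direct recursive transcription of Definition 8.10 reproduces this table. The sketch below uses the array conventions of Definition 8.8 (the argument order is an illustrative assumption):

```python
def ancestor_L(i, parent, is_right, P, n):
    p = parent[i]
    if p == n:                  # i is the root
        return n
    if P[p] and is_right[i]:    # closest left P-ancestor is the parent
        return p
    return ancestor_L(p, parent, is_right, P, n)

parent   = [1, 3, 1, 5, 3]      # the tree of Example 8.9
is_right = [0, 0, 1, 0, 1]
P        = [0, 1, 0, 0, 0]
print([ancestor_L(i, parent, is_right, P, 5) for i in range(5)])
# -> [5, 5, 1, 5, 5], matching the table; ancestor_R is symmetric,
#    with is_left in place of is_right.
```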
Our approach to efficiently computing \(\mathtt{ancestor}_{L}\) and \(\mathtt{ancestor}_{R}\) is 'bottom-up'. We first transform the tree of size \(n\) to a functional graph with \(2n+1\) nodes, and then iterate its adjacency function until the result converges. We illustrate this process with the example below.
**Example 8.12**.: _Pictured below is a tree with \(n=5\) nodes, and its 'left ancestor graph', with \(2n+1=11\) nodes. [tree and ancestor-graph figure omitted; graph nodes are numbered \(0\ldots 10\)]_
_For each node \(i\) in the tree there are two nodes in the graph, \(2i\) and \(2i+1\). The edges of the graph represent the parent relation, split into two distinct cases: when a child is a left or right child of its parent. Left children of the node \(i\) are adjacent to the graph node \(2i\), while right children are adjacent to \(2i+1\). When \(P(i)\) holds, the corresponding graph node \(2i+1\) points to itself instead of to its parent. Thus, when iterating the adjacency relation, eventually all right-descendants are 'captured' by their closest left-ancestor._
This graph is defined more formally below.
**Definition 8.13** (Ancestor Graph).: _Let \(e\) be a tree with \(m\) leaves and \(n=2m-1\) nodes, and \(P:n\to 2\) a predicate on nodes. The **Left Ancestor Graph** of \(e\) has \(2n+1\) vertices, and edges given by the following adjacency function._
\[r_{L}:2n+1\to 2n+1\] \[r_{L}(j)=\begin{cases}j&j=2n\\ j&\mathtt{odd}(j)\wedge P\left(\left\lfloor\frac{j}{2}\right\rfloor\right)\\ 2\cdot\mathtt{parent}\left(\left\lfloor\frac{j}{2}\right\rfloor\right)+ \mathtt{isRight}\left(\left\lfloor\frac{j}{2}\right\rfloor\right)&\text{ otherwise}\end{cases}\]
_Symmetrically, the **Right Ancestor Graph** has adjacency function_
\[r_{R}:2n+1\to 2n+1\]
\[r_{R}(j)=\begin{cases}j&j=2n\\ j&\mathtt{odd}(j)\wedge P\left(\left\lfloor\frac{j}{2}\right\rfloor\right)\\ 2\cdot\mathtt{parent}\left(\left\lfloor\frac{j}{2}\right\rfloor\right)+\mathtt{isLeft}\left(\left\lfloor\frac{j}{2}\right\rfloor\right)&\text{otherwise}\end{cases}\]
We can now relate the ancestor graph and the functions \(\mathtt{ancestor}_{L}\) and \(\mathtt{ancestor}_{R}\).
**Proposition 8.14**.: _For all nodes \(i\) of depth \(\mathtt{depth}(i)\) in a tree \(e\)_
\[\mathtt{ancestor}_{L}(i)=\left\lfloor\frac{r_{L}^{k}(2i)}{2}\right\rfloor\qquad\qquad\mathtt{ancestor}_{R}(i)=\left\lfloor\frac{r_{R}^{k}(2i)}{2}\right\rfloor\]

_for \(k>\mathtt{depth}(i)\), where \(r_{L}\) and \(r_{R}\) are the adjacency functions of the respective ancestor graphs._
Proof.: Top-down induction by depth. We write \(r\) for \(r_{L}\) and prove only the former case, omitting the symmetric proof for the \(\mathtt{ancestor}_{R}\) function.
**Base Case**: Suppose \(\mathtt{root}(i)\) so \(\mathtt{depth}(i)=0\). We have \(r(2i)=2\cdot\mathtt{parent}(i)+\mathtt{isRight}(i)=2\cdot\mathtt{parent}(i)=2n\), and so
\[\left\lfloor\frac{r(2i)}{2}\right\rfloor=n=\mathtt{ancestor}_{L}(i)\]
Moreover, \(2n\) is a fixed point for \(r\), so this holds for any \(r^{k}\) where \(k\geq 1\).
**Inductive Case 1**: In the second case, we have \(P(\mathtt{parent}(i))\wedge\mathtt{isRight}(i)\). Suppose \(\mathtt{depth}(i)=m\), where \(m\geq 1\) because \(i\) is not the root node. We have
\[r(2i)=2\cdot\mathtt{parent}(i)+\mathtt{isRight}(i)=2\cdot\mathtt{parent}(i)+1\]
Now let \(k\geq 1\) so that \(m+k>\texttt{depth}(i)\). Then
\[\left\lfloor\frac{r^{m+k}(2i)}{2}\right\rfloor=\left\lfloor\frac{r^{m+k-1}(2 \cdot\texttt{parent}(i)+1)}{2}\right\rfloor=\left\lfloor\frac{2\cdot\texttt{ parent}(i)+1}{2}\right\rfloor=\texttt{parent}(i)=\texttt{ ancestor}_{L}(i)\]
as required.
**Inductive Case 2**: In the final case, let \(i\) be a node, and let \(k>\texttt{depth}(i)\) so that \(k-1>\texttt{depth}(parent(i))\) and the inductive hypothesis is as below.
\[\left\lfloor\frac{r^{k-1}(2\cdot\texttt{parent}(i))}{2}\right\rfloor=\texttt{ ancestor}_{L}(\texttt{parent}(i))\]
Now split into two further cases: (a) when \(\neg\texttt{isRight}(i)\) and (b) when \(\neg P(\texttt{parent}(i))\wedge\texttt{isRight}(i)\).
**Case (a)**: Observe that \(r(2i)=2\cdot\texttt{parent}(i)+\texttt{isRight}(i)=2\cdot\texttt{parent}(i)\). Then
\[r^{k}(2i)=r^{k-1}(r(2i))=r^{k-1}(2\cdot\texttt{parent}(i))\]
from which we can derive
\[\left\lfloor\frac{r^{k}(2i)}{2}\right\rfloor=\left\lfloor\frac{r^{k-1}(2\cdot \texttt{parent}(i))}{2}\right\rfloor=\texttt{ ancestor}_{L}(\texttt{parent}(i))= \texttt{ ancestor}_{L}(i)\]
**Case (b)**: When \(\neg P(\texttt{parent}(i))\wedge\texttt{isRight}(i)\), we have \(r(2i)=2\cdot\texttt{parent}(i)+\texttt{isRight}(i)=2\cdot\texttt{parent}(i)+1\) and
\[r(2\cdot\texttt{parent}(i)+1) =2\cdot\texttt{parent}\left(\left\lfloor\frac{2\cdot\texttt{ parent}(i)+1}{2}\right\rfloor\right)+\texttt{isRight}\left(\left\lfloor\frac{2\cdot \texttt{parent}(i)+1}{2}\right\rfloor\right)\] \[=2\cdot\texttt{parent}(\texttt{parent}(i))+\texttt{isRight}( \texttt{parent}(i))\] \[=r(2\cdot\texttt{parent}(i))\]
where we use that \(\neg P(\texttt{parent}(i))\) in the first step. One can then derive the result immediately as follows.
\[\left\lfloor\frac{r^{k}(2i)}{2}\right\rfloor=\left\lfloor\frac{r^{k-1}(2 \cdot\texttt{parent}(i)+1)}{2}\right\rfloor=\left\lfloor\frac{r^{k-1}(2\cdot \texttt{parent}(i))}{2}\right\rfloor =\texttt{ ancestor}_{L}(\texttt{parent}(i))\] \[=\texttt{ ancestor}_{L}(i)\]
**Corollary 8.15**.: _Let \(e\) be a tree of size \(n\). Then_
\[\texttt{ancestor}_{L}(i)=\left\lfloor\frac{r^{n}(2i)}{2}\right\rfloor\]
Proof.: The maximum depth of a leaf in a tree of \(n\) nodes is \(n-1\). Thus \(n>\texttt{depth}(i)\) for all nodes \(i\), and so the equality holds by Proposition 8.14.
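For concreteness, the whole pipeline — building \(r_{L}\) from Definition 8.13 and pointer jumping until the iterate dominates \(r^{n}\) — fits in a few lines of sequential, purely illustrative Python; on the tree of Example 8.11 it recovers \(\mathtt{ancestor}_{L}=(5,5,1,5,5)\).

```python
import math

def left_ancestors(parent, is_right, P, n):
    size = 2 * n + 1
    r = list(range(size))
    for j in range(size - 1):              # adjacency array of Def. 8.13
        i = j // 2
        if j % 2 == 1 and P[i]:
            r[j] = j                       # captured: self-loop at 2i+1
        else:
            r[j] = 2 * parent[i] + is_right[i]
    for _ in range(math.ceil(math.log2(size))):
        r = [r[r[j]] for j in range(size)]  # repeated squaring: r := r ; r
    return [r[2 * i] // 2 for i in range(n)]  # Corollary 8.15

print(left_ancestors([1, 3, 1, 5, 3], [0, 0, 1, 0, 1], [0, 1, 0, 0, 0], 5))
# -> [5, 5, 1, 5, 5]
```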
**Corollary 8.16**.: _Let \(e\) be a tree of size \(n\). Computing the finite functions \(\texttt{ancestor}_{L}\) and \(\texttt{ancestor}_{R}\) for \(e\) has \(O(\log n)\) PRAM CREW time complexity._
Proof.: Let \(r\) denote either \(r_{L}\) or \(r_{R}\). Computing \(r\) is \(O(1)\) because each entry of the underlying array can be computed in parallel in constant time. A composition \(r\,\sharp\,r\) likewise takes \(O(1)\) parallel time, so by repeated squaring one obtains an iterate \(r^{k}\) with \(k\geq n\) after \(O(\log n)\) squarings, and the result follows from Corollary 8.15.
**Remark 8.17**.: _The technique of'repeated squaring' a functional graph is known as 'pointer jumping' in the parallel computing literature. See for example [19, 2.2]. Note also that while its time complexity is logarithmic, the algorithm given here is not work-efficient and requires \(O(n\log n)\) operations. We expect this can be improved to \(O(n)\), but leave this to future work._
Finally, note that the ancestor maps \(a_{L}\) and \(a_{R}\) of Definition 8.3 are not exactly the same functions as those in Definition 8.10. However, it is a straightforward matter of elementwise arithmetic to transform between the two finite functions, and so we omit the details.
## 9 Applying Functors to Diagrams
We now give an efficient, parallel algorithm for applying strict symmetric monoidal hypergraph functors to diagrams. We will hereafter just say 'functor' and assume the strict symmetric monoidal hypergraph structure. Note that in what follows, we assume arbitrary signatures \(\Sigma\) and \(\Omega\), which optionally include the chosen Special Frobenius structure \(\mathbf{Frob}\).
The algorithm given here is defined _without_ having to first decompose a diagram into a \(\Sigma\)-term. The example below illustrates the importance of this for the case of diagrams in \(\mathsf{Diagram}_{\Sigma}\) which are not equipped with hypergraph structure.
**Example 9.1**.: _Suppose we are working in the category \(\mathsf{Free}_{\mathsf{CMon}}\). The terminal map \(!_{N}:N\to 1\) is represented as the following string diagram._
_[string diagram omitted] Represented combinatorially as a diagram, this has \(O(N)\) internal wires, generators, and edges. However, suppose we decompose it by taking 'vertical slices': then the number of terms in the decomposition is \(O(N^{2})\), since each of the \(O(N)\) slices consists of a single generator tensored with \(O(N)\) identity wires. [string diagrams omitted]_

The algorithm given here therefore works directly with the combinatorial representation. In what follows, we additionally require that the typing relation \(\tau\) be a _function_. (This latter assumption is somewhat more restrictive than required; we discuss a weaker condition in Section 11.)
A functor \(F:\mathsf{Diagram}_{\Sigma}\to\mathsf{Diagram}_{\Omega}\) consists of two maps. A function on _objects_\(F_{0}:\Sigma_{0}^{*}\to\Omega_{0}^{*}\) and a function on _arrows_\(F_{1}:\mathsf{Diagram}_{\Sigma}(A,B)\to\mathsf{Diagram}_{\Omega}(F(A),F(B))\). In this section, we introduce an auxiliary data structure, the'segmented finite function', which will allow us to encode these maps in a manner suitable for parallel application.
To motivate this, consider the object map \(F_{0}\). Since \(F\) is a _strict_ symmetric monoidal hypergraph functor between categories of diagrams, we have the following.
\[F_{0}\left(\sum_{i\in N}A_{i}\right)=\sum_{i\in N}F_{0}(A_{i})\]
We may therefore consider the object map of \(F\) to have the type \(F_{0}:\Sigma_{0}\to\Omega_{0}^{*}\) since (by strictness) it is completely defined by its action on generating objects. Similarly, we think of the map on arrows as having the type \(F_{1}:\Sigma_{1}\to\mathsf{Diagram}_{\Omega}\).3
Footnote 3: Note that strictly speaking morphisms of \(\mathsf{Diagram}_{\Omega}\) are equivalence classes, but the map \(F_{1}\) maps a generator to specific diagram. We gloss over this detail to avoid introducing new notation.
Now, for each \(i\in\Sigma_{0}\), we can think of \(F_{0}(i):\Omega_{0}^{*}\) as a finite function \(F_{0}(i):s(i)\to\Omega_{0}\) where \(s:\Sigma_{0}\to K\) is a function denoting the length (source) of each list \(s(i)=|F_{0}(i)|\), and \(K\) is the maximum such length. Encoding the object map \(F_{0}\) will mean storing the data of these finite functions in a 'flat' array structure, with the array \(s\) encoding their sources.
**Definition 9.2**.: _Let \(f_{i}:A_{i}\to B_{i}\) be a collection of \(N\) finite functions. A **segmented finite function** encodes their sources, targets, and array data as the following three maps._
\[\mathsf{sources}:N\to\max_{i\in N}\mathsf{source}(f_{i}) \mathsf{targets}:N\to\max_{i\in N}\mathsf{target}(f_{i})\] \[\mathsf{sources}(i)=\mathsf{source}(f_{i}) \mathsf{targets}(i)=\mathsf{target}(f_{i})\]
\[\mathsf{values}:\sum_{i\in N}\mathsf{sources}(i)\to\max_{i\in N} \mathsf{target}(f_{i})\] \[\mathsf{values}=\sum_{i\in N}(f_{i}\,\sharp\,\iota_{0})\]
**Remark 9.3**.: _In parallel programming terminology, \(\mathsf{values}\) is a segmented array whose segment sizes are given by \(\mathsf{sources}\). Note that we also store the \(\mathsf{targets}\) of each morphism, which will be necessary to take tensor products._
Now, given an indexing function, we can then take arbitrarily ordered coproducts of the functions \(f_{i}\) as follows.
**Proposition 9.4**.: _Let \(f_{i}:A_{i}\to B\) be a collection of \(N\) finite functions sharing a codomain, and \(x:X\to N\) a finite function. Then the \(x\)-indexed coproduct is calculated as follows._
\[\sum_{i\in X}f_{x(i)}:\sum_{i\in X}A_{x(i)}\to B\]
\[\sum_{i\in X}f_{x(i)}=\left(\sum_{i\in X}\iota_{x(i)}\right)\,\sharp\,\mathsf{values}\]
Proof.: We may calculate \(\left(\sum_{i\in X}\iota_{x(i)}\right)\,\sharp\,\mathsf{values}=\sum_{i\in X}f_{x(i)}\,\sharp\,\iota_{0}=\sum_{i\in X}f_{x(i)}\) because \(\iota_{0}=\mathsf{id}\) since the targets of all \(f_{i}\) are equal. Note also that the source \(\sum_{i\in X}A_{x(i)}\) can be computed as the sum of the entries of the array \(x\,\sharp\,\mathsf{sources}\).
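To make this concrete, here is a small numpy sketch of a segmented finite function and its \(x\)-indexed coproduct; the gather loop is sequential and stands in for the parallel routine developed below (the names are illustrative, not from an accompanying library):

```python
import numpy as np

def indexed_coproduct(sources, values, x):
    # offsets[i] marks where segment i begins inside the flat values array
    offsets = np.concatenate(([0], np.cumsum(sources)))
    return np.concatenate([values[offsets[i]:offsets[i + 1]] for i in x])

# f_0 = <2, 0> and f_1 = <1>, both with codomain 3; choose x = <1, 0, 1>
sources = np.array([2, 1])
values  = np.array([2, 0, 1])
print(indexed_coproduct(sources, values, np.array([1, 0, 1])))  # [1 2 0 1]
```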
This kind of indexed coproduct is precisely what we need to apply the object map \(F_{0}\) of a functor to _Frobenius spiders_. We begin by defining precisely how the object map is encoded.
**Definition 9.5**.: _Given finite signatures \(\Sigma\) and \(\Omega\) and a strict symmetric monoidal hypergraph functor \(F:\mathsf{Diagram}_{\Sigma}\to\mathsf{Diagram}_{\Omega}\), the **object map encoding** of \(F\) is the segmented finite function with \(\mathsf{sources}=s\), \(\mathsf{values}=v\), and \(\mathsf{targets}(i)=\Omega_{0}\), where_
\[s:\Sigma_{0}\to K v:\sum_{i\in\Sigma_{0}}s(i)\to\Omega_{0}\] \[s(i)=|F_{0}(i)| v=\sum_{i\in\Sigma_{0}}F_{0}(i)\]
In the above, the injections of the coproduct \(\mathsf{source}(v)=\sum_{i\in\Sigma_{0}}s(i)\) have the type \(\iota_{x}:s(x)\to\sum_{i\in\Sigma_{0}}s(i)\). By definition of the coproduct, we have the following.
\[\iota_{x}\,\sharp\,v:s(x)\to\Omega_{0}\]
\[\iota_{x}\,\sharp\,v=F_{0}(x)\]
In other words, precomposing the segmented array of values with an injection amounts to taking a slice of the array corresponding to a specific'segment'.
With the object map \(F_{0}\) suitably represented, we can now show how to map the data of a Frobenius spider \(f=(s,t,\mathsf{L}(B))\) in \(\mathsf{Diagram}_{\Sigma}\) to its corresponding spider in \(\mathsf{Diagram}_{\Omega}\). Recall that any Frobenius spider can be regarded as the composite of two _half-spiders_ (Definition 3.5): \(f=(s,\mathsf{id},\mathsf{L}(B))\,\sharp\,(\mathsf{id},t,\mathsf{L}(B))\). It therefore suffices to determine the action of \(F\) on half-spiders.
Since half-spiders are in the image of the functor \(\mathsf{S}\), we may regard them simply as morphisms of \(\mathsf{Wires}\). The action of \(F\) on such morphisms is as follows.
**Proposition 9.6**.: _Let \(f:A\to B\) be a morphism of \(\mathsf{Wires}\), and suppose \(F\) is a strict symmetric monoidal hypergraph functor whose object encoding is the segmented finite function with \(\mathsf{sources}=s\) and \(\mathsf{values}=v\). Then \(F(\mathsf{S}(f))=\mathsf{S}(f^{\prime}):A^{\prime}\to B^{\prime}\), where_
\[B^{\prime}(w_{n}):\sum_{b\in B(W)}s(B(w_{n})(b))\to\Omega_{0} f^{\prime}:\sum_{a\in A(W)}s(A(w_{n})(a))\to\sum_{b\in B(W)}s(B(w_{n})(b))\] \[B^{\prime}(w_{n}):=\left(\sum_{b\in B(W)}\iota_{B(w_{n})(b)} \right)\,\sharp\,v f^{\prime}:=\sum_{a\in A(W)}\iota_{f_{W}(a)}\]
_and \(A^{\prime}(w_{n})=f^{\prime}\,\sharp\,B^{\prime}(w_{n})\) is fixed by naturality._
Proof.: The definition of \(B^{\prime}\) is required by the strictness assumption. We must then verify that \(A^{\prime}(w_{n})=f^{\prime}\,\sharp\,B^{\prime}(w_{n})\), which follows by naturality of \(f\):
\[f^{\prime}_{W}\,\sharp\,B^{\prime}(w_{n}):\sum_{a\in A(W)}s(A(w_{n})(a))\to\Omega_{0}\]
\[f^{\prime}_{W}\,\sharp\,B^{\prime}(w_{n})=\left(\sum_{a\in A(W)}\iota_{f_{W}(a)}\right)\,\sharp\,\left(\sum_{b\in B(W)}\iota_{B(w_{n})(b)}\right)\,\sharp\,v=\left(\sum_{a\in A(W)}\iota_{B(w_{n})(f(a))}\right)\,\sharp\,v=\left(\sum_{a\in A(W)}\iota_{A(w_{n})(a)}\right)\,\sharp\,v\]
which is precisely \(A^{\prime}(w_{n})\).
Lastly, we must check that the hypergraph structure is preserved, and that \(F\) satisfies the functor laws. This is done by induction: in the base case, it is straightforward to verify that the definition holds for generating operations in the image of \(\mathsf{S}\). The inductive step for tensor product is similarly
straightforward. One can verify the case for composition as follows. Suppose \(A\stackrel{{ f}}{{\rightarrow}}B\stackrel{{ g}}{{\rightarrow}}C\) are morphisms of \(\mathsf{Wires}\). Then
\[f^{\prime}\,\sharp\,g^{\prime}:\sum_{a\in A(W)}s(B(w_{n})(f(a)))\to\sum_{c\in C(W)}s(C(w_{n})(c))\]
\[f^{\prime}\,\sharp\,g^{\prime}=\left(\sum_{a\in A(W)}\iota_{f(a)}\right)\,\sharp\,\left(\sum_{b\in B(W)}\iota_{g(b)}\right)=\sum_{a\in A(W)}\iota_{g(f(a))}=(f\,\sharp\,g)^{\prime}\]
Finally, since \(F\) is a hypergraph functor, we have \(F(\mathsf{S}(f))=\mathsf{S}(f^{\prime})\) by uniqueness of \(\mathsf{S}\).
**Remark 9.7**.: _Note carefully that the injections in the definition of \(B^{\prime}\) and \(f^{\prime}\) above are not the same. The indices of injections used in the definition of \(B^{\prime}\) range over the segments of \(v\), whereas those in the definition of \(f^{\prime}\) range over elements of \(B(W)\)._
The object \(B^{\prime}(w_{n})\) of the mapped spider is precisely a \(B(w_{n})\)-indexed coproduct of the values table. Similarly, the value of \(f^{\prime}\) is given by 'indexed injections'. In order to compute both, we use the following algorithm. Given a finite function \(s:N\to K\) and a map \(x:X\to N\) thought of as an array of indices, injections(s, x) computes a coproduct of injections \(\iota_{x(0)}\cdots\iota_{x(A-1)}\).
```python
def injections(s, x):
    p = prefix_sum(s)
    r = segmented_arange(x >> s)
    return FiniteFunction(sum(s), r.table + repeat(x >> p, x >> s).table)
```
Note carefully that the \(+\) operation in the above algorithm denotes the pointwise sum of integer arrays, not the coproduct. Given the object map encoding of \(F\), one can then compute \(f^{\prime}=\mathsf{injections(B(w_{n})>>s,\,f)}\) and \(B^{\prime}(w_{n})=\mathsf{injections(s,\,B(w_{n}))>>v}\), so that (per Remark 9.7) the indices passed range over \(B(W)\) and over the segments of \(v\), respectively.
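For readers without an accompanying implementation, the pseudocode above can be realised with numpy as follows; `prefix_sum` becomes an exclusive cumulative sum, composition `x >> s` becomes fancy indexing `s[x]`, and `repeat` is `np.repeat`. This is a sequential sketch of the same computation, under assumed (not canonical) conventions:

```python
import numpy as np

def segmented_arange(sizes):
    # <0..sizes[0]-1, 0..sizes[1]-1, ...> as one flat array
    ends = np.cumsum(sizes)
    starts = ends - sizes
    return np.arange(ends[-1]) - np.repeat(starts, sizes)

def injections(s, x):
    p = np.concatenate(([0], np.cumsum(s)))[:-1]  # exclusive prefix_sum(s)
    r = segmented_arange(s[x])                    # x >> s
    return r + np.repeat(p[x], s[x])              # shift by segment offsets

# segments of sizes s = <2, 3>; the coproduct of injections for x = <1, 0>
print(injections(np.array([2, 3]), np.array([1, 0])))  # [2 3 4 0 1]
```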
**Proposition 9.8**.: _Let \(s:N\to K\) and \(x:X\to N\) be finite functions. The sequential time complexity of computing injections is \(O(N)+O(X)+O(\mathsf{sum}(x\mathbin{\,\sharp\,}s))\) and its PRAM CREW complexity is \(O(\log N)+O(\log X)\)._
Proof.: In the sequential case, both function compositions are \(O(X)\), prefix sum is \(O(N)\), and remaining operations are \(O(\mathsf{sum}(x\mathbin{\,\sharp\,}s))\). In the parallel case, the same operations are \(O(\log N)\), \(O(1)\), and \(O(\log X)\).
Note that when computing \(f^{\prime}\) and \(B^{\prime}(w_{n})\) using the injections function, \(\mathsf{sum}(x\,\sharp\,s)\) is proportional to \(B^{\prime}(W)\), and so computing spiders is linear (sequential) in the size of the result.
### Mapping Tensors of Generators
Having defined the action of \(F\) on the _spiders_ of a Frobenius decomposition, it now remains to define its action on a _tensoring of generators_. Such a tensoring is a diagram of the form \(g=(\iota_{0},\iota_{1},G)\) whose data is defined up to isomorphism by the list \(G(x_{n}):G(X)\rightarrow\Sigma_{1}\). In other words, the isomorphism class of diagrams represented by \(g\) is in the image of the \(\Sigma\)-term \(\bigotimes_{i\in G(X)}G(x_{n})(i)\) under \(F\), and so by strictness it is required that \(F(g)\cong\bigotimes_{i\in G(X)}F_{1}(G(x_{n})(i))\).
As with the object map, the action of \(F\) on arrows is then completely defined by its action on operations \(\Sigma_{1}\). We can therefore consider it to have the type \(F_{1}:\Sigma_{1}\rightarrow\mathsf{Diagram}_{\Omega}\). This leads to a straightforward implementation in the sequential case. Encode the data of \(F_{1}\) as a list of length \(|\Sigma_{1}|\) whose \(i^{\text{th}}\) entry is the diagram \(F_{1}(i)\), then in pseudocode, we might write
```python
def apply_functor(F1: List[Diagram], g: Diagram):
    diagrams = [F1[g.G.xn(i)] for i in range(0, g.X)]
    return Diagram.tensor_list(diagrams)
```
The \(N\)-fold tensor of diagrams is then a straightforward extension of the binary case. Recall that a diagram is essentially a collection of finite maps, upon which coproducts and tensor products are pointwise. Thus, taking the tensor product of a list of diagrams amounts to taking _finite_
coproducts instead of binary coproducts. The parallel case is'morally' the same, but obtaining a proper logarithmic time algorithm takes special care: computing the finite tensor product of a list of finite functions does not immediately translate to the parallel case. Consider the following naive implementation.
```python
def coproduct_list(fs: List[FiniteFunction]):
    return FiniteFunction(fs[0].cod, concatenate([f.table for f in fs]))
```
The problem is that constructing the argument to concatenate takes time linear in the size of fs. Since the length of fs equals \(G(X)\), this takes time linear in the number of operations in the diagram, and therefore does not enjoy a speedup in the parallel case. The same problem exists for the tensor product of finite maps.
The solution is to encode the data of \(F_{1}\) not as a list of diagrams, but as a 'diagram of lists'. More precisely, we instead encode each diagram component as a separate segmented finite function. From this representation we will be able to extract arbitrary coproducts and tensors of the'segments' to construct the result.
The encoding of \(F_{1}\) is a collection of segmented finite functions: one for each of the source and target maps, and one for each of the components \(G(f)\) for \(f\in\mathcal{S}\).
**Definition 9.9**.: _Let \(\Sigma\) and \(\Omega\) be finite signatures, and \(F:\mathsf{Diagram}_{\Sigma}\to\mathsf{Diagram}_{\Omega}\) a strict symmetric monoidal hypergraph functor, and denote by \(d_{i}=(s_{i},t_{i},G_{i})\) the collection of diagrams \(d_{i}=F_{1}(i)\) for \(i\in\Sigma_{1}\). The **arrow map encoding** of \(F\) consists of segmented finite functions for \(s_{i}\), \(t_{i}\), and \(G_{i}(f)\) for each component \(f\) in the schema \(\mathcal{S}\)._
In order to compute tensor products of diagrams, it will be necessary to compute indexed tensor products from segmented finite functions.
**Proposition 9.10**.: _Let \(f_{i}:A_{i}\to B_{i}\) be a collection of \(N\) finite functions, and \(x:X\to N\) a finite function indexing the collection. Then the indexed tensor product is given by the array_
\[\bigotimes_{i\in X}f_{x(i)}:\sum_{i\in X}A_{x(i)}\to\sum_{i\in X}B_{x(i)}\]
\[\bigotimes_{i\in X}f_{x(i)}=\sum_{i\in X}f_{x(i)}\oplus\mathsf{repeat}(p,\;x\,\sharp\,\mathsf{sources})\]

_where \(\oplus\) denotes elementwise addition of natural numbers and \(p\) denotes the partial sums of codomains \(p(i)=\sum_{j<i}\mathsf{targets}(x(j))\)._
Proof.: Recall that the tensor product can be written in terms of the coproduct as below.
\[\bigotimes_{i\in X}f_{x(i)}=\sum_{i\in X}(f_{x(i)}\,\sharp\,\iota_{i})\]

Here, each injection has type \(\iota_{i}:B_{x(i)}\to\sum_{j\in X}B_{x(j)}\), with array data given by \(\langle p(i),p(i)+1,\dots,p(i)+B_{x(i)}-1\rangle\). The array data of the composition \(f_{x(i)}\,\sharp\,\iota_{i}\) is therefore that of \(f_{x(i)}\) plus the constant \(p(i)\), which we may rewrite as below.

\[\sum_{i\in X}(f_{x(i)}\,\sharp\,\iota_{i})=\sum_{i\in X}f_{x(i)}\oplus\mathsf{repeat}(p,\;x\,\sharp\,\mathsf{sources})\]
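A numpy sketch of this calculation, assuming the collection \(f_{i}\) is stored as flat `sources`, `targets` and `values` arrays as in Definition 9.2 (sequential gather for the coproduct; the names are illustrative only):

```python
import numpy as np

def indexed_tensor(sources, targets, values, x):
    starts = np.concatenate(([0], np.cumsum(sources)))[:-1]
    coprod = np.concatenate(
        [values[starts[i]:starts[i] + sources[i]] for i in x])
    p = np.concatenate(([0], np.cumsum(targets[x])))[:-1]  # codomain offsets
    return coprod + np.repeat(p, sources[x])

# f_0 = id_2 = <0, 1> and f_1 = id_1 = <0>; the x-indexed tensor for
# x = <0, 1, 0> is the identity <0, 1, 2, 3, 4>
sources = np.array([2, 1])
targets = np.array([2, 1])
values  = np.array([0, 1, 0])
print(indexed_tensor(sources, targets, values, np.array([0, 1, 0])))
```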
Computing \(F(h)\) for a tensoring \(h=(\iota_{0},\iota_{1},H)\) is then straightforward. Since coproducts and tensor products are pointwise, each finite function comprising the data of \(F(h)\) is the \(H(x_{n})\)-indexed coproduct (or tensor product) of its corresponding segmented finite function.
**Proposition 9.11**.: _Complexity of applying the arrow map to an \(N\)-fold tensoring of generators \((\iota_{0},\iota_{1},g)\) is \(O(N)\) (sequential) and \(O(\log N)\) (PRAM) in the number of operations of \(g\)._
Proof.: As in Corollary 7.8, the number of wires and edges in the diagram is proportional to \(N\). Applying \(F_{1}\) consists of computing the \(G(x_{n})\)-indexed co- and tensor products for each component morphism of \(G\), and so complexity is the same as these operations, which are \(O(N)\) (sequential) and \(O(\log N)\) (PRAM) by Proposition 9.8.
**Corollary 9.12**.: _Let \(F:\mathsf{Diagram}_{\Sigma}\to\mathsf{Diagram}_{\Omega}\) be a strict symmetric monoidal hypergraph functor, and \(d=(s,t,G)\) be a diagram in \(\mathsf{Diagram}_{\Sigma}\) of type \(d:A\to B\). Given object and arrow map encodings, the sequential time complexity of computing \(F(d)\) is linear in the number of wires \(G(W)\), edges \(G(E_{i})\) and \(G(E_{o})\), operations \(G(X)\), and boundaries \(A(W)\) and \(B(W)\) of \(d\). The parallel time complexity is logarithmic in the same._
## 10Optic Composition using Frobenius Structure
We now show how the Hypergraph structure of \(\mathsf{Diagram}_{\Sigma+\mathbf{Frob}}\) can be used to encode _optics_. Our approach is inspired by the string diagrams of [3] and [15]. Along with the parallel algorithm for functor application described in Section 9, this allows us to define an algorithm for mapping diagrams into diagrams of optics. In general, this allows for modelling systems with bidirectional information flow. Examples of such systems are neural networks [11] viewed as morphisms in categories with _reverse derivatives_ [9]. We explore this example specifically, and define a scalable method for taking reverse derivatives of large diagrams. Moreover, this addresses an efficiency issue with the naive approach to taking reverse derivatives as pointed out in [15].
As in Section 9, we assume arbitrary signatures \(\Sigma\) and \(\Omega\), which are now assumed to include the additional chosen Special Frobenius structure. We begin by informally recalling optics.
**Definition 10.1** (Informal, [15, 22]).: _An **optic** of type \(\binom{\overrightarrow{A}}{\overleftarrow{A}}\to\binom{\overrightarrow{B}}{\overleftarrow{B}}\) is a triple_
\[M\in\mathscr{C}\qquad\qquad\overrightarrow{f}:\overrightarrow{A}\to \overrightarrow{B}\otimes M\qquad\qquad\overleftarrow{f}:M\otimes \overleftarrow{B}\to\overleftarrow{A}\]
_where we call \(\overrightarrow{f}\) and \(\overleftarrow{f}\) the **forward** and **reverse** map, respectively._
The whole optic is considered as a system modelling forward and backward information flow. The \(\overrightarrow{f}\) morphism captures the 'forward' information flow of the model, mapping data \(\overrightarrow{A}\) to \(\overrightarrow{B}\). The object \(M\) is some'memory' of the input \(\overrightarrow{A}\), which is passed to the'reverse' map \(\overleftarrow{f}\), which maps 'output-like' data and _memory_ back into 'input-like' data. In the specific case of reverse derivatives (see Section 10.1), 'input-like' and 'output-like' will mean _changes_ in input and output, respectively.
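As a down-to-earth illustration of this bidirectional reading (and not the diagrammatic encoding developed below), optics over plain Python functions compose by nesting residuals; the names and the worked example here are purely illustrative, with the reverse maps instantiated by derivatives in anticipation of Section 10.1.

```python
def compose_optic(f, g):
    f_fwd, f_rev = f
    g_fwd, g_rev = g
    def fwd(a):
        b, m = f_fwd(a)          # run f forward, remember residual m
        c, n = g_fwd(b)          # run g forward, remember residual n
        return c, (m, n)         # residual of the composite is the pair
    def rev(mn, dc):
        m, n = mn
        db = g_rev(n, dc)        # pull the output change back through g
        return f_rev(m, db)      # ...then back through f
    return fwd, rev

# square then double, with reverse maps given by their derivatives
square = (lambda a: (a * a, a),  lambda m, db: 2 * m * db)
double = (lambda b: (2 * b, ()), lambda _n, dc: 2 * dc)
fwd, rev = compose_optic(square, double)
print(fwd(3.0))             # (18.0, (3.0, ()))
print(rev((3.0, ()), 1.0))  # 12.0 = d(2a^2)/da at a = 3
```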
Note that, strictly speaking, morphisms in categories of optics are _equivalence classes_ of triples \((M,f,f^{\prime})\). This will not concern us. Instead, the focus of this section is on how representatives of such classes-specific triples \((M,f,f^{\prime})\)-can be encoded as diagrams. The main contribution of this section is to recognise that optic composition can be'simulated' using the hypergraph structure present in \(\mathsf{Diagram}_{\Sigma+\mathbf{Frob}}\). In order to state this formally, it is first necessary to define the _interleave_ morphism.
**Definition 10.2**.: _Assume for each generator \(X\in\Sigma_{0}\) a pair of objects \(\overrightarrow{X}\in\Omega_{0}^{*}\) and \(\overleftarrow{X}\in\Omega_{0}^{*}\). Then let \(A=A_{0}\otimes A_{1}\otimes\ldots\otimes A_{N-1}\) be a tensoring of generating objects in \(\Sigma_{0}\). The **interleaving** at \(A\) is the permutation with the following type._
\[\mathsf{interleave}_{A}:\bigotimes_{i\in N}\overrightarrow{A_{i}}\otimes \bigotimes_{i\in N}\overleftarrow{A_{i}}\longrightarrow\bigotimes_{i\in N} (\overrightarrow{A_{i}}\otimes\overleftarrow{A_{i}})\]
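On wire indices, the interleaving is a concrete permutation determined by the block sizes \(|\overrightarrow{A_{i}}|\) and \(|\overleftarrow{A_{i}}|\). A small numpy sketch follows, returning the permutation as a gather array (output position \(k\) reads input position \(\mathtt{out}[k]\); this convention is an assumption, not fixed by the text):

```python
import numpy as np

def interleave(fwd_sizes, rev_sizes):
    N = len(fwd_sizes)
    fo = np.concatenate(([0], np.cumsum(fwd_sizes)))[:-1]
    ro = np.concatenate(([0], np.cumsum(rev_sizes)))[:-1] + sum(fwd_sizes)
    out = []
    for i in range(N):  # emit block i's forward wires, then its reverse wires
        out.append(np.arange(fo[i], fo[i] + fwd_sizes[i]))
        out.append(np.arange(ro[i], ro[i] + rev_sizes[i]))
    return np.concatenate(out)

print(interleave([1, 2], [1, 1]))  # [0 3 1 2 4]
```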
Given a chosen forward and reverse map for each generating operation \(f:A\to B\), this interleaving makes it possible to define a strict monoidal functor taking each \(f\) to its chosen optic.
**Definition 10.3**.: _Let \(\Sigma\) and \(\Omega\) be monoidal signatures, and assume the following data is given._
* _For each generating object_ \(A\in\Sigma_{0}\)_, a pair of objects (lists)_ \(\overrightarrow{A}\in\Omega_{0}^{*}\) _and_ \(\overleftarrow{A}\in\Omega_{0}^{*}\)__
* _For each generating operation_ \(f:\bigotimes_{i\in N_{0}}A_{i}\to\bigotimes_{i\in N_{1}}B_{i}\) _in_ \(\Sigma_{1}\)_, a pair of morphisms_
* \(\overrightarrow{f}:\bigotimes_{i\in N_{0}}\overrightarrow{A_{i}}\longrightarrow \bigotimes_{i\in N_{1}}\overrightarrow{B_{i}}\bigotimes M\)_,_
* \(\overleftarrow{f}:M\otimes\Big{(}\bigotimes_{i\in N_{1}}\overleftarrow{B_{i}} \Big{)}\longrightarrow\bigotimes_{i\in N_{0}}\overleftarrow{A_{i}}\)__
_Denote by \(\mathcal{O}:\mathsf{Diagram}_{\Sigma}\to\mathsf{Diagram}_{\Omega}\) the strict symmetric monoidal hypergraph functor defined inductively whose action on generating objects is \(\mathcal{O}(A)=\overrightarrow{A}\otimes\overleftarrow{A}\) and on arrows is given below. [defining string diagram omitted]_
Note that diagrams in the image of \(\mathcal{O}\) compose in the same way as optics.
Intuitively, the 'flow of information' in wires labeled \(\overleftarrow{A}\) and \(\overleftarrow{B}\) is right-to-left: the output of \(\overleftarrow{g}\) is connected to the input of \(\overleftarrow{f}\), which itself connects to the left boundary.
Compare this to the graphical syntax of [3], wherein the author gives a diagrammatic language for Tambara modules using oriented wires. _Optics_ are then diagrams in this language having the following type. [diagram omitted]

These diagrams then compose analogously as below. [diagram omitted]
Diagrams in the image of \(\mathcal{O}\) are not in general of this type, but it is straightforward to adapt them by pre- and post-composing with interleave.
This is useful because for a given \(d:A\to B\) the diagram \(\mathcal{O}(d)\) is _not_ monogamous acyclic (Definitions 6.1 and 6.2), since the _outputs_ of each \(\overleftarrow{f}\) connect to the _left_ boundary. However, if for all \(f\in\Sigma_{1}\), every \(\overrightarrow{f}\) and \(\overleftarrow{f}\) is monogamous acyclic, then the resulting diagram can be converted to a monogamous acyclic one using the interleave morphism as below.
[string diagram omitted] (17)
This will be important to our case study because it means that we can extract a morphism which can be interpreted in a _symmetric monoidal_ category. This means it can be regarded as a _function_ which simultaneously computes the forward and reverse maps of reverse derivatives regarded as optics.
### Case Study: Efficient Reverse Derivatives via Optic Composition
To conclude the paper, we now discuss how the constructions given so far can be applied to the setting of machine learning. In particular, we will show how the encoding of optics using hypergraph structure allows for computing reverse derivatives of large diagrams in an efficient way.
We begin by informally recalling reverse derivatives, a key component of the formulation of gradient-based learning given in [11]. A (cartesian) reverse derivative category (RDC) equips each morphism \(f:A\to B\) with a _reverse derivative_ \(\mathsf{R}[f]:A\times B^{\prime}\to A^{\prime}\) satisfying various axioms. Intuitively, this maps an input \(A\) and a _change in output_ \(B^{\prime}\) to a change in _input_ \(A^{\prime}\).

Reverse derivatives can also be thought of as _lenses_: pairs of morphisms \((f,\mathsf{R}[f])\) which compose as in the diagram below. [diagram omitted]
This definition leads to two kinds of inefficiency. First, we are required to represent two distinct diagrams for each morphism. This means that it is not possible to apply a functor of the kind defined in Section 9. Thus, in order to compute the reverse derivative of a map \(f\) as a lens, it seems necessary to first extract a \(\Sigma\)-term, and then incrementally build a pair of diagrams. As we have seen in Example 9.1, extracting such a \(\Sigma\)-term can cause a quadratic penalty. The second problem, pointed out in [15], is that the definition of lens composition leads to a space-for-time tradeoff which leads to multiple redundant 'copies' of the forward morphism of the lens.
To avoid this redundant computation, one can use the observation of [15, Figure 10] which shows how lenses can be composed as _optics_. This relies on lenses being a special case of optic whose base category is cartesian. In terms of optics, the forward maps of lenses are of the form \(\Delta\,\sharp\,(f\otimes\mathsf{id})\) (copy the input, apply \(f\) to one copy), where \(\Delta\) is the diagonal map. We may therefore map a morphism \(f\) in an RDC to the following optic. [diagram omitted]
Having done so, we can then use (17) to extract a morphism of the RDC which runs the forward and reverse passes simultaneously. This gives two specific benefits: (1) we can efficiently compute a large symbolic description of a reverse derivative in parallel, and (2) decomposition to \(\Sigma\)-terms is completely avoided.
Note that what has been presented here is more general than just reverse derivatives. Since we have already given an algorithm for applying _hypergraph_ functors, we can apply it to those traced symmetric monoidal categories whose trace is given by the hypergraph structure. We conjecture that this will allow for modelling systems of bidirectional information flow _with feedback_. More precisely, the algorithm given here will allow us to map a morphism with feedback into an _optic with feedback_.
## 11 Discussion and Future Work
The datastructures and algorithms presented here improve on previous work [26] in a number of ways. We handle the case of diagrams equipped with Special Frobenius structure, extend to arbitrary sets of generating objects, and eliminate the dependency on an underlying sparse array implementation. In addition, 'natively' allowing for diagrams equipped with Special Frobenius structure enables the representation of diagrams of optics. This in turn allows us to give a parallel algorithm for taking reverse derivatives.
However, as in [26], a number of improvements remain. Aside from algorithms for matching and rewriting, future work should also include algorithms for _evaluating_ diagrams. While we expect
this can be added with little effort, it may be the case that alternative representations of internal diagram wiring affect the efficiency of evaluation algorithms. For instance, we currently store the operations of a diagram in a single array \(G(x_{n}):G(X)\rightarrow\Sigma_{1}\). However, it may be advantageous for performance reasons to store _multiple_ such arrays so that applying operations in parallel on GPU hardware is more performant.
It may also be possible to weaken the assumption of the PRAM CRCW model assumed in the paper. Since in most cases the CREW model is sufficient, this should require only a few modifications. For example, the algorithm to compute the universal morphism of coequalizers requires the CRCW assumption, but it may be possible to obtain a PRAM CREW algorithm by replacing concurrent writes with an integer sort. In addition, we may also be able to improve the work-efficiency of some of the parallel algorithms presented here, in particular the ancestor maps (Corollary 8.16).
Finally, when defining algorithms to apply a functor \(F\) to a diagram in Section 9, we required that the typing relation \(\tau\) be a function. However, one of the advantages of our representation is in allowing for _polymorphic_ generators. Thus, a natural extension is to allow functors between categories with polymorphic generators. To do so would require using _type_ information when mapping a given operation, since there is no longer a unique diagram \(F(g)\) which can be determined simply from an operation label \(g\in\Sigma_{1}\).
|
2310.15282 | Final velocity and radiated energy in numerical simulations of binary
black holes | The evolution of global binary black holes variables such as energy or linear
momentum are mainly obtained by applying numerical methods near coalescence,
post-Newtonian (PN) expansions, or a combination of both. In this paper, we use
a fully relativistic formalism presented several years ago that only uses
global variables defined at null infinity together with the gravitational
radiation emitted by the source to obtain the time evolution of such variables
for binary black holes (BBH) systems. For that, we use the Rochester catalog
composed of 776 BBHs simulations. We compute the final velocity, radiated
energy, and intrinsic angular momentum predicted by the dynamical equations in
this formalism for nonspinning, aligned and antialigned spins, and several
different precessing configurations. We compare obtained values with reported
values in numerical simulations. As BBHs parameter space is still not
completely covered by numerical simulations, we fit phenomenological formulas
for practical applications to the radiated energy and final velocities
obtained. Also, we compare the fits with reported values. In conclusion, we see
that our formulae and correlations for the variables described in this work are
consistent with those found in the general literature. | Emmanuel A. Tassone, Carlos N. Kozameh | 2023-10-23T18:38:59Z | http://arxiv.org/abs/2310.15282v1 | # Final velocity and radiated energy in numerical simulations of binary black holes
###### Abstract
The evolution of global binary black hole variables such as energy or linear momentum is mainly obtained by applying numerical methods near coalescence, post-Newtonian expansions, or a combination of both. In this paper, we use a fully relativistic formalism presented several years ago that only uses global variables defined at null infinity together with the gravitational radiation emitted by the source to obtain the time evolution of such variables for binary black hole (BBH) systems. For that, we use the Rochester catalog composed of 776 BBH simulations. We compute the final velocity, radiated energy, and intrinsic angular momentum predicted by the dynamical equations in this formalism for non-spinning, aligned/anti-aligned spins, and several different precessing configurations. We compare the obtained values with values reported in numerical simulations. As the BBH parameter space is still not completely covered by numerical simulations, we fit phenomenological formulas for practical applications to the radiated energy and final velocities obtained. Also, we compare the fits with reported values. In conclusion, we see that our formulae and correlations for the variables described in this work are consistent with those found in the general literature.
## I Introduction
The RIT Catalog of Numerical Simulations [1] is a catalog containing the gravitational wave data for 776 numerical relativity simulations. The catalog spans a large variety of initial conditions and provides the gravitational strain for the evolution of the mergers during all the phases.
It is believed that the spin orientation in the initial state of the binary black hole (BBH) can constrain the evolutionary processes that lead the binary to the merger [2]. Spin orientations can influence the kick velocities [3], making them high enough for a merged binary to be ejected from the nucleus of a massive host galaxy [4; 5]. The probability of these large recoils depends on the distribution of mass ratios and spins of the progenitor binaries [6]. Spin orientations also influence the energy radiated, and particularly, the maximum luminosity of the merger [2]. Additionally, there is also research on the _hangup effect_, which is a change in the merger duration compared to non-spinning binaries due to the presence of spins, resulting in a faster or slower merger of the binaries [7; 8].
Other important parameters when setting up a black hole simulation are the masses of the black holes which are represented typically by the quotient \(q=m_{1}/m_{2}\), with \(m_{1}\) being the smaller black hole progenitor. The mass quotient, along with the spin orientation, may determine the remnant mass, spin and recoil velocity [9].
Although the spectrum of parameters across all 776 simulations in the Rochester catalog is wide, it is still computationally prohibitive to cover the whole BBH parameter space exhaustively, making fitting formulas useful for practical purposes. Many fitting formulas have been deduced that relate the final spin of the remnant black hole to the initial spins and mass parameters [10; 11; 12; 13]. However, most of the relations established from the data analysis are constructed based on models from the parametrized post-Newtonian (PPN) formalism. The PPN formalism allows finding approximate solutions to the Einstein field equations for the metric tensor, and therefore obtaining physical quantities such as energy, velocity and angular momentum when the binaries are far away from each other [14].
In this paper, we perform the numerical evolution of four global physical variables defined at null infinity, namely, the Bondi energy and linear momentum, the center of mass velocity and intrinsic angular momentum. The last two variables are defined using a formalism developed by Kozameh and Quiroga (KQ)[15].
The evolution of the aforementioned variables is made using the Rochester gravitational wave strain data catalog. We analyze the dynamics of the final state of the resulting black hole and relate it to the most significant variables before the coalescence. Hence, the initial spin orientations, orbital angular momentum, initial masses, maximum energy lost, final total velocity, final total angular momentum and its tilting angle are studied in this work. Many interesting correlations between these variables are shown in the work.
This paper is organized as follows. In Sec. II, we describe the equations of motion used for this work. In Sec. III, we mention some aspects of the code employed to compute the evolution and the subsequent data analysis. In Sec. IV, we analyze the results obtained for the final states of the evolution. Then we make a deeper analysis, classifying the results into three categories according to the kinematics of the BBH spins: non-spinning (NS), aligned (A) and precessing (P). There are 29 simulations in the first category (NS), 447 with aligned spins (A), and 300 simulations are precessing-spin BBHs (P). Finally, Sec. V closes the work with a discussion and conclusion about the most relevant aspects found, and also a discussion of the formalism employed and potential future work.
## II Background
The KQ formalism [15] provides a fully relativistic treatment that defines the center of mass frame and intrinsic angular momentum using the asymptotic symmetries represented by the Winicour linkages [16]. It is worth mentioning that a different formulation based on null congruences with twist by Adamo, Newman and Kozameh (ANK) [17] yields similar evolution equations at the quadrupole level. In this sense, the KQ formulation has been shown to be consistent with PN results up to octupole terms in the gravitational radiation [18].
The physical quantities for asymptotically flat spacetimes are given by the Weyl scalars \(\psi_{0}\), \(\psi_{1}\), \(\psi_{2}\), \(\psi_{3}\), and \(\psi_{4}\) at null infinity. Those scalars are defined on special reference frames called Newman-Unti coordinates [19]. Each Newman-Unti foliation is then associated with an arbitrary timelike worldline on a fiducial flat space, called the Observation space, where the equations of motion are given and numerically solved. These special frames allow us to define physical variables on "non-inertial" frames. Using the Winicour linkages together with a gauge fixing procedure, the KQ formalism defines the center of mass and intrinsic angular momentum. Using the Bianchi identities at null infinity, one obtains the evolution equations for those variables on the Observation space. The reader may refer to [15; 18; 20] for a thorough construction of the physical quantities using the KQ formalism. The relevant evolution equations for the purpose of this work read
\[\dot{M} = -\frac{c}{10\sqrt{2}G}\left(\dot{\sigma}^{ij}_{R}\dot{\sigma}^{ij}_{R}+\dot{\sigma}^{ij}_{I}\dot{\sigma}^{ij}_{I}\right), \tag{1}\]
\[\dot{P}^{i} = \frac{2c^{2}}{15\sqrt{2}G}\,\dot{\sigma}^{jl}_{R}\dot{\sigma}^{kl}_{I}\,\epsilon^{ijk}, \tag{2}\]
\[\dot{D}^{i} = \sqrt{2}P^{i}+\frac{3c^{2}}{7G}\left[\dot{\sigma}^{ijk}_{R}\sigma^{jk}_{R}+\dot{\sigma}^{ijk}_{I}\sigma^{jk}_{I}-\sigma^{ijk}_{R}\dot{\sigma}^{jk}_{R}-\sigma^{ijk}_{I}\dot{\sigma}^{jk}_{I}\right], \tag{3}\]
\[\dot{J}^{i} = -\frac{c^{3}}{5G}\left(\sigma^{kl}_{R}\dot{\sigma}^{jl}_{R}+\sigma^{kl}_{I}\dot{\sigma}^{jl}_{I}\right)\epsilon^{ijk}, \tag{4}\]
where \(M\) and \(P^{i}\) are the total Bondi mass and linear momentum respectively, \(D^{i}\) is the dipole mass momentum, and \(J^{i}\) the total angular momentum. The first two equations yield the evolution of the Bondi four-momentum, whereas the last two yield the corresponding evolution of the dipole mass momentum and total angular momentum. These last two variables are the components of the so-called relativistic angular momentum 2-form and behave like the electric and magnetic fields under a Lorentz transformation. The symbols \(\sigma_{R}\) and \(\sigma_{I}\) denote the real and imaginary parts of the Bondi shear at null infinity. Commonly, the Bondi shear is denoted as \(\sigma^{0}\) and accounts for the deformation of the spacetime due to the gravitational radiation that arrives at null infinity. Usually \(\sigma^{0}\) is expanded in terms of spin-weighted spherical harmonics \(Y^{s}_{lm}\), which are very useful for analyzing gravitational waves [21]. The KQ formalism, however, expands the asymptotic shear \(\sigma^{0}\) in terms of tensor spherical harmonics \(Y^{s}_{i_{1}i_{2}\ldots}\) [22]. This basis is fundamental to define covariant quantities as in eqs. (1) to (4). The dot on \(\sigma^{ij}\) stands for the derivative with respect to the Bondi time coordinate at future null infinity. It is worth mentioning that in the above equations we have omitted quadratic octupole terms and all remaining coefficients for \(l\geq 4\), since they are negligible in BBH coalescence.
Finally, we write down the equations for the center of mass \(R^{i}\) and intrinsic angular momentum \(S^{i}\) as given in the KQ formalism. They are related to the previous variables by
\[D^{i} = MR^{i}+c^{-2}\epsilon^{ijk}\frac{P^{j}}{M}S^{k}-\frac{8}{5\sqrt{2}c}P^{j}\Delta\sigma^{ij}_{R}, \tag{5}\]
\[J^{i} = S^{i}+\epsilon^{ijk}R^{j}P^{k}-\frac{137c^{2}}{168\sqrt{2}G}\left(\sigma^{ijk}_{R}\sigma^{jk}_{I}-\sigma^{ijk}_{I}\sigma^{jk}_{R}\right). \tag{6}\]
The center of mass \(R^{i}\) is the spatial component of a covariant vector of the formalism and represents the physical analogue of the Newtonian center of mass, while also accounting for the radiated energy. Taking the derivative with respect to the Bondi time \(u\) defines the center-of-mass velocity \(V^{i}\). One of the purposes of this work is to show that the center-of-mass velocities given by the KQ formalism agree very well with the recoil velocities found in the RIT catalog.
The equation for \(J^{i}\) resembles its Newtonian counterpart except for the extra radiative terms. In this work the last term is negligible for the BBH catalog, but it could be important in other situations where the octupole term is not negligible. A fully relativistic version of these equations can be found in reference [20]. Note that the orbital angular momentum can be important when the center of mass is displaced from the origin and achieves high velocities due to the gravitational kick.
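As an illustration, a minimal Python sketch of how the mass-loss equation (1) can be integrated from quadrupole shear data is given below. The array layout and function names are assumptions made for illustration (they are not the interface of the code of Ref. [23]), and geometric units \(c=G=1\) are used by default.

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

def bondi_mass_loss(sigma_R, sigma_I, u, c=1.0, G=1.0):
    """Integrate Eq. (1) for quadrupole shear data.

    sigma_R, sigma_I: arrays of shape (n_times, 3, 3) with the real and
    imaginary quadrupole components sigma^{ij} of the Bondi shear in the
    tensor-harmonic basis (assumed layout). u: Bondi times, shape (n_times,).
    Returns the cumulative mass change Delta M(u), which is non-positive.
    """
    # Bondi-time derivatives of the shear components (the "news").
    dsig_R = np.gradient(sigma_R, u, axis=0)
    dsig_I = np.gradient(sigma_I, u, axis=0)
    # Eq. (1): Mdot = -c/(10*sqrt(2)*G) * (dsig_R^{ij} dsig_R^{ij}
    #                                      + dsig_I^{ij} dsig_I^{ij}).
    mdot = -c / (10.0 * np.sqrt(2.0) * G) * (
        np.einsum('tij,tij->t', dsig_R, dsig_R)
        + np.einsum('tij,tij->t', dsig_I, dsig_I))
    return cumulative_trapezoid(mdot, u, initial=0.0)
```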
## III Numerical Framework
The numerical code employed to process data and run the evolution equations is described in this section. The code can be accessed at [23].
The Rochester catalog has been used before to obtain the dynamical evolution of several physical observables using the KQ formalism [24]. However, the latter work has a numerical error associated with the strain in the last steps of the evolution. This error is due to a well-known effect of performing a raw numerical integration [25]. Although the numerical error was small enough to keep the qualitative analysis of the physics valid, in this work we corrected the code so as to avoid the linear drifts coming from the integration of the shear.
The amplitude and phase for the simulations are obtained in compressed format from the Rochester catalog [26]. An interpolation is made with one-dimensional smoothing splines of order \(k=5\). Subsequently, the polarizations \(h_{+}^{lm},h_{\times}^{lm}\) of the gravitational wave strain can be obtained from the relation,
\[rh_{+}^{lm}-irh_{\times}^{lm}=A_{lm}e^{i\phi_{lm}},\]
where \(r\) is a radial coordinate, \(l\) and \(m\) are the modes of the gravitational wave, and \(A_{lm}\) and \(\phi_{lm}\) are the amplitude and phase of the gravitational wave, respectively. Asymptotically, the strain and shear are directly related by the \(\Psi_{4}\) scalar, implying
\[h_{+}=-\sigma_{R},\] \[h_{\times}=-\sigma_{I}.\]
The code implements the calculation of the shear and then it transforms from the spinor to the tensor harmonics basis using the transformation relations in [27]. After that, the calculation for all the physical variables is implemented. In particular, the evolution of eqs. (1), (2) and (4) can be carried out for all times. Finally, the code implements functions to collect and analyze the results obtained. Data analysis is shown in Sec. IV.
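A minimal sketch of this processing chain, assuming hypothetical variable names and an extraction radius set to unity, could look as follows; the actual implementation in [23] may differ in its details.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

def shear_modes(t_grid, t_data, amp_lm, phi_lm, k=5, s=0.0):
    """Rebuild r h_+^{lm} - i r h_x^{lm} = A_lm exp(i phi_lm) on t_grid.

    Amplitude and phase are interpolated separately with order-k smoothing
    splines; s=0 gives an interpolating spline.
    """
    amp = UnivariateSpline(t_data, amp_lm, k=k, s=s)(t_grid)
    phi = UnivariateSpline(t_data, phi_lm, k=k, s=s)(t_grid)
    rh = amp * np.exp(1j * phi)       # = r h_+ - i r h_x (with r = 1 here)
    h_plus, h_cross = rh.real, -rh.imag
    # Asymptotic relations: sigma_R = -h_+, sigma_I = -h_x.
    return -h_plus, -h_cross
```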
## IV Main results
In this section we first describe the general results obtained for the global variables of an asymptotically flat spacetime containing a BH binary system. We provide the distribution of the data and give an overview of the physics obtained for all the simulations. Then, we dedicate a subsection to each of the categories into which the Rochester repository is divided: non-spinning binaries, aligned-spin binaries and precessing binaries.
We pay special attention to the variables obtained from eqs. (1) to (6) and to their correlations with initial parameters of the simulations, such as the initial total angular momentum \(J_{in}\) and the mass ratio \(q\). It is important to distinguish these two sets of parameters. The first set, the global variables, are well defined in our formalism and can be obtained at null infinity without knowledge of the BH masses, spins and orbital angular momentum. For instance, to obtain \(J_{in}\) we only need the knowledge of the center of mass position and the definition of intrinsic angular momentum.
The second set of parameters corresponds to all the initial conditions contained in the Rochester repository, locally defined, containing the initial spins, masses, and orbital angular momentum of the BH binaries. Knowledge of this second set is very important to analyse the behaviour of BH coalescence, and gives further understanding of this kind of isolated system.
Throughout the following subsections we will distinguish between six classes of binaries: equal mass non-spinning (EM-NS), equal mass aligned (EM-A), equal mass precessing (EM-P), non-equal mass non-spinning (NEM-NS), non-equal mass aligned (NEM-A), and non-equal mass precessing (NEM-P).
### General results
The results for the final center-of-mass velocity \(V_{f}\) and initial total angular momentum \(J_{in}\) are presented in Fig. 1. The final velocity \(V_{f}\) is defined as,
\[V_{f}^{i}=\dot{R}_{f}^{i}, \tag{7}\]
where \(\dot{R}_{f}^{i}=\dot{R}^{i}(t_{f})\) is the derivative with respect to Bondi time evaluated at the final time of the simulation and the index \(i\) denotes the component of the vector.
The plot in Fig. 1 shows the final velocities for the 776 simulations in the catalog. The vertical axis \(V_{f}\) is the norm of the vector velocity defined in Eq. (7). One can see that precessing binaries exhibit higher final velocities. This observation could be relevant both at the level of predicting the final velocities of the residual black hole and at the level of observational data. If the final velocity of the residual black hole can be observed (via the Doppler effect of a surrounding coalescing gas), then receding speeds higher than \(\sim 750~{}km/s\) imply that the original binaries had precessing spins. One can even speculate that if the final speeds are higher than \(\sim 3000~{}km/s\), then the initial binaries were in the class of EM-P. It is also possible to recognize in Fig. 1 that, within each class, there is a value of the initial total angular momentum \(J_{in}\) with the highest final velocity \(V_{f}\). The distribution of the simulations seems to exhibit an asymmetric peaked-like distribution, whose peak value varies among the subclasses in the classification. On account of its symmetry, the EM-NS class has vanishing final velocities (see formula (10) below). The reader should be aware that these assertions are rather approximate due to the bias in the RIT catalog parameter space. However, the parameter space is vast enough to visualize the claims made in this paragraph, and in this sense phenomenological formulae are fitted in the following sections with the idea of filling the blanks in the initial parameter space. A special exception must be made with precessing binaries, for which the final state is known to be chaotic with respect to the initial parameters.

Figure 1: Correlation between the final center of mass velocity, \(V_{f}\) (in \(km/s\)), and the initial total angular momentum, \(J_{in}\).
We have solved eqs. (1) to (6) and plotted the data distribution for the variables \(E_{rad}\), \(V_{f}\) and \(\Delta S\) in the form of histograms (Fig. 2). Again, the distributions in Fig. 2 are biased by the selection of simulations in the RIT catalog. All the distributions are normalized as probability density functions and a Gaussian function is fitted to obtain representative values. For the final velocity in Fig. 2b, note that most of the simulations end with kicks of less than \(\sim 1000\ km/s\). It is our purpose to investigate in the following sections which factors cause this large variation of final velocities. It is important to remark that the final velocities are expressed in the Bondi time used in our formalism. For comparison purposes, we should divide our final velocities by a \(\sqrt{2}\) factor. This factor accounts for the transformation from the Bondi time \(u\) to the standard time \(t\), since we define \(u=\frac{t-r}{\sqrt{2}}\) and not \(u=t-r\) as found in the common literature. Consequently, the kick velocities in terms of the coordinate time \(t\) are lower than those shown in Fig. 2b.
It is more convenient to plot the data distribution for a normalized radiated energy instead of the raw energy change, as in Fig. 2c. We define the total energy radiated as
\[E_{rad}=1-\frac{M_{f}}{M_{i}}, \tag{8}\]
where \(M_{i}\) is the initial ADM mass of the BBH and \(M_{f}\) the final Bondi mass of the remnant obtained from Eq. (1). Note the difference between the final \(M_{f}\) in this work and the commonly used Christodoulou mass in the numerical literature. The distribution of radiated energy for our set of solutions is found within the range \(\sim 0-13\%\). The histogram is normalized, and the mean of the radiated energy is \(\mu=0.047\) with dispersion \(\sigma=0.021\).
Definition (8) is directly proportional to the peak energy loss \(\dot{M}_{max}\) for binary coalescence: the larger the peak energy loss, the larger the total energy radiated. In Fig. 3, we fitted a linear function of the form \(E_{rad}=a_{1}\dot{M}_{max}\) to the relation and found a proportionality constant
\[a_{1}=41.348\pm 0.194 \tag{9}\]
with an error of \(0.47\%\). Moreover, the proportionality relation could be made more precise if fitted for each mass ratio, as seen in Fig. 3. With this result in mind, figures in this work showing the \(E_{rad}\) variable will be nearly the same as figures showing the \(\dot{M}_{max}\) variable, up to a change of scale. It must be emphasized that this proportionality relation is a feature of BBH coalescence and will not be valid in general. We analyze these variables for each subclass more specifically in the following sections.
To end this section, we also give an overview of the distribution of radiated energy with respect to the initial total angular momentum \(J_{in}\) in Fig. 4. The plot of \(E_{rad}\) vs \(J_{in}\) clearly shows a non-linear relationship. This feature will be analysed for each group in the following subsections and also in the discussion. A model to understand the behaviour of the radiated energy with respect to the initial total angular momentum is developed in Appendix A. It is worth mentioning that this model has a post-Newtonian derivation. Thus, we will contrast the fully relativistic numerical evolution of variables that have an asymptotic nature with a PN model describing their dynamics. It is rather remarkable that, with a few phenomenological parameters adjusted to fit the numerical data, the PN formula matches the numerical data.
### Non-spinning binaries
In this subsection we describe the group of simulations which have no initial spins. This group of binaries has the simplest physics to describe since the space of initial parameters has the lowest dimension.
As a first step, we calculate the correlations of the final velocity and the maximum gravitational luminosity of the binary system with the mass ratio \(q\). The gravitational radiation is calculated by Eq. (1) and is directly related to the luminosity by integrating it over a spherical surface far away from the source. As the final luminosity is a quantity that can usually be measured, it is relevant to know whether a direct relation exists between the radiation per unit time and the mass ratio of the BBH. Likewise, assuming the final velocity of the remnant black hole can be observed, the correlation with \(q\) also yields information about this particular coalescence.
Correlations between \(V_{f}\), \(q\) and \(\dot{M}_{max}\) are outlined in Fig. 5. We use Fitchett's recoil model for circular orbits proposed in [28], whose dependence on \(q\) is
\[V_{f}(q)=a\ \frac{q^{2}(1-q)}{(1+q)^{5}}. \tag{10}\]
with \(a\) a fitting constant. We compute the parameter \(a\) by a least-squares method. The best fit to our data has the coefficient \(a=16317.78\pm 207.93\ km/s\). The derivation of Eq. (10) is also given in Appendix A.
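A minimal sketch of this kind of least-squares fit, using `scipy.optimize.curve_fit` on placeholder arrays (the real input would be the evolved catalog values), is:

```python
import numpy as np
from scipy.optimize import curve_fit

def fitchett(q, a):
    # Eq. (10): recoil of non-spinning circular binaries.
    return a * q**2 * (1.0 - q) / (1.0 + q)**5

# Placeholder data; replace with the (q, V_f) pairs of the NS simulations.
q_ns = np.array([0.2, 0.4, 0.6, 0.8, 1.0])
vf_ns = np.array([180.0, 310.0, 270.0, 140.0, 0.0])  # km/s

popt, pcov = curve_fit(fitchett, q_ns, vf_ns, p0=[1.6e4])
print(f"a = {popt[0]:.2f} +/- {np.sqrt(pcov[0, 0]):.2f} km/s")
```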
We found the maximum final velocity \(V_{f}=309.43\ km/s\), reached at the value \(q=0.4\). Null velocities are reached in the limit cases \(q=0\) and \(q=1\), i.e., mass \(m_{1}=0\) or \(m_{1}=m_{2}\). This is consistent with the fact that when \(q=0\) the center of mass lies at the center of the mass \(m_{2}\); hence, it does not move while the mass \(m_{1}\) goes to zero. Similarly, when \(q=1\) the center of mass does not move either, due to the symmetry of the problem, and coincides with the midpoint between the binaries.
Even though the limit cases of \(q\) yield vanishing velocities, Fig. 5 shows that while \(q=0\) has no energy loss, \(q=1\) allows the most radiating systems. Thus, \(q=1\) non-spinning binary systems are most likely to be detectable by their higher luminosity, whereas intermediate values of \(q\) are most likely to be detectable by their kick velocities.
Another interesting relation we have explored is the radiated energy \(E_{rad}\) vs the initial total angular momentum \(J_{in}\). For the NS subgroup, this plot yields Fig. 6.
Fig. 6 is fitted with a quadratic curve of the form

\[E_{rad}=b\,J_{in}^{2}, \tag{11}\]

with \(b=0.050\pm 0.001\). The model fitted in Eq. (11) is motivated by the derivation made in Appendix A.

Figure 2: Bar plots of the distribution of the main variables of interest obtained after evolution in all the simulations.

Figure 3: Correlation between total radiated energy \(E_{rad}\) and the peak radiated energy \(\dot{M}_{max}\) for all simulations. Dashed line shows the fit to the data. Colorbar indicates the value of the mass ratio \(q=m_{1}/m_{2}\).

Figure 4: Correlation between the radiated energy \(E_{rad}\) and the initial angular momentum \(J_{in}\) for all the simulations.

Figure 5: Correlation between the final velocity \(V_{f}\) and mass ratio \(q\). Colorbar indicates the maximum energy lost \(\dot{M}_{max}\). Dashed line shows the Fitchett model fit.

Figure 6: Correlation between the total radiation emitted \(E_{rad}\) and the initial total angular momentum \(J_{in}\). Colorbar indicates different levels of the initial mass ratio \(q\). The letter \(k\) indicates the polynomial degree of the fit.
Likewise, we can also give the correlation between \(\dot{M}_{max}\) and \(q\). The plot is given in Fig. 7 and the phenomenological formula containing two parameters is derived in Appendix A. The relation is given as,
\[\dot{M}=-\frac{A}{1000}\frac{q^{2}}{(1+q)^{4}}\left(1+\frac{B}{216}\left(\frac{1-q}{1+q}\right)^{2}\right), \tag{12}\]
with \(A\) and \(B\) to be obtained from the numerical data. Notice that the formula has a PN background, as can be checked in Appendix A. It follows from the data, or from the formula, that the luminosity has a minimum at \(q=0\) and a maximum at \(q=1\), i.e., equal BBH masses yield the maximum luminosity. The coefficients found with the least-squares method are
\[A = 22.44\pm 0.06\ (0.30\%), \tag{13}\]
\[B = -159.00\pm 4.74\ (2.98\%). \tag{14}\]
Note also that relationship (12) is injective. Thus, given the coefficients in eqs. (13) and (14), one can invert the formula (12) and determine which mass ratio \(q\) corresponds to a particular value of the measured luminosity, i.e., global dynamical information is used to obtain local information of the BBH.
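As an illustration, the inversion can be done with a bracketing root finder, since the relation is monotonic on \(0<q\leq 1\). The sketch below uses the coefficients of eqs. (13) and (14); it is an illustrative example, not part of the code of Ref. [23].

```python
import numpy as np
from scipy.optimize import brentq

A, B = 22.44, -159.00  # coefficients from Eqs. (13)-(14)

def mdot_max(q):
    # Eq. (12): peak mass-loss rate as a function of the mass ratio.
    return -(A / 1000.0) * q**2 / (1.0 + q)**4 * (
        1.0 + (B / 216.0) * ((1.0 - q) / (1.0 + q))**2)

def q_from_luminosity(mdot_obs):
    """Invert the injective relation (12) for q in (0, 1]."""
    return brentq(lambda q: mdot_max(q) - mdot_obs, 1e-6, 1.0)

print(q_from_luminosity(mdot_max(0.7)))  # recovers q = 0.7
```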
### Aligned-spins binaries
In this section, we analyse the aligned-spin class. The initial spins in this kind of configuration are likely to remain in the z-direction, as aligned-spin binaries are known to be stable configurations [29]. These binaries are more astrophysically realistic and hence relevant for astrophysical applications.
In Fig. 8 we outline the dependence of the final center-of-mass velocity \(V_{f}\) on the initial mass ratio \(q\). Panels (a) and (b) highlight different initial configurations of spin magnitudes and directions. In panel (c), a colorbar is used to show the highest luminosity released for each configuration. Over all 447 aligned simulations, the norm of \(S_{2}\) sweeps values in the interval [0, 0.7] and the norm of \(S_{1}\) in the interval [0, 0.25].
Fig. 8a shows the highest final velocity, \(V_{f}=771.78\ km/s\), obtained for the quotient \(q=0.6628\) in the whole class of aligned-spin binaries. In the same plot we find that top velocities can only be achieved by BBHs with anti-aligned spins (\(\angle S_{1}S_{2}=\pi\)). This property has already been reported several times [30; 31]. On the other hand, there seems to be no clear picture of the influence of the relative alignment of \(S_{1}\) and \(S_{2}\) on the lower final velocities.
Furthermore, it can be seen in Fig. 8b that when the initial orbital angular momentum \(L_{in}\) is anti-aligned with the initial spin \(S_{2}\) (\(\angle L_{in}S_{2}=\pi\)), higher final velocities are reached. In short, it seems that the spin with the larger magnitude, in this case \(S_{2}\), should be anti-aligned with both the initial orbital angular momentum \(L_{in}\) and the initial smaller spin, in this case \(S_{1}\), to achieve the maximum velocities for the aligned group.
The correlation between the final velocities \(V_{f}\) and the maximum rate of mass loss \(\dot{M}_{max}\) is shown in Fig. 9. A pattern similar to that discussed for NS binaries in Sec. IV.2 is also present. The maximum possible velocity versus \(q\) has a local (and absolute) maximum at \(q\sim 0.66\). Final velocities vanish at \(q=0\), reach a peak value and then decrease towards \(q=1\). Moreover, the energy loss also vanishes at \(q=0\), which indicates that both the NS and A classes are very similar in this part of the mass ratio range.
At the other end of the range, the equal-mass case \(q=1\) is quite interesting. Like the NS case, the maximum energy loss and vanishing final velocities also occur at \(q=1\). However, unlike the NS case, there are now non-vanishing final velocities in the EM-A configuration, a big difference from the NS case.
The relation between the energy loss and the initial total angular momentum \(J_{in}\) is then analyzed. In Fig. 10, the dependence of the total radiated energy \(E_{rad}\) on the initial total angular momentum \(J_{in}\) is shown for five mass-ratio ranges: \(0<q\leq 0.2\), \(0.2<q\leq 0.4\), \(0.4<q\leq 0.6\), \(0.6<q\leq 0.8\) and \(0.8<q\leq 1\). We fit a quadratic polynomial model for each range, which could be useful for estimating the energy in astrophysical scenarios or for future comparisons. We provide the coefficients of the fit for each mass range in Table 1.
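A sketch of how such per-bin quadratic fits can be produced with `numpy.polyfit` is shown below; the synthetic arrays stand in for the per-simulation values of \(q\), \(J_{in}\) and \(E_{rad}\).

```python
import numpy as np

BINS = [(0.0, 0.2), (0.2, 0.4), (0.4, 0.6), (0.6, 0.8), (0.8, 1.0)]

def fit_bins(q, J_in, E_rad):
    """Quadratic fit E_rad = a2*J^2 + a1*J + a0 within each mass-ratio bin."""
    coeffs = {}
    for lo, hi in BINS:
        m = (q > lo) & (q <= hi)
        if m.sum() < 3:  # need at least three points for a quadratic
            continue
        coeffs[(lo, hi)] = np.polyfit(J_in[m], E_rad[m], deg=2)
    return coeffs

# Synthetic demonstration: E_rad ~ 0.05 * J^2 plus scatter.
rng = np.random.default_rng(1)
q, J = rng.uniform(0.05, 1.0, 400), rng.uniform(0.2, 1.2, 400)
E = 0.05 * J**2 + 0.002 * rng.standard_normal(400)
print(fit_bins(q, J, E))
```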
Figure 7: Correlation between the peak energy radiated \(\dot{M}_{max}\) and mass quotient \(q\). Colorbar indicates values of the energy radiated, which are proportional to the peak energy loss. Dashed line shows the model fit to the data.

To end the section on aligned-spin BBHs, and for comparison purposes, we restrict our analysis to \(q=1\) aligned BBHs and employ the Reisswig _et al._ model from Ref. [32]. The authors have shown that the energy radiated to infinity via gravitational waves by equal-mass binaries with aligned spins can be estimated by a quadratic polynomial in the average initial spin \(\bar{\chi}=(\chi_{1}+\chi_{2})/2\), where \(\chi_{1}\) and \(\chi_{2}\) are the projections of the initial spins in the \(L_{in}\) direction. Therefore, the model reads
\[E_{rad}=a_{0}+a_{1}\bar{\chi}+a_{2}\bar{\chi}^{2}. \tag{15}\]
Our fit to the data gives us the following vector of coefficients
\[\vec{a}=\begin{pmatrix}a_{0}\\ a_{1}\\ a_{2}\end{pmatrix}=\begin{pmatrix}0.051\pm 0.001\\ 0.040\pm 0.002\\ 0.029\pm 0.003\end{pmatrix}, \tag{16}\]
whereas the corresponding coefficients in Ref. [32] yield
\[\vec{p}=\begin{pmatrix}p_{0}\\ p_{1}\\ p_{2}\end{pmatrix}=\begin{pmatrix}0.036\pm 0.003\\ 0.030\pm 0.006\\ 0.02\pm 0.01\end{pmatrix} \tag{17}\]
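To make the comparison in Fig. 11 concrete, evaluating eq. (15) with both coefficient sets (16) and (17) gives the gap between the two dashed curves; the short sketch below does exactly that and nothing more.

```python
import numpy as np

a = np.array([0.051, 0.040, 0.029])  # our coefficients, Eq. (16)
p = np.array([0.036, 0.030, 0.020])  # Reisswig et al., Eq. (17)

def e_rad(chi, c):
    # Eq. (15): quadratic model in the average initial spin chi.
    return c[0] + c[1] * chi + c[2] * chi**2

chi = np.linspace(-1.0, 1.0, 5)
print(e_rad(chi, a) - e_rad(chi, p))  # difference between the polynomials
```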
We see in Fig. 11 that there is a notable difference between the polynomials. The difference could be attributed to the fact that the parameters of the simulations used differ significantly, and thus so does the final \(E_{rad}\). In particular, nothing is said about the initial orbital angular momentum \(L_{in}\) in Ref. [32], which can generate different outcomes in the radiated energy for the same values of the initial spins \(\chi_{1}\), \(\chi_{2}\). However, it could also be that the radiated energy predicted for aligned BBHs in the framework of the KQ formalism is slightly higher than the radiated energy obtained using local quantities such as the Christodoulou mass in numerical relativity.

Figure 8: Correlation between final velocity \(V_{f}\) and mass ratio \(q\) for aligned binaries.

Figure 9: Correlation between \(V_{f}\) and \(q\) for aligned binaries. Colorbar shows levels of peak energy loss \(\dot{M}_{max}\). Dashed line indicates the boundary limit of the final velocities (in \(km/s\)).

Figure 10: Correlation between initial total angular momentum \(J_{in}\) and total radiation emitted \(E_{rad}\) for aligned binaries. Selected mass-ratio intervals are illustrated with different colors.

\begin{table}
\begin{tabular}{|c|c|c|c|} \hline \(q\) & \(a_{0}\) & \(a_{1}\) & \(a_{2}\) \\ \hline \(0<q\leq 0.2\) & \(0.037\pm 0.016\) & \(-0.023\pm 0.017\) & \(0.0095\pm 0.0031\) \\ \(0.2<q\leq 0.4\) & \(0.032\pm 0.004\) & \(-0.007\pm 0.005\) & \(0.0138\pm 0.0013\) \\ \(0.4<q\leq 0.6\) & \(0.093\pm 0.009\) & \(-0.102\pm 0.015\) & \(0.0553\pm 0.0065\) \\ \(0.6<q\leq 0.8\) & \(0.12\pm 0.01\) & \(-0.156\pm 0.019\) & \(0.082\pm 0.009\) \\ \(0.8<q\leq 1\) & \(0.149\pm 0.014\) & \(-0.212\pm 0.029\) & \(0.114\pm 0.014\) \\ \hline \end{tabular}
\end{table}
Table 1: Coefficients of the fitted second-degree polynomials in Fig. 10.
The largest error is in the second-order coefficient and is \(\sim 10\%\). We find the maximum radiated energy by evaluating the polynomial at \(\bar{\chi}=1\), obtaining the value \(E_{rad}(1)=12.2\%\). This limit is closer to the \(E_{rad}(1)=11.3\%\) reported in [2] than to the \(E_{rad}(1)=9.9\%\) reported in [32]. The maximum radiated energy is also below the value \(E_{rad}\sim 14\%\) found in Ref. [33] for the head-on collision of two highly boosted EM-NS black holes.
### Precessing spins binaries
We provide in this section a deeper analysis of the parameters for the precessing-spins group. For this group of BBHs, the initial spins \(S_{1}\) and \(S_{2}\) will, in general, neither be aligned with each other nor lie along the \(\mathbf{L}_{in}\) axis. Different behaviour with respect to the groups discussed in Secs. IV.2 and IV.3 is expected, as non-aligned configurations are known to present chaotic behaviour [29].
We first study, as in previous sections, the dependence of the final velocity \(V_{f}\) on the initial mass ratio \(q\). In Fig. 12b one can appreciate that the most notable difference in the behaviour of \(V_{f}\) with respect to the non-spinning or aligned BBHs is that the highest velocities are achieved for equal-mass binaries. By contrast, \(\dot{M}_{max}\) behaves as in the other classes, i.e., systems with \(q=0\) have no energy emission, as expected, and \(q=1\) systems are the most energetic BBHs. The rate of energy loss increases as the mass ratio does.
On the left side, Fig. 12a shows \(E_{rad}\) vs the initial total angular momentum \(J_{in}\). We used the same model as for the aligned and non-spinning classes, derived in Appendix A, i.e., a second-degree polynomial for each range of masses. The coefficients of the polynomials are listed in Table 2. Even though second-degree polynomials may not be the best fit to the data, we prioritize their simplicity for describing the ascending behaviour of each mass range in the precessing group. Note also that the range \(0<q<0.2\) is missing in plot 12a due to insufficient data to provide a representative curve in that interval.
## V Discussion and conclusion
In this work, we have studied the final velocities of the remnant black hole for the non-spinning (NS), aligned (A) and precessing (P) groups. We have shown that the final velocities reach their maximum value at different mass ratios in each group: \(V_{f}\approx 310\ km/s\) for \(q=0.4\) in NS, \(V_{f}=771\ km/s\) for \(q=0.66\) in A, and \(V_{f}=6120.15\ km/s\) for \(q=1\) in P. We have given the final velocities in Bondi time. For comparison purposes, we should divide our final velocities by a \(\sqrt{2}\) factor, as explained in Sec. IV.1. Thus, the maximum final velocity in the precessing group is \(V_{f}=4327.59\ km/s\), which is slightly lower than the final kick reported for this simulation in the Rochester metadata (\(v_{kick}=4625.73\ km/s\)).
We have found \(V_{f}=219.20\ km/s\) at \(q=0.4\) to be the maximum final velocity for the NS group. This value is consistent with the reported maximum value \(V_{f}=175\ km/s\) for the ratio \(q\approx 0.36\) found in Ref. [3]. For the NS group, we employed Fitchett's recoil model to fit the set of NS simulations in the catalog. We obtained a fitting constant \(a=16317.78\ km/s\) with an associated error of \(\sim 1.5\%\). The model can be used for comparison with posterior simulations or to model astrophysical BBHs with negligible rotation. Likewise, we derived a formula to fit the correlation between \(\dot{M}_{max}\) and \(q\) using a phenomenological dimensionless parameter \(b\), which should be greater than 2. The value obtained, \(b=2.1867\), is a reasonable result.
For the A group, we found that the top final velocity is \(V_{f}=545.73\ km/s\), which is consistent with the maximum value in the metadata, \(v_{kick}=500\ km/s\), and of the same order as that found in Ref. [30], \(V_{f}=448\ km/s\). Moreover, our results also show that final velocities are maximal when the spins are anti-aligned. This result has also been reported in other works [30; 31].

\begin{table}
\begin{tabular}{|c|c|c|c|} \hline \(q\) & \(a_{2}\) & \(a_{1}\) & \(a_{0}\) \\ \hline \(0.2<q\leq 0.4\) & \(0.040\pm 0.009\) & \(-0.020\pm 0.014\) & \(0.017\pm 0.005\) \\ \(0.4<q\leq 0.6\) & \(0.070\pm 0.013\) & \(-0.071\pm 0.025\) & \(0.046\pm 0.011\) \\ \(0.6<q\leq 0.8\) & \(0.089\pm 0.024\) & \(-0.097\pm 0.048\) & \(0.059\pm 0.024\) \\ \(0.8<q\leq 1\) & \(0.180\pm 0.038\) & \(-0.271\pm 0.081\) & \(0.143\pm 0.041\) \\ \hline \end{tabular}
\end{table}
Table 2: Coefficients of the second-degree polynomials.

Figure 11: Correlation between the initial total spin \(\bar{\chi}\) and the total radiation emitted \(E_{rad}\). Dashed grey line shows the fit to our data, with its residuals below. Dashed light-blue line shows the Reisswig polynomial for comparison purposes.
We have also found that the highest attainable final speeds come from the EM-P group. According to the Rochester repository, if the remnant black hole is found with speeds higher than \(800\)\(km/s\), then the BBH had precessing spins. Moreover, if the speed of the remnant BH is higher than \(3000\)\(km/s\), then the initial binaries belong to the EM-P subclass.
We have also studied in depth the radiation for each class predicted by Eq. (8). For the NS group, we find that, using the model derived in Appendix A, a quadratic polynomial fits the data notably well, with a coefficient \(b=0.04976\) and an error of \(\sim 1.74\%\). The simplicity of the non-spinning binaries allows one to model the dependence of the final variables on the initial parameters and to obtain formulas with smaller errors than for other classes of BBHs.
In the group of aligned-spin binaries we made a fit for different ranges of mass ratios and found higher error values than in the non-spinning binaries fit. The higher errors could be attributed to the dependence of the coefficients on the spin. Perhaps more complex models should be considered to reduce the coefficient errors. Despite that, the coefficients calculated still provide a simple and useful model of the relation between \(E_{rad}\) and \(J_{in}\). For the case of \(q=1\) aligned binaries we again fit the radiated energy, but use the variable \(\bar{\chi}\) instead of \(J_{in}\). The plot in terms of the variable \(\bar{\chi}\) allows us to compare the predicted radiated energy with other works. We see the fit is consistent, although slightly higher than most reported radiated energies. This could be a consequence of defining Eq. (8) with the Bondi mass. A more thorough analysis of the difference between the radiated energy defined via Eq. (1) and via the Christodoulou mass should be made, and we leave its study for future work.
We found similar results for the precessing group. There are some ranges of mass ratio where the fits have higher errors in the linear coefficients (up to \(\sim 70\%\)), and there is not enough data for fitting low-mass-ratio binaries, i.e., for \(0<q<0.2\).
We now make some general remarks related to this work:
* Reisswig _et al._[32] have shown that the gravitational wave patterns for EM-NS and EM-A binaries are quite similar. Since the numerical evolution of EM-NS binaries is much simpler than that of the EM-A subclass, they argue that one could simply use the NS class to obtain the waveforms for both subgroups. On the other hand, our results show that the final velocities for the EM-NS and EM-A subclasses are completely different. Whereas the former class has vanishing recoil velocities, EM-A cases do not. This is a distinctive difference between the two subclasses and helps to distinguish between them. Thus, even though the waveform templates are virtually the same for these subclasses, the numerical evolution of the EM-A subclass yields valuable information about the binary coalescence.
* Figs. 10 and 12a represent truly global results. The plot of \(E_{rad}\) vs \(J_{in}\) is motivated by the following. Both variables can be obtained at null infinity without knowledge of the BH masses, spins and orbital angular momentum. To obtain \(J_{in}\) we only need the knowledge of the center of mass position and the definition of total angular momentum, whereas \(E_{rad}\) is computed with the use of the Bondi mass-loss equation. This plot shows a clear correlation between \(E_{rad}\) and \(J_{in}\). Using a quadratic fit we obtained an empirical relationship for the coefficients. The results in this work show that the initial total angular momentum \(J_{in}\) is a relevant variable in the parameter space for analyzing the different numerical evolutions. It is fair to ask why there should be a quadratic dependence on the initial total intrinsic angular momentum of the BBH system. The answer lies in the formula for \(E_{rad}\), which depends on quadratic gravitational radiation terms. At the same time, the radiation depends on the total initial angular momentum, which is conserved if we only keep up to quadratic terms in the formula for \(E_{rad}\), since \(\dot{J}\) is quadratic in the shear. Thus, we should expect this dependency in BBH coalescence. Note that the same results for \(E_{rad}\) also apply to \(\dot{M}_{max}\), as a linear relation between them was shown in Fig. 3.

Figure 12: Detailed description of the dependence on the many variables for the precessing binaries.
* The angular momentum loss \(\Delta S\) only depends on the amount of angular momentum radiated away, and it is computed with knowledge of the available radiation data at null infinity. The angular momentum loss is still a main source of concern in our formalism. As previously shown in [24], there is a discrepancy between the predicted intrinsic angular momentum and the values reported in the RIT catalog, which is a subject to be studied in future works. Of course, a physically relevant definition of intrinsic angular momentum is a difficult task, but the one we provide appears to be free from ambiguities. Despite that, we have shown throughout this paper that the equations of motion for the energy, linear momentum, center of mass position and final velocity [eqs. (1) to (3) and (5)] employed in the framework of the KQ formalism are consistent with the general literature.
###### Acknowledgements.
This research has been supported by grants from CONICET and the Agencia Nacional de Promocion de la Investigacion, el Desarrollo Tecnologico y la Innovacion.
|
2305.11252 | Brain-inspired learning in artificial neural networks: a review | Artificial neural networks (ANNs) have emerged as an essential tool in
machine learning, achieving remarkable success across diverse domains,
including image and speech generation, game playing, and robotics. However,
there exist fundamental differences between ANNs' operating mechanisms and
those of the biological brain, particularly concerning learning processes. This
paper presents a comprehensive review of current brain-inspired learning
representations in artificial neural networks. We investigate the integration
of more biologically plausible mechanisms, such as synaptic plasticity, to
enhance these networks' capabilities. Moreover, we delve into the potential
advantages and challenges accompanying this approach. Ultimately, we pinpoint
promising avenues for future research in this rapidly advancing field, which
could bring us closer to understanding the essence of intelligence. | Samuel Schmidgall, Jascha Achterberg, Thomas Miconi, Louis Kirsch, Rojin Ziaei, S. Pardis Hajiseyedrazi, Jason Eshraghian | 2023-05-18T18:34:29Z | http://arxiv.org/abs/2305.11252v1 | # Brain-inspired learning in artificial neural networks: a review
###### Abstract
Artificial neural networks (ANNs) have emerged as an essential tool in machine learning, achieving remarkable success across diverse domains, including image and speech generation, game playing, and robotics. However, there exist fundamental differences between ANNs' operating mechanisms and those of the biological brain, particularly concerning learning processes. This paper presents a comprehensive review of current brain-inspired learning representations in artificial neural networks. We investigate the integration of more biologically plausible mechanisms, such as synaptic plasticity, to enhance these networks' capabilities. Moreover, we delve into the potential advantages and challenges accompanying this approach. Ultimately, we pinpoint promising avenues for future research in this rapidly advancing field, which could bring us closer to understanding the essence of intelligence.
[email protected]
## Introduction
The dynamic interrelationship between memory and learning is a fundamental hallmark of intelligent biological systems. It empowers organisms to not only assimilate new knowledge but also to continuously refine their existing abilities, enabling them to adeptly respond to changing environmental conditions. This adaptive characteristic is relevant on various time scales, encompassing both long-term learning and rapid short-term learning via short-term plasticity mechanisms, highlighting the complexity and adaptability of biological neural systems [1, 2, 3]. The development of artificial systems that draw high-level, hierarchical inspiration from the brain has been a long-standing scientific pursuit spanning several decades. While earlier attempts were met with limited success, the most recent generation of artificial intelligence (AI) algorithms have achieved significant breakthroughs in many challenging tasks. These tasks include, but are not limited to, the generation of images and text from human-provided prompts [4, 5, 6, 7], the control of complex robotic systems [8, 9, 10], and the mastery of strategy games such as Chess and Go [11] and a multimodal amalgamation of these [12].
While ANNs have made significant advancements in various fields, there are still major limitations in their ability to continuously learn and adapt like biological brains [13, 14, 15]. Unlike current models of machine intelligence, animals can learn throughout their entire lifespan, which is essential for stable adaptation to changing environments. This ability, known as lifelong learning, remains a significant challenge for artificial intelligence, which primarily optimizes problems consisting of fixed labeled datasets, causing it to struggle to generalize to new tasks or to retain information across repeated learning iterations [14]. Addressing this challenge is an active area of research, and the potential implications of developing AI with lifelong learning abilities could have far-reaching impacts across multiple domains.
In this paper, we offer a unique review that seeks to identify the mechanisms of the brain that have inspired current artificial intelligence algorithms. To better understand the biological processes underlying natural intelligence, the first section will explore the low-level components that shape neuromodulation, from synaptic plasticity, to the role of local and global dynamics that shape neural activity. This will be related back to ANNs in the third section, where we compare and contrast ANNs with biological neural systems. This will give us a logical basis that seeks to justify why the brain has more to offer AI, beyond the inheritance of current artificial models. Following that, we will delve into algorithms of artificial learning that emulate these processes to improve the capabilities of AI systems. Finally, we will discuss various applications of these AI techniques in real-world scenarios, highlighting their potential impact on fields such as robotics, lifelong learning, and neuromorphic computing. By doing so, we aim to provide a comprehensive understanding of the interplay between learning mechanisms in the biological brain and artificial intelligence, highlighting the potential benefits that can arise from this synergistic relationship. We hope our findings will encourage a new generation of brain-inspired learning algorithms.
## Processes that support learning in the brain
A grand effort in neuroscience aims at identifying the underlying processes of learning in the brain. Several mechanisms have been proposed to explain the biological basis of
learning at varying levels of granularity, from the synapse to population-level activity. However, the vast majority of biologically plausible models of learning are characterized by _plasticity_ that emerges from the interaction between local and global events [16]. Below, we introduce various forms of plasticity and how these processes interact in more detail.
**Synaptic plasticity.** Plasticity in the brain refers to the capacity of experience to modify the function of neural circuits. The plasticity of synapses specifically refers to the modification of the strength of synaptic transmission based on activity, and it is currently the most widely investigated mechanism by which the brain adapts to new information [17, 18]. There are two broader classes of synaptic plasticity: short- and long-term plasticity. Short-term plasticity acts on the scale of tens of milliseconds to minutes and has an important role in short-term adaptation to sensory stimuli and short-lasting memory formation [19]. Long-term plasticity acts on the scale of minutes and beyond, and is thought to be one of the primary processes underlying long-term behavioral changes and memory storage [20].
**Neuromodulation.** In addition to the plasticity of synapses, another important mechanism by which the brain adapts to new information is neuromodulation [3, 21, 22]. Neuromodulation refers to the regulation of neural activity by chemical signaling molecules, often referred to as neurotransmitters or hormones. These signaling molecules can alter the excitability of neural circuits and the strength of synapses, and can have both short- and long-term effects on neural function. Different types of neuromodulation have been identified, including acetylcholine, dopamine, and serotonin, which have been linked to various functions such as attention, learning, and emotion [23]. Neuromodulation has been suggested to play a role in various forms of plasticity, including short- [19] and long-term plasticity [22].
**Metaplasticity.** The ability of neurons to modify both their function and structure based on activity is what characterizes synaptic plasticity. These modifications, which occur at the synapse, must be precisely organized so that changes occur at the right time and in the right quantity. This regulation of plasticity is referred to as _metaplasticity_, or the 'plasticity of synaptic plasticity', and plays a vital role in safeguarding the constantly changing brain from its own saturation [24, 25, 26]. Essentially, metaplasticity alters the ability of synapses to generate plasticity by inducing a change in the physiological state of neurons or synapses. Metaplasticity has been proposed as a fundamental mechanism in memory stability, learning, and the regulation of neural excitability. While similar, metaplasticity can be distinguished from neuromodulation, with metaplastic and neuromodulatory events often overlapping in time during the modification of a synapse.
**Neurogenesis.** The process by which newly formed neurons are integrated into existing neural circuits is referred to as _neurogenesis_. Neurogenesis is most active during embryonic development, but is also known to occur throughout the adult lifetime, particularly in the subventricular zone of the lateral ventricles [27], the amygdala [28], and in the dentate gyrus of the hippocampal formation [29]. In adult mice, neurogenesis has been demonstrated to increase when living in enriched environments versus in standard laboratory conditions [30]. Additionally, many environmental factors such as exercise [31, 32] and stress [33, 34] have been demonstrated to change the rate of neurogenesis in the rodent hippocampus. Overall, while the role of neurogenesis in learning is not fully understood, it is believed to play an important role in supporting learning in the brain.

Figure 1: Graphical depiction of long-term potentiation (LTP) and depression (LTD) at the synapse of biological neurons. \(A\). Synaptically connected pre- and post-synaptic neurons. \(B\). Synaptic terminal, the connection point between neurons. \(C\). Synaptic growth (LTP) and synaptic weakening (LTD). \(D\). _Top_. Membrane potential dynamics in the axon hillock of the neuron. _Bottom_. Pre- and post-synaptic spikes. \(E\). Spike-timing dependent plasticity curve depicting experimental recordings of LTP and LTD.
**Glial cells.** Glial cells, or neuroglia, play a vital role in supporting learning and memory by modulating neurotransmitter signaling at synapses, the small gaps between neurons where neurotransmitters are released and received [35]. Astrocytes, one type of glial cell, can release and reuptake neurotransmitters, as well as metabolize and detoxify them. This helps to regulate the balance and availability of neurotransmitters in the brain, which is essential for normal brain function and learning [36]. Microglia, another type of glial cell, can also modulate neurotransmitter signaling and participate in the repair and regeneration of damaged tissue, which is important for learning and memory [37]. In addition to repair and modulation, structural changes in synaptic strength require the involvement of different types of glial cells, with the most notable influence coming from astrocytes [36]. However, despite their crucial involvement, we have yet to fully understand the role of glial cells. Understanding the mechanisms by which glial cells support learning at synapses is an important area of ongoing research.
## Deep neural networks and plasticity
**Artificial and spiking neural networks.** Artificial neural networks have played a vital role in machine learning over the past several decades. These networks have catalyzed tremendous progress toward solving a variety of challenging problems. Many of the most impressive accomplishments in AI have been realized through the use of large ANNs trained on tremendous amounts of data. While there have been many technical advancements, many of the accomplishments in AI can be explained by innovations in computing technology, such as large-scale GPU accelerators and the accessibility of data. While the application of large-scale ANNs has led to major innovations, many challenges lie ahead. A few of the most pressing practical limitations of ANNs are that they are not efficient in terms of power consumption and that they are not very good at processing dynamic and noisy data. In addition, ANNs are not able to learn beyond their training period (e.g., during deployment), and they assume data in an independent and identically distributed (IID) form without temporal structure, which does not reflect physical reality, where information is highly temporally and spatially correlated. These limitations have led to their application requiring vast amounts of energy when deployed in large-scale settings [38] and have also presented challenges toward integration into edge computing devices, such as robotics and wearable devices [39].
Looking toward neuroscience for a solution, researchers have been exploring spiking neural networks (SNNs) as an alternative to ANNs [40]. SNNs are a class of ANNs that are designed to more closely resemble the behavior of biological neurons. The primary difference between ANNs and SNNs is that SNNs incorporate the notion of timing into their communication. Spiking neurons accumulate information across time from connected (presynaptic) neurons (or via sensory input) in the form of a membrane potential. Once a neuron's membrane potential surpasses a threshold value, it fires a binary "spike" to all of its outgoing (postsynaptic) connections. Spikes have been theoretically demonstrated to contain more information than rate-based representations of information (such as in ANNs), despite being both binary and sparse in time [41]. Additionally, modelling studies have shown advantages of SNNs, such as better energy efficiency, the ability to process noisy and dynamic data, and the potential for more robust and fault-tolerant computing [42]. These benefits are not solely attributed to their increased biological plausibility, but also to the unique properties of spiking neural networks that distinguish them from conventional artificial neural networks. A simple working model of a leaky integrate-and-fire neuron is described below:
\[\tau_{m}\frac{dV}{dt}=E_{L}-V(t)+R_{m}I_{inj}(t)\]
where \(V(t)\) is the membrane potential at time \(t\), \(\tau_{m}\) is the membrane time constant, \(E_{L}\) is the resting potential, \(R_{m}\) is the membrane resistance, \(I_{inj}(t)\) is the injected current, \(V_{th}\) is the threshold potential, and \(V_{reset}\) is the reset potential. When the membrane potential reaches the threshold potential, the neuron spikes and the membrane potential is reset to the reset potential (if \(V(t)\geq V_{\text{th}}\) then \(V(t)\gets V_{\text{reset}}\)).
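To make the dynamics concrete, a minimal forward-Euler simulation of this neuron model is sketched below; the parameter values are illustrative choices, not values prescribed by the text.

```python
import numpy as np

def simulate_lif(I_inj, dt=1e-4, tau_m=20e-3, E_L=-70e-3,
                 R_m=1e8, V_th=-54e-3, V_reset=-80e-3):
    """Forward-Euler integration of the leaky integrate-and-fire neuron."""
    V = np.full(len(I_inj), E_L)
    spikes = np.zeros(len(I_inj), dtype=bool)
    for t in range(1, len(I_inj)):
        dV = (E_L - V[t - 1] + R_m * I_inj[t - 1]) / tau_m
        V[t] = V[t - 1] + dt * dV
        if V[t] >= V_th:       # threshold crossing: spike, then reset
            spikes[t] = True
            V[t] = V_reset
    return V, spikes

# A constant 0.2 nA input for 100 ms yields a regular spike train.
V, spikes = simulate_lif(np.full(1000, 2e-10))
print(spikes.sum(), "spikes")
```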
Despite these potential advantages, SNNs are still in the early stages of development, and there are several challenges that need to be addressed before they can be used more widely. One of the most pressing challenges concerns how to optimize the synaptic weights of these models, as traditional backpropagation-based methods from ANNs fail due to the discrete and sparse nonlinearity. Irrespective of these challenges, there do exist some works that push the boundaries of what was thought possible with modern spiking networks, such as large spike-based transformer models [43]. Spiking models are of great importance for this review since they form the basis of many brain-inspired learning algorithms.
**Hebbian and spike-timing dependent plasticity.** Hebbian learning and spike-timing dependent plasticity (STDP) are two prominent models of synaptic plasticity that play important roles in shaping neural circuitry and behavior. The Hebbian learning rule, first proposed by Donald Hebb in 1949 [44], posits that synapses between neurons are strengthened when they are coactive, such that the activation of one neuron causally leads to the activation of another. STDP, on the other hand, is a more recently proposed model of synaptic plasticity that takes into account the precise timing of pre- and post-synaptic spikes [45] to determine synaptic strengthening or weakening. It is widely believed that STDP plays a key role in the formation and refinement of neural circuits during development and in the ongoing adaptation of circuits in response to experience. In the following subsection, we will provide an overview of the basic principles of Hebbian learning and STDP.
**Hebbian learning.** Hebbian learning is based on the idea that the synaptic strength between two neurons should be increased if they are both active at the same time, and decreased
if they are not. Hebb suggested that this increase should occur when one cell "repeatedly or persistently takes part in firing" another cell (with causal implications). However, this principle is often expressed correlatively, as in the famous aphorism "cells that fire together, wire together" (variously attributed to Siegrid Lowel [46] or Carla Shatz [47]).1 Hebbian learning is often used as an unsupervised learning algorithm, where the goal is to identify patterns in the input data without explicit feedback [48]. An example of this process is the Hopfield network, in which large binary patterns are easily stored in a fully-connected recurrent network by applying a Hebbian rule to the (symmetric) weights [49]. It can also be adapted for use in supervised learning algorithms, where the rule is modified to take into account the desired output of the network. In this case, the Hebbian learning rule is combined with a teaching signal that indicates the correct output for a given input.
Footnote 1: As Hebb himself noted, the general idea has a long history. In their review, Brown and colleagues cite William James: “When two elementary brain-processes have been active together or in immediate succession, one of them, on reoccurring, tends to propagate its excitement into the other.”
A simple Hebbian learning rule can be described mathematically using the equation:
\[\Delta w_{ij}=\eta x_{i}x_{j}\]
where \(\Delta w_{ij}\) is the change in the weight between neuron \(i\) and neuron \(j\), \(\eta\) is the learning rate, and \(x_{i}\), \(x_{j}\) are the "activities" of neurons \(i\) and \(j\), often thought of as the neuron firing rates. This rule states that if the two neurons are activated at the same time, their connection should be strengthened.
One potential drawback of the basic Hebbian rule is its instability. For example, if \(x_{i}\) and \(x_{j}\) are initially weakly positively correlated, this rule will increase the weight between the two, which will in turn reinforce the correlation, leading to even larger weight increases, etc. Thus, some form of stabilization is needed. This can be done simply by bounding the weights, or by more complex rules that take into account additional factors such as the history of the pre- and post-synaptic activity or the influence of other neurons in the network (see ref. [50] for a practical review of many such rules).
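A minimal sketch of one such stabilized update, using hard weight bounds as the stabilization (the shapes and constants are arbitrary illustrative choices), is:

```python
import numpy as np

def hebbian_step(W, x_pre, x_post, eta=1e-3, w_max=1.0):
    """One bounded Hebbian update: dW_ij = eta * x_post_i * x_pre_j.

    Hard bounds [0, w_max] are the simple stabilization mentioned above;
    decay terms (e.g. Oja-style) are a common alternative.
    """
    W += eta * np.outer(x_post, x_pre)  # rows index post-, columns pre-synaptic
    np.clip(W, 0.0, w_max, out=W)
    return W

# Repeated co-activation strengthens only the co-active weights.
W = np.zeros((4, 3))
for _ in range(100):
    W = hebbian_step(W, np.array([1.0, 0.0, 1.0]), np.array([0.5, 1.0, 0.0, 0.2]))
```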
**Three-factor rules: Hebbian reinforcement learning.** By incorporating information about rewards, Hebbian learning can also be used for reinforcement learning. An apparently plausible idea is simply to multiply the Hebbian update by the reward directly, as follows:
\[\Delta w_{ij}=\eta x_{i}x_{j}R\]
with R being the reward (for this time step or for the whole episode). Unfortunately this idea does not produce reliable reinforcement learning. This can be perceived intuitively by noticing that, if \(w_{ij}\) is already at its optimal value, the rule above will still produce a net change and thus drive \(w_{ij}\) away from the optimum.
More formally, as pointed out by Fremaux et al. [53], to properly track the actual covariance between inputs, outputs and rewards, at least one of the terms in the \(x_{i}x_{j}R\) product must be centered, that is, replaced by zero-mean fluctuations around its expected value. One possible solution is to center the rewards, by subtracting a baseline from \(R\), generally equal to the expected value of \(R\) for this trial. While helpful, in practice this solution is generally insufficient.
A more effective solution is to remove the mean value from the _outputs_. This can be done easily by subjecting neural activations \(x_{j}\) to occasional random perturbations \(\Delta x_{j}\), taken from a suitable zero-centered distribution, and then using the perturbation \(\Delta x_{j}\), rather than the raw post-synaptic activation \(x_{j}\), in the three-factor product:
\[\Delta w_{ij}=\eta x_{i}\Delta x_{j}R\]
This is the so-called "node perturbation" rule proposed by Fiete and Seung [54, 55]. Intuitively, notice that the effect of the \(x_{i}\Delta x_{j}\) increment is to push future \(x_{j}\) responses (when encountering the same \(x_{i}\) input) in the direction of the perturbation: larger if the perturbation was positive, smaller if the perturbation was negative. Multiplying this shift by \(R\) results in pushing future responses towards the perturbation if \(R\) was positive, and away from it if \(R\) was negative. Even if \(R\) is not zero-mean, the net effect (in expectation) will still be to drive \(w_{ij}\) towards higher \(R\), though the variance will be higher.
This rule turns out to implement the REINFORCE algorithm (Williams' original paper [56] actually proposes an algorithm which is exactly node-perturbation for spiking stochastic neurons), and thus estimates the theoretical gradient of \(R\) over \(w_{ij}\). It can also be implemented in a biologically plausible manner, allowing recurrent networks to learn non-trivial cognitive or motor tasks from sparse, delayed rewards [57].
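A minimal sketch of node perturbation on a single linear layer is given below. The regression-style reward and all hyperparameters are illustrative assumptions, and the reward of the unperturbed output is subtracted as a baseline, one common variance-reduction choice in the spirit of the reward-centering discussed above.

```python
import numpy as np

rng = np.random.default_rng(1)
n_in, n_out = 4, 2
W = np.zeros((n_out, n_in))
W_star = rng.normal(size=(n_out, n_in))       # hidden mapping defining the reward
eta, sigma = 0.05, 0.2

for step in range(20000):
    x = rng.normal(size=n_in)
    y = W @ x                                     # unperturbed output
    dy = sigma * rng.normal(size=n_out)           # zero-mean output perturbation
    r_pert = -np.sum((y + dy - W_star @ x) ** 2)  # reward of the perturbed behaviour
    r_base = -np.sum((y - W_star @ x) ** 2)       # baseline: reward without perturbation
    # Node perturbation: dw_ji = eta * x_i * dy_j * (centred reward)
    W += eta * (r_pert - r_base) * np.outer(dy, x)

print(np.abs(W - W_star).mean())   # near zero: the rule climbs the reward gradient
```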
**Spike-timing dependent plasticity.** STDP is a theoretical model of synaptic plasticity that allows the strength of connections between neurons to be modified based on the relative timing of their spikes. Unlike the Hebbian learning rule, which relies on the simultaneous activation of pre- and post-synaptic neurons, STDP takes into account the precise timing of the pre- and post-synaptic spikes. Specifically, STDP suggests that if a presynaptic neuron fires just before a postsynaptic neuron, the connection between them should be strengthened. Conversely, if the post-synaptic neuron fires just before the presynaptic neuron, the connection should be weakened.
STDP has been observed in a variety of biological systems, including the neocortex, hippocampus, and cerebellum. The rule has been shown to play a crucial role in the development and plasticity of neural circuits, including learning and memory processes. STDP has also been used as a basis for the development of artificial neural networks, which are designed to mimic the structure and function of the brain.
The mathematical equation for STDP is more complex than the Hebbian learning rule and can vary depending on the specific implementation. However, a common formulation is:
\[\Delta w_{ij}=\begin{cases}A_{+}\exp(-\Delta t/\tau_{+})&\text{if }\Delta t>0\\ -A_{-}\exp(\Delta t/\tau_{-})&\text{if }\Delta t<0\end{cases}\]
where \(\Delta w_{ij}\) is the change in the weight between neuron \(i\) and neuron \(j\), \(\Delta t\) is the time difference between the pre- and post-synaptic spikes, \(A_{+}\) and \(A_{-}\) are the amplitudes of the potentiation and depression, respectively, and \(\tau_{+}\) and \(\tau_{-}\) are the time constants for the potentiation and depression, respectively. This rule states that the strength of the connection between the two neurons will be increased or decreased depending on the timing of their spikes relative to each other.
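The sketch below evaluates this STDP window for a single pre/post spike pair; the amplitudes and time constants are illustrative values only, not taken from any particular experimental fit.

```python
import numpy as np

def stdp_dw(dt, a_plus=0.1, a_minus=0.12, tau_plus=20.0, tau_minus=20.0):
    """Weight change for one spike pair, with dt = t_post - t_pre (in ms).

    dt > 0: post fires after pre  -> potentiation
    dt < 0: post fires before pre -> depression
    """
    if dt > 0:
        return a_plus * np.exp(-dt / tau_plus)
    elif dt < 0:
        return -a_minus * np.exp(dt / tau_minus)
    return 0.0

# The classic asymmetric STDP window:
for dt in [-40.0, -10.0, -1.0, 1.0, 10.0, 40.0]:
    print(f"dt = {dt:+6.1f} ms -> dw = {stdp_dw(dt):+.4f}")
```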
## Processes that support learning in artificial neural networks
There are two primary approaches for weight optimization in artificial neural networks: error-driven global learning and brain-inspired local learning. In the first approach, the network weights are modified by driving a global error to its minimum value. This is achieved by delegating error to each weight and synchronizing modifications between weights. In contrast, brain-inspired local learning algorithms aim to learn in a more biologically plausible manner, by modifying weights from dynamical equations using locally available information. Both optimization approaches have unique benefits and drawbacks. In the following sections we will discuss the most utilized form of error-driven global learning, backpropagation, followed by in-depth discussions of brain-inspired local algorithms. It is worth mentioning that these two approaches are not mutually exclusive, and will often be integrated in order to complement their respective strengths [58, 59, 60, 61].
**Backpropagation.** Backpropagation is a powerful error-driven global learning method which changes the weight of connections between neurons in a neural network to produce a desired target behavior [62]. This is accomplished through the use of a quantitative metric (an objective function) that describes the quality of a behavior given sensory information (e.g. visual input, written text, robotic joint positions). The backpropagation algorithm consists of two phases: the forward pass and the backward pass. In the forward pass, the input is propagated through the network, and the output is calculated. During the backward pass, the error between the predicted output and the "true" output is calculated, and the gradients of the loss function with respect to the weights of the network are calculated by propagating the error backwards through the network. These gradients are then used to update the weights of the network using an optimization algorithm such as stochastic gradient descent. This process is repeated for many iterations until the weights converge to a set of values that minimize the loss function.
Let's take a look at a brief mathematical explanation of backpropagation. First, we define a desired loss function, which is a function of the network's outputs and the true values:
\[L(y,\hat{y})=\frac{1}{2}\sum_{i}(y_{i}-\hat{y}_{i})^{2}\]
where \(y\) is the true output and \(\hat{y}\) is the network's output. In this case we are minimizing the squared error, but could very well optimize for any smooth and differentiable loss function. Next, we use the chain rule to calculate the gradient of the
Fig. 2: **There are strong parallels between artificial and brain-like learning algorithms.** _Left, top:_ graphical depiction of a rodent and a cluster of interconnected neurons. _Left, middle:_ a rodent participating in the Morris water maze task to test its learning capabilities. _Left, bottom:_ a graphical depiction of biological pre- and post-synaptic pyramidal neurons. _Right, top:_ a rodent musculoskeletal physics model with artificial neural network policy and critic heads regulating learning and control (see ref. [58]). _Right, middle:_ a virtual maze environment used for benchmarking learning algorithms (see ref. [58]). _Right, bottom:_ an artificial pre- and post-synaptic neuron with forward propagation equations.
loss with respect to the weights of the network. Let \(w^{l}_{ij}\) be the weight between neuron \(i\) in layer \(l\) and neuron \(j\) in layer \(l+1\), and let \(a^{l}_{i}\) be the activation of neuron \(i\) in layer \(l\). Then, the gradients of the loss with respect to the weights are given by:
\[\frac{\partial L}{\partial w^{l}_{ij}}=\frac{\partial L}{\partial a^{l+1}_{j}} \frac{\partial a^{l+1}_{j}}{\partial z^{l+1}_{j}}\frac{\partial z^{l+1}_{j}}{ \partial w^{l}_{ij}}\]
where \(z^{l+1}_{j}\) is the weighted sum of the inputs to neuron \(j\) in layer \(l+1\). We can then use these gradients to update the weights of the network using gradient descent:
\[w^{l}_{ij}=w^{l}_{ij}-\alpha\frac{\partial L}{\partial w^{l}_{ij}}\]
where \(\alpha\) is the learning rate. By repeatedly calculating the gradients and updating the weights, the network gradually learns to minimize the loss function and make more accurate predictions. In practice, gradient descent methods are often combined with approaches to incorporate momentum in the gradient estimate, which has been shown to significantly improve generalization [63].
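The following self-contained sketch applies exactly these equations to a tiny two-layer sigmoid network; the architecture, single data point, and learning rate are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(2)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

W1 = rng.normal(scale=0.5, size=(3, 2))   # w^1_{ij}: 2 inputs -> 3 hidden units
W2 = rng.normal(scale=0.5, size=(1, 3))   # w^2_{ij}: 3 hidden -> 1 output
x = np.array([0.5, -1.0])
y = np.array([0.8])                       # true output
alpha = 0.5                               # learning rate

for step in range(200):
    # Forward pass: z is the weighted input, a the activation of each layer.
    z1 = W1 @ x;  a1 = sigmoid(z1)
    z2 = W2 @ a1; a2 = sigmoid(z2)
    # Backward pass (chain rule): delta^l = dL/dz^l at each layer.
    delta2 = (a2 - y) * a2 * (1 - a2)           # dL/da * da/dz at the output
    delta1 = (W2.T @ delta2) * a1 * (1 - a1)    # error propagated to the hidden layer
    # dL/dw^l_{ij} = delta^{l+1}_j * a^l_i, followed by a gradient-descent step.
    W2 -= alpha * np.outer(delta2, a1)
    W1 -= alpha * np.outer(delta1, x)

print(float(a2[0]))   # converges towards the target 0.8
```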
The impressive accomplishments of backpropagation have led neuroscientists to investigate whether it can provide a better understanding of learning in the brain. While it remains debated as to whether backpropagation variants could occur in the brain [64, 65], it is clear that backpropagation in its current formulation is biologically implausible. Alternative theories suggest complex feedback circuits or the interaction of local activity and top-down signals (a "third-factor") could support a similar form of backprop-like learning [64].
Despite its impressive performance there are still fundamental algorithmic challenges that follow from repeatedly applying backpropagation to network weights. One such challenge is a phenomenon known as catastrophic forgetting, where a neural network forgets previously learned information when training on new data [13]. This can occur when the network is fine-tuned on new data or when the network is trained on a sequence of tasks without retaining the knowledge learned from previous tasks. Catastrophic forgetting is a significant hurdle for developing neural networks that can continuously learn from diverse and changing environments. Another challenge is that backpropagation requires backpropagating information through all the layers of the network, which can be computationally expensive and time-consuming, especially for very deep networks. This can limit the scalability of deep learning algorithms and make it difficult to train large models on limited computing resources. Nonetheless, backpropagation has remained the most widely used and successful algorithm for applications involving artificial neural networks.
### Evolutionary and genetic algorithms
Another class of global learning algorithms that has gained significant attention in recent years are evolutionary and genetic algorithms. These algorithms are inspired by the process of natural selection and, in the context of ANNs, aim to optimize the weights of a neural network by mimicking the evolutionary process.
In _genetic algorithms_[66], a population of neural networks is initialized with random weights, and each network is evaluated on a specific task or problem. The networks that perform better on the task are then selected for reproduction, whereby they produce offspring with slight variations in their weights. This process is repeated over several generations, with the best-performing networks being used for reproduction, making their behavior more likely across generations. _Evolutionary algorithms_ operate similarly to genetic algorithms but use a different approach by approximating a stochastic gradient [67, 68]. This is accomplished by perturbing the weights and combining the network objective function performances to update the parameters. This results in a more global search of the weight space that can be more efficient at finding optimal solutions compared to local search methods like backpropagation [69].
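A minimal evolution-strategy sketch in the stochastic-gradient flavour just described is shown below; the quadratic black-box objective and the hyperparameters are illustrative assumptions, not a reimplementation of any cited method.

```python
import numpy as np

rng = np.random.default_rng(3)

def fitness(theta):
    """Black-box objective to maximise (no gradients needed)."""
    return -np.sum((theta - np.array([1.0, -2.0, 0.5])) ** 2)

theta = np.zeros(3)
pop, sigma, alpha = 50, 0.1, 0.05

for gen in range(300):
    eps = rng.normal(size=(pop, theta.size))       # population of weight perturbations
    rewards = np.array([fitness(theta + sigma * e) for e in eps])
    rewards -= rewards.mean()                      # centre rewards (baseline)
    grad_est = eps.T @ rewards / (pop * sigma)     # stochastic gradient estimate
    theta += alpha * grad_est

print(theta)   # close to the hidden optimum [1.0, -2.0, 0.5]
```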
One advantage of these algorithms is their ability to search a vast parameter space efficiently, making them suitable for problems with large numbers of parameters or complex search spaces. Additionally, they do not require a differentiable objective function, which can be useful in scenarios where the objective function is difficult to define or calculate (e.g. spiking neural networks). However, these algorithms also have some drawbacks. One major limitation is the high computational cost required to evaluate and evolve a large population of networks. Another challenge is the potential for the algorithm to become stuck in local optima or to converge too quickly, resulting in suboptimal solutions. Additionally, the use of random mutations can lead to instability and unpredictability in the learning process.
Regardless, evolutionary and genetic algorithms have shown promising results in various applications, particularly when optimizing non-differentiable and non-trivial parameter spaces. Ongoing research is focused on improving the efficiency and scalability of these algorithms, as well as discovering where and when it makes sense to use these approaches instead of gradient descent.
## Brain-inspired representations of learning in artificial neural networks
### Local learning algorithms
Unlike global learning algorithms such as backpropagation, which require information to be propagated through the entire network, local learning algorithms focus on updating synaptic weights based on local information from nearby or synaptically connected neurons. These approaches are often strongly inspired by the plasticity of biological synapses. As will be seen, by leveraging local learning algorithms, ANNs can learn more efficiently and adapt to changing input distributions, making them better suited for real-world applications. In this section, we will review recent advances in brain-inspired local learning algorithms and their potential for improving the performance and robustness of ANNs.
### Backpropagation-derived local learning.
Backpropagation-derived local learning algorithms are a class of local learning algorithms that attempt to emulate
the mathematical properties of backpropagation. Unlike the traditional backpropagation algorithm, which involves propagating error signals back through the entire network, backpropagation-derived local learning algorithms update synaptic weights based on local error gradients computed using backpropagation. This approach is computationally efficient and allows for online learning, making it suitable for applications where training data is continually arriving.
One prominent example of backpropagation-derived local learning algorithms is the Feedback Alignment (FA) algorithm [70, 71], which replaces the weight transport matrix used in backpropagation with a fixed random matrix, allowing the error signal to propagate from direct connections thus avoiding the need for backpropagating error signals. A brief mathematical description of feedback alignment is as follows: let \(w^{out}\) be the weight matrix connecting the last layer of the network to the output, and \(w^{in}\) be the weight matrix connecting the input to the first layer. In Feedback Alignment, the error signal is propagated from the output to the input using the fixed random matrix \(B\), rather than the transpose of \(w^{out}\). The weight updates are then computed using the product of the input and the error signal, \(\Delta w^{in}=-\eta xz\) where \(x\) is the input, \(\eta\) is the learning rate, and \(z\) is the error signal propagated backwards through the network, similar to traditional backpropagation.
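A minimal sketch of feedback alignment on a toy two-layer network follows; the only change from standard backpropagation is that the backward pass uses a fixed random matrix \(B\) in place of \(W^{T}\). The architecture, data point, and hyperparameters are again illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

W1 = rng.normal(scale=0.5, size=(3, 2))
W2 = rng.normal(scale=0.5, size=(1, 3))
B = rng.normal(scale=0.5, size=(3, 1))    # fixed random feedback matrix, never trained
x = np.array([0.5, -1.0])
y = np.array([0.8])
alpha = 0.5

for step in range(500):
    a1 = sigmoid(W1 @ x)
    a2 = sigmoid(W2 @ a1)
    delta2 = (a2 - y) * a2 * (1 - a2)
    delta1 = (B @ delta2) * a1 * (1 - a1)  # feedback alignment: B replaces W2.T
    W2 -= alpha * np.outer(delta2, a1)
    W1 -= alpha * np.outer(delta1, x)

print(float(a2[0]))   # still approaches 0.8 despite the random feedback pathway
```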
Direct Feedback Alignment [71] (DFA) simplifies the weight transport chain compared with FA by directly connecting the output layer error to each hidden layer. The Sign-Symmetry (SS) algorithm is similar to FA except the feedback weights symmetrically share signs. While FA has exhibited impressive results on small datasets like MNIST and CIFAR, its performance on larger datasets such as ImageNet is often suboptimal [72]. On the other hand, recent studies have shown that the SS algorithm is capable of achieving comparable performance to backpropagation, even on large-scale datasets [73].
Eligibility propagation [59, 74] (e-prop) extends the idea of feedback alignment to spiking neural networks, combining the advantages of both traditional error backpropagation and biologically plausible learning rules, such as spike-timing-dependent plasticity (STDP). For each synapse, the e-prop algorithm computes and maintains an eligibility trace \(e_{ji}(t)=\frac{dz_{j}(t)}{dW_{ji}}\). Eligibility traces measure the total contribution of this synapse to the neuron's current output, taking into account all past inputs [3]. This can be computed and updated in a purely forward manner, without backward passes. This eligibility trace is then multiplied by an estimate of the gradient of the error over the neuron's output \(L_{j}(t)=\frac{dE(t)}{dz_{j}(t)}\) to obtain the actual weight gradient \(\frac{dE(t)}{dW_{ji}}\). \(L_{j}(t)\) itself is computed from the error at the output neurons, either by using symmetric feedback weights or by using fixed feedback weights, as in feedback alignment. A possible drawback of e-prop is that it requires a real-time error signal \(L_{j}(t)\) at each point in time, since it only takes into account past events and is blind to future errors. In particular, it cannot learn from delayed error signals that extend beyond the time scales of individual neurons (including short-term adaptation) [59], in contrast with methods like REINFORCE and node-perturbation.
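The sketch below illustrates this forward-only bookkeeping on a linear leaky integrator, for which the eligibility-trace gradient happens to be exact. Real e-prop handles spiking neurons and approximate learning signals, so this is a simplified illustration with made-up dimensions and targets, not the algorithm of refs. [59, 74] themselves.

```python
import numpy as np

rng = np.random.default_rng(5)
n_in, n_rec, T = 3, 2, 50
W = rng.normal(scale=0.3, size=(n_rec, n_in))
lam, eta = 0.9, 0.001                     # membrane leak and learning rate
x = rng.normal(size=(T, n_in))            # input sequence
target = rng.random((T, n_rec))           # per-step targets, defining E(t)

losses = []
for epoch in range(500):
    z = np.zeros(n_rec)
    e = np.zeros((n_rec, n_in))           # eligibility traces e_ji(t) = dz_j/dW_ji
    dW, loss = np.zeros_like(W), 0.0
    for t in range(T):
        z = lam * z + W @ x[t]            # leaky integrator (no spike/reset, for brevity)
        e = lam * e + x[t][None, :]       # forward-in-time eligibility update
        L = z - target[t]                 # online learning signal L_j(t) = dE(t)/dz_j(t)
        dW += L[:, None] * e              # three-factor product; no backward pass needed
        loss += 0.5 * np.sum(L ** 2)
    W -= eta * dW
    losses.append(loss)

print(losses[0], losses[-1])              # summed error falls to its least-squares floor
```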
In the work of ref. [75, 76] a normative theory for synaptic learning based on recent genetic findings [77] of neuronal signaling architectures is demonstrated. They propose that neurons communicate their contribution to the learning outcome to nearby neurons via cell-type-specific local neuromodulation, and that neuron-type diversity and neuron-type-specific local neuromodulation may be critical pieces of the biological credit-assignment puzzle. In this work, the authors instantiate a simplified computational model based on eligibility propagation to explore this theory and show that their model, which includes both dopamine-like temporal difference and neuropeptide-like local modulatory signaling, leads to improvements over previous methods such as e-prop and feedback alignment.
**Generalization properties.** Techniques in deep learning have made tremendous strides toward understanding the generalization of their learning algorithms. A particularly useful discovery was that flat minima tend to lead to better generalization [78]. What is meant by this is that, given a perturbation \(\epsilon\) in the parameter space (synaptic weight values), more significant performance degradation is observed around _narrower_ minima. Learning algorithms that find _flatter_ minima in parameter space ultimately lead to better generalization.
Recent work has explored the generalization properties exhibited by (brain-inspired) backpropagation-derived local learning rules [79]. Compared with backpropagation through time, backpropagation-derived local learning rules exhibit worse and more variable generalization which does not improve by scaling the step size due to the gradient approximation being poorly aligned with the true gradient. While it is perhaps unsurprising that _local approximations_ of an optimization process are going to have worse generalization properties than their complete counterpart, this work opens the door toward asking new questions about what the best approach toward designing brain-inspired learning algorithms is. It also opens the question as to whether backpropagation-derived local learning rules are even worth exploring given that they are fundamentally going to exhibit _sub-par_ generalization.
In conclusion, while backpropagation-derived local learning rules present themselves as a promising approach to designing brain-inspired learning algorithms, they come with limitations that must be addressed. The poor generalization of these algorithms highlights the need for further research to improve their performance and to explore alternative brain-inspired learning rules.
**Meta-optimized plasticity rules.** Meta-optimized plasticity rules offer an effective balance between error-driven global learning and brain-inspired local learning. Meta-learning can be defined as the automation of the search for learning algorithms themselves, where, instead of relying on human engineering to describe a learning algorithm, a search process to find that algorithm is employed [80]. The idea of meta-learning naturally extends to brain-inspired learning algorithms, such that the brain-inspired mechanism of learning itself can be optimized, thereby allowing for _discovery_ of more efficient learning without manual tuning of the rule. In the following section, we discuss various aspects of this research starting with differentiably optimized synaptic plasticity rules.
### Differentiable plasticity
One instantiation of this principle in the literature is _differentiable plasticity_, which is a framework that focuses on optimizing synaptic plasticity rules in neural networks through gradient descent [81, 82]. In this framework, plasticity rules are described in such a way that the parameters governing their dynamics are differentiable, allowing backpropagation to be used for meta-optimization of the plasticity rule parameters (e.g. the \(\eta\) term in the simple Hebbian rule or the \(A_{+}\) term in the STDP rule). This allows the weight dynamics to precisely solve a task that requires the weights to be optimized during execution time, referred to as intra-lifetime learning.
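As a toy illustration of this idea, the sketch below meta-optimizes a single plasticity coefficient so that a Hebbian trace written during an episode supports one-shot recall. For brevity the meta-gradient is estimated by central finite differences rather than by backpropagating through the episode, and the task itself is an invented example, not one from refs. [81, 82].

```python
import numpy as np

rng = np.random.default_rng(6)
n = 5   # pattern size

def episode_loss(alpha, seed):
    """One inner lifetime: a Hebbian trace stores a random pattern after a single
    presentation, and the plastic weights alpha * hebb must recall it from a cue."""
    r = np.random.default_rng(seed)
    p = r.choice([-1.0, 1.0], size=n)       # pattern presented during the episode
    hebb = np.outer(p, p)                   # Hebbian trace written in one shot
    cue = p * (r.random(n) < 0.8)           # partially masked version of the pattern
    recall = np.tanh(alpha * hebb @ cue)    # read-out through the plastic weights
    return np.mean((recall - p) ** 2)

# Meta-optimisation of the plasticity coefficient alpha. Differentiable-plasticity
# methods backpropagate through the episode; finite differences stand in for that
# here purely to keep the sketch dependency-free.
alpha, meta_lr, h = 0.01, 0.2, 1e-3
for step in range(50):
    seeds = rng.integers(0, 2**31, size=16)
    g = np.mean([(episode_loss(alpha + h, s) - episode_loss(alpha - h, s)) / (2 * h)
                 for s in seeds])
    alpha -= meta_lr * g

print(alpha)   # grows from 0.01 to a value where one-shot recall succeeds
```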
Differentiable plasticity rules are also capable of the differentiable optimization of neuromodulatory dynamics [60, 82]. This framework includes two main variants of neuromodulation: global neuromodulation, where the direction and magnitude of weight changes are controlled by a network-output-dependent global parameter, and retroactive neuromodulation, where the effect of past activity is modulated by a dopamine-like signal within a short time window. This is enabled by the use of eligibility traces, which are used to keep track of which synapses contributed to recent activity, and the dopamine signal modulates the transformation of these traces into actual plastic changes.
Methods involving differentiable plasticity have seen improvements in a wide range of applications from sequential associative tasks [83], familiarity detection [84], and robotic noise adaptation [60]. This method has also been used to optimize short-term plasticity rules [84, 85] which exhibit improved performance in reinforcement and temporal supervised learning problems. While these methods show much promise, differentiable plasticity approaches take a tremendous amount of memory, as backpropagation is used to optimize multiple parameters _for each synapse_ through time. Practical advancements with these methods will likely require parameter sharing [86] or a more memory-efficient form of backpropagation [87].
### Plasticity with spiking neurons
Recent advances in backpropagating through the non-differentiable part of spiking neurons with surrogate gradients have allowed for differentiable plasticity to be used to optimize plasticity rules in spiking neural networks [60]. In ref. [61] the capability of this optimization paradigm is demonstrated through the use of a differentiable spike-timing dependent plasticity rule to enable "learning to learn" on an online one-shot continual learning problem and on an online one-shot image class recognition problem. A similar method was used to optimize the third-factor signal using the gradient approximation of e-prop as the plasticity rule, introducing a meta-optimization form of e-prop [88]. Recurrent neural networks tuned by evolution can also be used for meta-optimized learning rules. Evolvable Neural Units [89] (ENUs) introduce a gating structure that controls how the input is processed, stored, and dynamic parameters are updated. This work demonstrates the evolution of individual somatic and synaptic compartment models of neurons and shows that a network of ENUs can learn to solve a T-maze environment task, independently discovering spiking dynamics and reinforcement-type learning rules.
### Plasticity in RNNs and Transformers
Independent of research aiming at learning plasticity using update rules, Transformers have recently been shown to be good intra-lifetime learners [90, 91, 9]. The process of in-context learning works not through the update of synaptic weights but purely within the network activations. Like in Transformers, this process can also happen in recurrent neural networks [92]. While in-context learning appears to be a different mechanism from synaptic plasticity, these processes have been demonstrated to exhibit a strong relationship. One exciting connection discussed in the literature is the realization that parameter-sharing of the meta-learner often leads to the _interpretation of activations as weights_[93]. This demonstrates that, while these models may have fixed weights, they exhibit some of the same learning capabilities as models with plastic weights. Another connection is that self-attention in the Transformer involves outer and inner products that can be cast as learned weight updates [94] that can even implement gradient descent [95, 96].
### Evolutionary and genetic meta-optimization
Much like differentiable plasticity, evolutionary and genetic algorithms have been used to optimize the parameters of plasticity rules in a variety of applications [97], including adaptation to limb damage on robotic systems [98, 99]. Recent work has also enabled the optimization of both plasticity coefficients and plasticity rule _equations_ through the use of Cartesian genetic programming [100], presenting an automated approach for discovering biologically plausible plasticity rules based on the specific task being solved. In these methods, the genetic or evolutionary optimization process acts similarly to the differentiable process such that it optimizes the plasticity parameters in an outer-loop process, while the plasticity rule optimizes the reward in an inner-loop process. These methods are appealing since they have a much lower memory footprint compared to differentiable methods since they do not require backpropagating errors through time. However, while memory efficient, they often require a tremendous amount of data to get comparable performance to gradient-based methods [101].
### Self-referential meta-learning
While synaptic plasticity has two-levels of learning, the meta-learner, and the discovered learning rule, self-referential meta-learning [102, 103] extends this hierarchy. In plasticity approaches only a subset of the network parameters are updated (e.g. the synaptic weights), whereas the meta-learned update rule remains fixed after meta-optimization. Self-referential architectures
enable a neural network to modify all of its parameters in recursive fashion. Thus, the learner can also modify the meta-learner. This in principle allows arbitrary levels of learning, meta-learning, meta-meta-learning, etc. Some approaches meta-learn the parameter initialization of such a system [102, 104]. Finding this initialization still requires a hard-wired meta-learner. In other works the network self-modifies in a way that eliminates even this meta-learner [103, 105]. Sometimes the learning rule to be discovered has structural search space restrictions which simplify self-improvement, where a gradient-based optimizer can discover itself [106] or an evolutionary algorithm can optimize itself [107]. Despite their differences, both synaptic plasticity, as well as self-referential approaches, aim to achieve self-improvement and adaptation in neural networks.
### Generalization of meta-optimized learning rules
The extent to which discovered learning rules generalize to a wide range of tasks is a significant open question; in particular, when should they replace manually derived general-purpose learning rules such as backpropagation? A particular observation that poses a challenge to these methods is that when the search space is large and few restrictions are put on the learning mechanism [92, 108, 109], generalization is shown to become more difficult. However, toward amending this, in variable-shared meta-learning [93] flexible learning rules were parameterized by parameter-shared recurrent neural networks that locally exchange information to implement learning algorithms that generalize across classification problems not seen during meta-optimization. Similar results have also been shown for the discovery of reinforcement learning algorithms [110].
## Applications of brain-inspired learning
### Neuromorphic Computing
Neuromorphic computing represents a paradigm shift in the design of computing systems, with the goal of creating hardware that mimics the structure and functionality of the biological brain [111, 42, 112]. This approach seeks to develop artificial neural networks that not only replicate the brain's learning capabilities but also its energy efficiency and inherent parallelism. Neuromorphic computing systems often incorporate specialized hardware, such as neuromorphic chips or memristive devices, to enable the efficient execution of brain-inspired learning algorithms [112]. These systems have the potential to drastically improve the performance of machine learning applications, particularly in edge computing and real-time processing scenarios.
A key aspect of neuromorphic computing lies in the development of specialized hardware architectures that facilitate the implementation of spiking neural networks, which more closely resemble the information processing mechanisms of biological neurons. Neuromorphic systems operate based on the principle of brain-inspired local learning, which allows them to achieve high energy efficiency, low-latency processing, and robustness against noise, which are critical for real-world applications [113]. The integration of brain-inspired learning techniques with neuromorphic hardware is vital for the successful application of this technology.
In recent years, advances in neuromorphic computing have led to the development of various platforms, such as Intel's Loihi [114], IBM's TrueNorth [115], and SpiNNaker [116], which offer specialized hardware architectures for implementing SNNs and brain-inspired learning algorithms. These platforms provide a foundation for further exploration of neuromorphic computing systems, enabling researchers to design, simulate, and evaluate novel neural network architectures and learning rules. As neuromorphic computing continues to progress, it is expected to play a pivotal role in the future of artificial intelligence, driving innovation and enabling the development of more efficient, versatile, and biologically plausible learning systems.
### Robotic learning
Brain-inspired learning in neural networks has the potential to overcome many of the current challenges present in the field of robotics by enabling robots
Figure 3: A feedforward neural network computes an output given an input by propagating the input information downstream. The precise value of the output is determined by the weights of the synaptic connections. To improve the output for a task given an input, the synaptic weights are modified. _Synaptic Plasticity_ algorithms represent computational models that emulate the brain's ability to strengthen or weaken synapses (connections between neurons) based on their activity, thereby facilitating learning and memory formation. _Three-Factor Plasticity_ refers to a model of synaptic plasticity in which changes to the strength of neural connections are determined by three factors: pre-synaptic activity, post-synaptic activity, and a neuromodulatory signal, facilitating more nuanced and adaptive learning processes. The _Feedback Alignment_ algorithm is a learning technique in which artificial neural networks are trained using random, fixed feedback connections rather than symmetric weight matrices, demonstrating that successful learning can occur without precise backpropagation. _Backpropagation_ is a fundamental algorithm in machine learning and artificial intelligence, used to train neural networks by calculating the gradient of the loss function with respect to the weights in the network.
to learn and adapt to their environment in a more flexible way [117, 118]. Traditional robotics systems rely on preprogrammed behaviors, which are limited in their ability to adapt to changing conditions. In contrast, as we have shown in this review, neural networks can be trained to adapt to new situations by adjusting their internal parameters based on the data they receive.
Because of their natural relationship to robotics, brain-inspired learning algorithms have a long history in robotics [117]. Toward this, synaptic plasticity rules have been introduced for adapting robotic behavior to domain shifts such as motor gains and rough terrain [119, 120, 60, 121] as well as for obstacle avoidance [122, 123, 124] and articulated (arm) control [125, 126]. Brain-inspired learning rules have also been used to explore how learning occurs in the insect brain using robotic systems as an embodied medium [127, 128, 129, 130].
Deep reinforcement learning (DRL) represents a significant success of brain-inspired learning algorithms, combining the strengths of neural networks with the theory of reinforcement learning in the brain to create autonomous agents capable of learning complex behaviors through interaction with their environment [131, 132, 133]. By utilizing a reward-driven learning process emulating the activity of dopamine neurons [134], as opposed to the minimization of, e.g., a classification or regression error, DRL algorithms guide robots toward learning optimal strategies to achieve their goals, even in highly dynamic and uncertain environments [135, 136]. This powerful approach has been demonstrated in a variety of robotic applications, including dexterous manipulation, robotic locomotion [137], and multi-agent coordination [138].
**Lifelong and online learning.** Lifelong and online learning are essential applications of brain-inspired learning in artificial intelligence, as they enable systems to adapt to changing environments and continuously acquire new skills and knowledge [14]. Traditional machine learning approaches, in contrast, are typically trained on a fixed dataset and lack the ability to adapt to new information or changing environments. The mature brain is an incredible medium for lifelong learning, as it is constantly learning while remaining relatively fixed in size across the span of a lifetime [139]. As this review has demonstrated, neural networks endowed with brain-inspired learning mechanisms, similar to the brain, can be trained to learn and adapt continuously, improving their performance over time.
The development of brain-inspired learning algorithms that enable artificial systems to exhibit this capability has the potential to significantly enhance their performance and capabilities and has wide-ranging implications for a variety of applications. These applications are particularly useful in situations where data is scarce or expensive to collect, such as in robotics [140] or autonomous systems [141], as it allows the system to learn and adapt in real-time rather than requiring large amounts of data to be collected and processed before learning can occur.
One of the primary objectives in the field of lifelong learning is to alleviate a major issue associated with the continuous application of backpropagation on ANNs, a phenomenon known as catastrophic forgetting [13]. Catastrophic forgetting refers to the tendency of an ANN to abruptly forget previously learned information upon learning new data. This happens because the weights in the network that were initially optimized for earlier tasks are drastically altered to accommodate the new learning, thereby erasing or overwriting the previous information. This is because the backpropagation algorithm does not inherently factor in the need to preserve previously acquired information while facilitating new learning. Solving this problem has remained a significant hurdle in AI for decades. We posit that by employing brain-inspired learning algorithms, which emulate the dynamic learning mechanisms of the brain, we may be able to capitalize on the proficient problem-solving strategies inherent to biological organisms.
**Toward understanding the brain.** The worlds of artificial intelligence and neuroscience have been greatly benefiting from each other. Deep neural networks, specially tailored for certain tasks, show striking similarities to the human brain in how they handle spatial [142, 143, 144] and visual [145, 146, 147] information. This overlap hints at the potential of artificial neural networks (ANNs) as useful models in our efforts to better understand the brain's complex mechanics. A new movement referred to as _the neuroconnectionist research programme_ [148] embodies this combined approach, using ANNs as a computational language to form and test ideas about how the brain computes. This perspective brings together different research efforts, offering a common computational framework and tools to test specific theories about the brain.
While this review highlights a range of algorithms that imitate the brain's functions, we still have a substantial amount of work to do to fully grasp how learning actually happens in the brain. The use of backpropagation, and backpropagation-like local learning rules, to train large neural networks may provide a good starting point for modelling brain function. Much productive investigation has occurred to see what processes in the brain may operate similarly to backpropagation [64], leading to new perspectives and theories in neuroscience. Even though backpropagation in its current form might not occur in the brain, the idea that the brain might develop similar internal representations to ANNs despite such different mechanisms of learning is an exciting open question that may lead to a deeper understanding of the brain and of AI.
Explorations are now extending beyond static network dynamics to networks which unfold as a function of time, much like the brain. As we further develop algorithms in continual and lifelong learning, it may become clear that our models need to reflect the learning mechanisms observed in nature more closely. This shift in focus calls for the integration of local learning rules--those that mirror the brain's own methods--into ANNs.
We are convinced that adopting more biologically authentic learning rules within ANNs will not only yield the aforementioned benefits, but will also serve to point neuroscience researchers in the right direction. In other words, it's a strategy with a two-fold benefit: not only does it promise to invigorate
innovation in engineering, but it also brings us closer to unravelling the intricate processes at play within the brain. With more realistic models, we can probe deeper into the complexities of brain computation from the novel perspective of artificial intelligence.
## Conclusion
In this review, we investigated the integration of more biologically plausible learning mechanisms into ANNs. This further integration presents itself as an important step for both neuroscience and artificial intelligence. This is particularly relevant amidst the tremendous progress that has been made in artificial intelligence with large language models and embedded systems, which are in critical need of more energy-efficient approaches to learning and execution. Additionally, while ANNs are making great strides in these applications, there are still major limitations in their ability to adapt like biological brains, which we see as a primary application of brain-inspired learning mechanisms.
As we strategize for future collaboration between neuroscience and AI toward more detailed brain-inspired learning algorithms, it's important to acknowledge that the past influences of neuroscience on AI have seldom been about a straightforward application of ready-made solutions to machines [149]. More often, neuroscience has stimulated AI researchers by posing intriguing algorithmic-level questions about aspects of animal learning and intelligence. It has provided preliminary guidance towards vital mechanisms that support learning. Our perspective is that by harnessing the insights drawn from neuroscience, we can significantly accelerate advancements in the learning mechanisms used in ANNs. Likewise, experiments using brain-like learning algorithms in AI can accelerate our understanding of neuroscience.
## Acknowledgements
We thank the OpenBioML collaborative workspace, through which several of the authors of this work were connected. This material is based upon work supported by the National Science Foundation Graduate Research Fellowship under Grant No. DGE2139757.
|
2304.06407 | Graph-theoretic insights on the constructability of complex entangled
states | The most efficient automated way to construct a large class of quantum
photonic experiments is via abstract representation of graphs with certain
properties. While new directions were explored using Artificial intelligence
and SAT solvers to find such graphs, it becomes computationally infeasible to
do so as the size of the graph increases. So, we take an analytical approach
and introduce the technique of local sparsification on experiment graphs, using
which we answer a crucial open question in experimental quantum optics, namely
whether certain complex entangled quantum states can be constructed. This
provides us with more insights into quantum resource theory, the limitation of
specific quantum photonic systems and initiates the use of graph-theoretic
techniques for designing quantum physics experiments. | L. Sunil Chandran, Rishikesh Gajjala | 2023-04-13T11:13:17Z | http://arxiv.org/abs/2304.06407v3 | # Graph-theoretic insights on the constructability of complex entangled states
###### Abstract
The most efficient automated way to construct a large class of quantum photonic experiments is via abstract representation of graphs with certain properties. While new directions were explored using Artificial intelligence and SAT solvers to find such graphs, it becomes computationally infeasible to do so as the size of the graph increases. So, we take an analytical approach and introduce the technique of local sparsification on experiment graphs, using which we answer a crucial open question in experimental quantum optics, namely whether certain complex entangled quantum states can be constructed. This provides us with more insights into quantum resource theory, the limitation of specific quantum photonic systems and initiates the use of graph-theoretic techniques for designing quantum physics experiments.
Keywords: Perfect matchings, Graph colourings, Quantum entanglement
## 1 Introduction
Recent years have seen dramatic advances in quantum optical technology [16, 17] as photons are at the core of many quantum technologies and in the experimental investigation of fundamental questions about our universe's local and realistic nature. Due to the peculiar behaviour of multi-particle interference, designing experimental setups to generate multi-partite entanglement in photonics is challenging. The most efficient automated way to construct a large class of such quantum photonic experiments is via abstract representation of graphs with certain properties [6, 10, 7] allowing new possibilities for quantum technology [13, 5, 1]. The construction of such graphs has been a challenging mathematical open problem as one needs to carefully tune the edge weights to satisfy exponentially many equations [9].
Recently, new directions were explored using Artificial intelligence and SAT solvers to find such graphs, which could be used to design quantum photonic experiments [11, 15, 3]. However, using these methods, it is computationally infeasible to find solutions in large graphs as the search space grows exponentially. Therefore, more advanced analytical methods are necessary. In this work, we introduce the technique of local sparsification on experiment graphs, using which we answer a crucial open question in experimental quantum optics, namely whether certain complex entangled quantum states can be constructed. The two main ideas behind our technique are:
1) To develop an edge pruning algorithm which helps to construct quantum optical experiments with as few resources as possible.
2) Detect a special sparse subgraph in the pruned experiment graph whose edge count bounds the dimension of some multi-particle entangled quantum states
Our ideas are general and might be useful to understand the experimental designs to construct several other quantum states like NOON states, cluster states, W states and Dickes states. With more structural insights into the graphs used for creating high-dimensional
multi-particle entanglement, we believe our techniques can be used to resolve a conjecture on the constructability of certain complex entangled quantum states by Cervera-Lierta, Krenn, and Aspuru-Guzik [3]. This would give us more insights into quantum resource theory and the limitation of specific quantum photonic systems.
## 2 Graph representation of quantum optics experiments
In 2017, Krenn, Gu and Zeilinger [10] have discovered (and later extended [7, 6]) a bridge between experimental quantum optics and graph theory. Here, large classes of quantum optics experiments (including those containing probabilistic photon pair sources, deterministic photon sources and linear optics elements) can be represented as an edge-coloured edge-weighted graph. Additionally, every edge-coloured edge-weighted graph can be translated into a concrete experimental setup. This technique has led to the discovery of new quantum interference effects and connections to quantum computing [6]. Furthermore, it has been used as the representation of efficient AI-based design methods for new quantum experiments [11, 15].
### Mathematical formulation of the problem
We first define some commonly used graph-theoretic terms. For a graph \(G\), let \(V(G),E(G)\) denote the set of vertices and edges, respectively. For \(S\subseteq V(G)\), \(G[S]\) denotes the induced subgraph of \(G\) on \(S\). \(\mathbb{N},\mathbb{C}\) denote the set of natural and complex numbers, respectively. The cardinality of a set \(\mathcal{S}\) is denoted by \(|\mathcal{S}|\). For a positive integer \(r\), \([r]\) denotes the set
Figure 1: An edge-coloured edge-weighted graph. The edges \(\{1,2\},\{3,4\},\{5,6\},\{3,6\}\) are of colour green (mode number 1) and \(\{1,6\},\{2,3\},\{4,5\}\) are of colour red (mode number 0). The edges \(\{4,6\}\) and \(\{3,5\}\) are bi-chromatic, where the halves starting at the vertices \(4,5\) are of the colour red, and the remaining halves are of the colour green.
Figure 2: The vertex colourings induced by the perfect matchings of Figure 1.
\(\{1,2\ldots,r\}\).
We assume all graphs to be simple throughout unless otherwise mentioned. Usually, in an edge colouring, each edge is associated with a natural number. But in such edge colourings, the edges are assumed to be monochromatic. But in the experiment graph, we are allowed to have bi-coloured edges, i.e. one half coloured by a certain colour and the other half coloured by a different colour (as shown in Figure 1). So, we develop some new notation to describe bi-coloured edges.
An edge colouring \(c\) associates a coloured edge \(\{(v_{1},i),(v_{2},j)\}\) to an uncoloured edge \(\{v_{1},v_{2}\}\) for some \(i,j\in\mathbb{N}\). For an edge \(e=\{(v_{1},i),(v_{2},j)\}\), we say that \(\{i,j\}\) is the colour of \(e\). We will use the notation \(c(e,v_{1})\) to denote \(i\) and \(c(e,v_{2})\) to denote \(j\) in this case. When \(c(e,v_{2})=c(e,v_{1})\), we call the edge to be monochromatic and represent the colour of the edge by \(c(e)\). If \(c(e,v_{2})\neq c(e,v_{1})\), we call the edge to be a bi-chromatic edge. The colour degree of \(u\) with respect to the colour \(i\), \(d(u,i)\) is the number of edges \(e\) incident on \(u\) such that \(c(e,u)=i\). A weight assignment \(w\) assigns every edge \(e\) a weight \(w(e)\in\mathbb{C}\setminus\{0\}\).
We call a subset \(P\) of edges in a graph a perfect matching if each vertex in the graph has exactly one edge in \(P\) incident on it.
The weight of this perfect matching \(P\), \(w(P)\), is the product of the weights of all its edges: \(\prod\limits_{e\in P}w(e)\).
A vertex colouring \(vc\) associates a colour \(i\) to each vertex in the graph for some \(i\in\mathbb{N}\). We use \(vc(v)\) to denote the colour of vertex \(v\) in the vertex colouring \(vc\). We say that each perfect matching \(P\)_induces_ a vertex colouring \(vc\) if for each vertex \(v\), \(vc(v)\) is equal to \(c(e,v)\), where \(e\) is the unique edge in \(P\) incident on \(v\). We say that this vertex colouring \(vc\) is induced by \(P\). Note that different perfect matchings can induce the same vertex colouring. A vertex colouring is defined to be feasible if it is induced by at least one perfect matching.
The weight of a vertex colouring \(vc\), \(w(vc)\) is the sum of the weights of all perfect matchings \(P\) inducing \(vc\).
The weight of a vertex colouring which is not feasible is zero by default.
An experiment graph is said to be valid if:
1. All feasible monochromatic vertex colourings have a weight of \(1\).
2. All non-monochromatic vertex colourings have a weight of \(0\).
An example of a valid experiment graph is shown in Figure 1.
The dimension of a valid experiment graph \(G\), \(\mu(G)\) is the number of feasible monochromatic vertex colourings having a weight of \(1\).
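These definitions can be checked mechanically on small instances. The sketch below brute-forces all perfect matchings of a toy 4-vertex experiment graph (a 4-cycle whose opposite edges share a colour; a standard small example, written here in our own ad hoc representation), accumulates the weight of each induced vertex colouring, and reports validity and dimension.

```python
from itertools import combinations
from collections import defaultdict

def perfect_matchings(vertices, edges):
    """Brute-force all perfect matchings. An edge is ((u, v), (cu, cv), weight),
    where cu and cv are the half-colours at u and v (equal iff monochromatic)."""
    k = len(vertices) // 2
    for idx in combinations(range(len(edges)), k):
        used = [v for i in idx for v in edges[i][0]]
        if len(set(used)) == len(vertices):
            yield [edges[i] for i in idx]

def colouring_weights(vertices, edges):
    """Total perfect-matching weight of every induced vertex colouring."""
    weights = defaultdict(float)             # use complex(0) for complex weights
    for pm in perfect_matchings(vertices, edges):
        vc, w = {}, 1.0
        for (u, v), (cu, cv), wt in pm:
            vc[u], vc[v] = cu, cv
            w *= wt
        weights[tuple(vc[v] for v in vertices)] += w
    return weights

# A 4-vertex valid experiment graph of dimension 2: a 4-cycle whose opposite
# edges share a colour (0 = red, 1 = green), all weights equal to 1.
V = [1, 2, 3, 4]
E = [((1, 2), (1, 1), 1.0), ((3, 4), (1, 1), 1.0),
     ((2, 3), (0, 0), 1.0), ((1, 4), (0, 0), 1.0)]

w = colouring_weights(V, E)
valid = all((len(set(vc)) == 1 and abs(wt - 1) < 1e-9) or abs(wt) < 1e-9
            for vc, wt in w.items())
dim = sum(1 for vc, wt in w.items() if len(set(vc)) == 1 and abs(wt - 1) < 1e-9)
print(f"valid: {valid}, dimension: {dim}")   # valid: True, dimension: 2
```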
In graph-theoretic terms, Theorem 6 is equivalent to stating that \(\mu(G)\leq\frac{|V(G)|}{\sqrt{2}}\). The reader may note that the number of particles and the dimension of a GHZ state correspond to the number of vertices and the dimension of the valid experiment graph, respectively. For a detailed description of the quantum physical meaning of this setup, refer to [9].
### Progress on the problem
Krenn and Gu conjectured that physically it is not possible to generate a GHZ state of dimension \(d>2\) with more than \(n=4\) photons with perfect quality and finite count rates without additional resources [9]; that is, when \(G\) is not isomorphic to \(K_{4}\), then \(\mu(G)\leq 2\).
While proving their conjecture would immediately lead to new insights into resource theory in quantum optics, finding a counter-example would uncover new peculiar quantum interference effects of a multi-photonic quantum system.
However, when multi-edges and bi-chromatic edges are allowed, even for a valid experiment graph with just 4 vertices, the question of whether the dimension is bounded or not looks surprisingly challenging. So a general bound on the dimension as the function of the number of vertices of the experiment graph remains elusive. This motivated researchers to look at the problem by restricting the presence of bi-chromatic edges and multi-edges.
**Absence of destructive interference.** When destructive interference is absent, the problem reduces to a simpler problem on unweighted coloured graphs. Bogdanov [2] proved that the dimension of a 4 vertex experiment graph is at most 3, and that of an \(n\geq 6\) vertex graph is at most 2. Chandran and Gajjala [4] gave a structural classification of experiment graphs of dimension 2. They also proved that if the maximum dimension achievable on a graph without destructive interference is not one, the maximum dimension achievable remains the same even if destructive interference is allowed. Vardi and Zhang [18, 19] proposed new colouring problems which are inspired by these experiments and investigated their computational complexity.
**Absence of multi-edges.** Chandran and Gajjala [4] proved that the dimension of an \(n\) vertex experiment graph is at most \(n-3\) even when bi-chromatic edges are present for simple graphs. Their techniques can be extended to get the same bound when multi-edges are present and bi-chromatic edges are absent.
**Absence of bi-chromatic edges.** Ravsky [14] proposed a claim connecting this problem to rainbow matchings. Using a result of Kostochka and Yancey [8], he showed that the dimension of an \(n\) vertex experiment graph is at most \(n-2\). Neugebauer [12] used these ideas and did computational experiments. For valid experiment graphs with a small number of vertices, Cervera-Lierta, Krenn, and Aspuru-Guzik [3] translated this question into a boolean equation system and found that the system does not have a solution using SAT solvers. In particular, they show that for graphs with monochromatic edges, GHZ states with \(n=6\), \(d\geq 3\) and \(n=8\), \(d\geq 4\) cannot exist. The authors further conjectured the following more general claim
**Conjecture 5**.: It is not possible to generate an \(n>4\) vertex experiment graph with dimension \(d\geq\frac{n}{2}\).
In a simple valid experiment graph with \(n\) vertices, as every vertex has at most \(n-1\) neighbours and each colour has at least one monochromatic edge incident on each vertex (from Observation 8), a trivial bound of \(n-1\) can be obtained on the dimension. A more careful argument would also give the same bound when multi-edges are allowed and bi-chromatic edges are absent [12]. So far, all results either improve an additive factor over this trivial bound or work only on graphs with at most 8 vertices. In this work, using the technique of _local sparsification_ for experiment graphs, we overcome this barrier.
**Theorem 6**.: It is not possible to generate an \(n>4\) vertex experiment graph with dimension \(d\geq\frac{n}{\sqrt{2}}\).
This translates to saying that it is not possible to produce a GHZ state of \(n\) particles, with \(\geq\frac{n}{\sqrt{2}}\) dimensions using this graph approach without additional quantum resources (such as auxiliary photons). Note that our bound holds even when bi-chromatic edges are allowed.
## 3 Edge pruning
We introduce the concept of an edge minimum valid experiment graph. A valid experiment graph \(G\) is said to be edge minimum if there is no other graph \(G^{\prime}\) such that \(|V(G^{\prime})|=|V(G)|\), \(\mu(G)=\mu(G^{\prime})\) and \(|E(G^{\prime})|<|E(G)|\). Such graphs correspond to the experiments that create GHZ states with minimum resources and are of interest to experimental physicists. The reader may notice that proving Theorem 6 for edge minimum valid experiment graphs implies that Theorem 6 is true for all valid experiment graphs.
If the monochromatic vertex colouring of colour \(i\) is not feasible, then all edges with at least one half coloured \(i\) can be discarded, as such a mode number \(i\) would not help in increasing the dimension of the corresponding GHZ state. So, in edge minimum valid experiment graphs, all monochromatic vertex colourings are feasible.
A graph is matching covered if every edge of it is part of at least one perfect matching. If an edge \(e\) is not part of any perfect matching, then we call the edge \(e\) redundant. By removing all redundant edges from the given graph \(G\), we get its unique maximum matching covered subgraph.
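Computationally, this pruning step is straightforward on small graphs: enumerate perfect matchings and keep only the covered edges. The sketch below (using the same toy representation as the earlier sketch, with a deliberately redundant chord added) is a brute-force illustration, not the procedure of any cited reference.

```python
from itertools import combinations

def matching_covered_subgraph(vertices, edges):
    """Keep exactly those edges lying in at least one perfect matching
    (same ((u, v), (cu, cv), weight) convention as the earlier sketch)."""
    k = len(vertices) // 2
    covered = set()
    for idx in combinations(range(len(edges)), k):
        used = [v for i in idx for v in edges[i][0]]
        if len(set(used)) == len(vertices):
            covered.update(idx)              # every edge of this matching survives
    return [edges[i] for i in sorted(covered)]

# The 4-cycle from before plus a chord {1, 3}. No perfect matching can contain
# the chord, because no remaining edge joins vertices 2 and 4, so the chord is
# redundant and pruning removes it.
V = [1, 2, 3, 4]
E = [((1, 2), (1, 1), 1.0), ((3, 4), (1, 1), 1.0),
     ((2, 3), (0, 0), 1.0), ((1, 4), (0, 0), 1.0),
     ((1, 3), (0, 0), 1.0)]
print(len(matching_covered_subgraph(V, E)))  # 4: the chord {1, 3} has been pruned
```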
Edge minimum valid experiment graphs are matching covered.
Proof.: Suppose that there is an edge minimum valid experiment graph \(G\) which is not matching covered. Consider its matching covered subgraph \(H\). By definition, every perfect matching of \(G\) is contained in \(H\). Therefore, the weight of any vertex colouring is the same in \(H\) and \(G\); hence, \(H\) is also a valid experiment graph and \(\mu(G)=\mu(H)\). Since \(G\) is not matching covered, we know that \(|E(H)|<|E(G)|\). This contradicts the edge minimality of \(G\).
**Observation 8**.: In a valid experiment graph \(G\), for any vertex \(v\in V(G)\) and colour \(i\in[\mu(G)]\), there exists a monochromatic edge \(e\) incident on \(v\) such that \(c(e)=i\).
Proof.: By definition, the weight of the monochromatic vertex colouring with colour \(i\) is non-zero. Therefore, there must be a monochromatic perfect matching \(P\) in which all edges are of colour \(i\). By the definition of perfect matching, an edge \(e\in P\) exists, which is incident on \(v\). Therefore, \(e\) must be of colour \(i\).
If an edge \(e=\{u,v\}\) is monochromatic and \(d(u,c(e))=d(v,c(e))=1\), then \(e\) is said to be colour isolated.
**Lemma 10**.: For an edge \(e\in E(G)\) incident on \(u\in V(G)\), if \(d(u,c(e))=1\), then \(e\) is a colour-isolated edge.
Proof.: Let \(e=\{u,v\}\). From Observation 8, we know that \(e\) must be a monochromatic edge and hence \(d(v,c(e))\geq 1\). Towards a contradiction, suppose \(e\) is not a colour-isolated edge. It follows that \(d(v,c(e))\geq 2\). We now construct \(G^{\prime}\), a subgraph of \(G\), by removing each edge \(e^{\prime}\neq e\) which is incident on \(v\) and has \(c(e^{\prime},v)=c(e)\). We show that \(G^{\prime}\) is a valid experiment graph by showing that the weight of any feasible vertex colouring \(vc\) in \(G^{\prime}\) is equal to the weight of \(vc\) in \(G\).
\(G^{\prime}\) is a valid experiment graph.
Proof.: Since \(G^{\prime}\) is a subgraph of \(G\), every perfect matching in \(G^{\prime}\) would also be contained in \(G\). Consider a perfect matching \(P\) of \(G\), which is not contained in \(G^{\prime}\). Let \(vc\) be the vertex colouring induced by \(P\). We now claim that \(vc\) is not feasible in \(G^{\prime}\).
We first prove that \(vc(v)=c(e)\) and \(vc(u)\neq c(e)\). As \(P\) is not contained in \(G^{\prime}\), there must be an edge \(e^{\prime}\in P\) such that \(e^{\prime}\in G\) and \(e^{\prime}\notin G^{\prime}\). By construction, such an \(e^{\prime}\neq e\)
must be incident on \(v\) and has \(c(e^{\prime},v)=c(e)\). Therefore, \(P\) induces the colour \(c(e)\) on \(v\) and hence \(vc(v)=c(e)\). Since \(e^{\prime}\) is already incident on \(v\), it must be the case that \(e\notin P\). As \(d(u,c(e))=1\) and \(e\notin P\), it follows that \(vc(u)\neq c(e)\).
We now claim that a vertex colouring \(vc\) with \(vc(u)\neq c(e)\) and \(vc(v)=c(e)\) is infeasible in \(G^{\prime}\). Suppose not. We know that there must be a perfect matching \(P^{\prime}\) inducing the vertex colouring \(vc\). By construction, \(e\) is the only edge of colour \(c(e)\) incident on \(v\) in \(G^{\prime}\). Therefore, \(e\in P^{\prime}\). As \(e\) is also incident on \(u\), \(P^{\prime}\) induces the colour \(c(e)\) on \(u\). It follows that \(vc(u)=c(e)\). Contradiction.
Therefore, if a vertex colouring \(vc\) is feasible in \(G^{\prime}\), then \(vc\) is induced by the same set of perfect matchings in both \(G\) and \(G^{\prime}\). It follows that the weight of any feasible vertex colouring \(vc\) in \(G^{\prime}\) is equal to the weight of the \(vc\) in \(G\); hence, \(G^{\prime}\) is a valid experiment graph.
Observe that none of the removed edges could have been part of a monochromatic perfect matching in \(G\). Therefore, the monochromatic vertex colouring of colour \(i\) in \(G^{\prime}\) has a weight equal to the weight of monochromatic vertex colouring of colour \(i\) in \(G\), which is \(1\) for all \(i\in[\mu]\). It follows that \(\mu(G)=\mu(G^{\prime})\). This contradicts the edge minimality of \(G\) as \(G^{\prime}\) is a valid experiment graph with \(n\) vertices, \(\mu(G)=\mu(G^{\prime})\) and has fewer edges than \(G\).
## 4 Proof of Theorem 6
### Proof sketch
Towards a contradiction, suppose there exists an edge minimum simple valid experiment graph \(G\) with \(n\) vertices and dimension \(\mu(G)=\mu>\frac{n}{\sqrt{2}}\).
If the dimension of the graph is \(\mu=n-1\), then it must be the case that on each vertex \(v\), \(d(v,i)=1\) for every \(i\in[n-1]\) (from Observation 8). Similarly, as shown in Observation 12, if the dimension is \(\mu\), it must be the case that \(d(v,i)=1\) for at least \(2\mu-n+1\) values of \(i\in[\mu]\). We then use Lemma 10 to guarantee that at least \(2\mu-n+1\) colour isolated edges are incident on \(v\). As there are at least \(2\mu-n+1\) colour-isolated edges incident on each vertex \(v\), there must be at least \(0.5n(2\mu-n+1)\) colour-isolated edges in \(G\). A simple averaging argument over the \(\mu\) colours now guarantees that, for some \(i\in[\mu]\), there must be a large matching \(M\) with at least \(\frac{(2\mu-n)n}{2\mu}\) \(i\)-coloured isolated edges.
In Section 4.3, we find some structural properties over a special subgraph on the vertices spanned by \(M\), called the representative sparse graph. Using this, we prove that \(\mu\leq n-|M|\). Since we already know that \(|M|\geq\frac{(2\mu-n)n}{2\mu}\), we get that \(\mu\leq\frac{n}{\sqrt{2}}\). This is a contradiction. Therefore, \(\mu\leq\frac{n}{\sqrt{2}}\).
### Existence of a large special matching
**Observation 12**.: _For all \(v\in V(G)\), \(d(v,i)=1\) for at least \(2\mu-n+1\) values of \(i\in[\mu]\)._
Proof.: Suppose not. Then, for some vertex \(v\in V(G)\), the number of colours whose colour degree on \(v\) is exactly one equals \(2\mu-n-i\) for some \(i\in[0,2\mu-n]\). Therefore, there are \(n-\mu+i\) colours with a colour degree of at least \(2\) on \(v\). It follows that \(v\) has at least \(2\mu-n-i+2(n-\mu+i)=n+i\geq n\) neighbours. But since \(G\) is a simple graph, \(v\) can have at most \(n-1\) neighbours. Contradiction.
**Observation 13**.: \(G\) has at least \((2\mu-n+1)\dfrac{n}{2}\) colour isolated edges.
Proof.: From Lemma 10, we know that for every colour \(i\) with \(d(u,i)=1\), there is a colour isolated edge of colour \(i\) incident on \(u\). Along with Observation 12, this implies that at least \(2\mu-n+1\) colour isolated edges are incident on each vertex. Therefore, there are at least \((2\mu-n+1)\dfrac{n}{2}\) colour isolated edges in \(G\).
**Theorem 14**.: _For some \(i\in[\mu]\), there exist at least \(\dfrac{(2\mu-n)n}{2\mu}\)\(i\)-coloured isolated edges._
Proof.: From Observation 13, we know that there are at least \((2\mu-n+1)\dfrac{n}{2}\) colour isolated edges. Since there are \(\mu\) colours, by a simple averaging argument, we get that for some colour \(i\in[\mu]\), there exist at least \(\dfrac{(2\mu-n)n}{2\mu}\)\(i\)-coloured isolated edges.
### Detecting a sparse subgraph
We will assume that the large matching exhibited in Theorem 14 has colour \(1\) for the remainder of this section. Let \(R\) represent the set of vertices whose colour degree with respect to the colour \(1\) is one. Let \(U\) be the set of vertices whose colour degree with respect to the colour \(1\) is at least two.
**Observation 15**.: \(|R|\geq\dfrac{(2\mu-n)n}{\mu}\) and \(|U|\leq\dfrac{n^{2}}{\mu}-n\).
Proof.: From Lemma 10, there are exactly \(\dfrac{|R|}{2}\)\(1\)-coloured isolated edges forming a perfect matching in the induced subgraph \(G[R]\). Along with Theorem 14, this would imply that \(|R|\geq\dfrac{(2\mu-n)n}{\mu}\). Since \(|R|+|U|=n\), we know that \(|U|\leq\dfrac{n^{2}}{\mu}-n\).
Note that since \(n\geq 6\), we have \(|R|\geq\dfrac{(2\mu-n)n}{\mu}=(2-\dfrac{n}{\mu})n\geq(2-\sqrt{2})n\geq 3.5\). Since \(|R|\) must be integral, \(|R|\geq 4\). Therefore, there exist at least two \(1\)-coloured isolated edges.
We now pick an arbitrary \(1\)-coloured isolated edge \(\{u,v\}\). For the remainder of this section, we base our analysis on the edges incident on the vertices \(u,v\). We now construct a
special subgraph on \(G[R]\) called the representative sparse graph \(\chi\) using the edges incident on \(u,v\).
We first construct the graph \(\chi_{u}\) in the following way over \(R\): for every colour \(i\in[\mu]\), if there is no vertex \(w\in U\) such that \(c(\{u,w\},u)=i\), then add an arbitrary monochromatic edge of colour \(i\) incident on \(u\) (its existence is guaranteed by Observation 8) to \(E(\chi_{u})\).
Similarly, we construct the graph \(\chi_{v}\) in the following way over \(R\): for every colour \(i\in[\mu]\), if there is no vertex \(w\in U\) such that \(c(\{v,w\},v)=i\), then add an arbitrary monochromatic edge of colour \(i\) incident on \(v\) (its existence is guaranteed by Observation 8) to \(E(\chi_{v})\).
We define the graph \(\chi\) over \(R\) with the edge set \(E(\chi)=E(\chi_{u})\cup E(\chi_{v})\). An example graph and its representative sparse graph are presented in Figure 3.
**Lemma 16**.: \(2\mu\leq 2|U|+|E(\chi)|+1\)
Proof.: For any colour \(i\in[\mu]\), there is either a vertex \(w\in U\) such that \(c(\{u,w\},u)=i\), or there is a monochromatic edge of colour \(i\) in \(\chi_{u}\). Therefore,
\[\mu\leq|U|+|E(\chi_{u})|\]
Similarly,
\[\mu\leq|U|+|E(\chi_{v})|\]
It is easy to see that \(E(\chi_{u})\cap E(\chi_{v})=\{\{u,v\}\}\) and hence \(|E(\chi)|=|E(\chi_{u})|+|E(\chi_{v})|-1\). It follows that
\[2\mu\leq 2|U|+|E(\chi_{u})|+|E(\chi_{v})|=2|U|+|E(\chi)|+1\]
Since every edge in \(\chi\) is incident either on \(u\) or on \(v\), it is easy to see that \(|E(\chi)|\leq 2|R|-3\). But this can be strengthened by using the following structural observation.
**Lemma 17**.: _If \(\{u^{\prime},v^{\prime}\}\) is a \(1\)-coloured isolated edge in \(G[R]\) distinct from \(\{u,v\}\), then_
\[|\{\{u,u^{\prime}\},\{u,v^{\prime}\},\{v,u^{\prime}\},\{v,v^{\prime}\}\}\bigcap E (\chi)|\leq 2\]
Proof.: Suppose \(|\{\{u,u^{\prime}\},\{u,v^{\prime}\},\{v,u^{\prime}\},\{v,v^{\prime}\}\}\bigcap E(\chi)|\geq 3\). Without loss of generality, let the edges \(\{u,u^{\prime}\},\{u,v^{\prime}\},\{v,v^{\prime}\}\) be present in \(E(\chi)\) with colours \(i,j,k\), respectively. By the definition of the representative sparse graph, we know that \(i\neq j\) and that \(i,j,k\) are not equal to \(1\). Note that \(k\) might be equal to \(i\) or \(j\).
Consider the vertex colouring \(vc\) in which \(u,u^{\prime}\) get the colour \(i\), \(v,v^{\prime}\) get the colour \(k\), and the vertices in \(V(G)\setminus\{u,u^{\prime},v,v^{\prime}\}\) are coloured \(1\). Note that \(V(G)\setminus\{u,u^{\prime},v,v^{\prime}\}\) is non-empty as \(|V(G)|>4\). Let \(vc^{\prime}\) be the vertex colouring in which every vertex in \(V(G)\) gets the colour \(1\).
**Claim 18**.: \(\frac{w(vc)}{w(vc^{\prime})}=\frac{w(\{u,u^{\prime}\})w(\{v,v^{\prime}\})}{w(\{u,v\})w(\{u^{\prime},v^{\prime}\})}\)
Proof.: Let \(M\) be the set of all \(1\)-coloured isolated edges. As every vertex in \(R\) has exactly one \(1\)-coloured edge incident on it, every perfect matching \(P^{\prime}\) inducing \(vc^{\prime}\) must contain all edges in \(M\). Let \(W\) denote the weight of the monochromatic vertex colouring of colour \(1\) on \(G[U]\). As the vertices in \(U\) match among themselves in every perfect matching \(P^{\prime}\), it is easy to see that
\[w(vc^{\prime})=W\times\prod_{e\in M}w(e)\]
As every vertex in \(R\setminus\{u,v,u^{\prime},v^{\prime}\}\) has exactly one 1-coloured edge incident on it, every perfect matching \(P\) inducing \(vc\) must contain all edges in \(M\setminus\{\{u,v\},\{u^{\prime},v^{\prime}\}\}\). By definition of the representative sparse graph, the vertex \(u\) can obtain the colour \(i\) only through an edge \(\{u,w\}\) such that \(w\in R\). But the vertices \(R\setminus\{u,v,u^{\prime},v^{\prime}\}\) are already matched and \(c(\{u,v\},u)=1\neq i\) and \(c(\{u,v^{\prime}\},u)=j\neq i\). Therefore, the edge \(\{u,u^{\prime}\}\) must be present in every perfect matching \(P\) that induces \(vc\). Again, by definition of the representative sparse graph, the vertex \(v\) can obtain the colour \(k\) only through an edge \(\{v,w\}\) such that \(w\in R\). But all the vertices in \(R\setminus\{v,v^{\prime}\}\) are already matched. Therefore, the edge \(\{v,v^{\prime}\}\) must be present in every perfect matching \(P\). The remaining vertices in \(U\) should match among themselves in every perfect matching \(P\) that induces \(vc\). It is now easy to see that the weight of the vertex colouring \(vc\) is
\[w(vc)=W\times w(\{u,u^{\prime}\})\times w(\{v,v^{\prime}\})\times\prod_{e\in M \setminus\{\{u,v\},\{u^{\prime},v^{\prime}\}\}}w(e)\]
\[=\frac{w(\{u,u^{\prime}\})w(\{v,v^{\prime}\})}{w(\{u,v\})w(\{u^{\prime},v^{ \prime}\})}\times W\times\prod_{e\in M}w(e)\]
It follows that
\[\frac{w(vc)}{w(vc^{\prime})}=\frac{w(\{u,u^{\prime}\})w(\{v,v^{\prime}\})}{w( \{u,v\})w(\{u^{\prime},v^{\prime}\})}\]
Since \(vc^{\prime}\) is a monochromatic vertex colouring, \(w(vc^{\prime})=1\). Recall that all edge weights are non-zero by definition. Therefore, \(w(vc)\) is non-zero. But \(vc\) is a non-monochromatic vertex colouring and should have weight 0 by definition. Contradiction.
**Lemma 19**.: \(|E(\chi)|\leq|R|-1\)
Proof.: From Lemma 17, we know that \(\chi\) can contain at most two edges between \(\{u,v\}\) and any other 1-coloured isolated edge. Therefore, the total number of edges in \(\chi\) is
\[|E(\chi)|\leq 2\left(\frac{|R|-2}{2}\right)+1=|R|-1\]
**Theorem 20**.: \(\mu\leq\frac{n}{\sqrt{2}}\)
Proof.: From Lemma 16, we have
\[2\mu\leq 2|U|+|E(\chi)|+1\]
Using Lemma 19, we get
\[2\mu\leq 2|U|+|R|\]
As \(|U|+|R|=n\) by definition,
\[2\mu\leq 2n-|R|\]
It follows from Theorem 14 that \(|R|\) is at least \(\frac{(2\mu-n)n}{\mu}\) and hence
\[2\mu\leq 2n-\frac{(2\mu-n)n}{\mu}=\frac{n^{2}}{\mu}\]
Therefore, \(\mu\leq\frac{n}{\sqrt{2}}\).
This is a contradiction to our assumption that \(\mu>\frac{n}{\sqrt{2}}\) and hence, \(\mu\leq\frac{n}{\sqrt{2}}\). |
2303.13913 | GarmentTracking: Category-Level Garment Pose Tracking | Garments are important to humans. A visual system that can estimate and track
the complete garment pose can be useful for many downstream tasks and
real-world applications. In this work, we present a complete package to address
the category-level garment pose tracking task: (1) A recording system
VR-Garment, with which users can manipulate virtual garment models in
simulation through a VR interface. (2) A large-scale dataset VR-Folding, with
complex garment pose configurations in manipulation like flattening and
folding. (3) An end-to-end online tracking framework GarmentTracking, which
predicts complete garment pose both in canonical space and task space given a
point cloud sequence. Extensive experiments demonstrate that the proposed
GarmentTracking achieves great performance even when the garment has large
non-rigid deformation. It outperforms the baseline approach on both speed and
accuracy. We hope our proposed solution can serve as a platform for future
research. Codes and datasets are available in
https://garment-tracking.robotflow.ai. | Han Xue, Wenqiang Xu, Jieyi Zhang, Tutian Tang, Yutong Li, Wenxin Du, Ruolin Ye, Cewu Lu | 2023-03-24T10:59:17Z | http://arxiv.org/abs/2303.13913v1 | # GarmentTracking: Category-Level Garment Pose Tracking
###### Abstract
Garments are important to humans. A visual system that can estimate and track the complete garment pose can be useful for many downstream tasks and real-world applications. In this work, we present a complete package to address the category-level garment pose tracking task: (1) A recording system **VR-Garment**, with which users can manipulate virtual garment models in simulation through a VR interface. (2) A large-scale dataset **VR-Folding**, with complex garment pose configurations in manipulation like flattening and folding. (3) An end-to-end online tracking framework **GarmentTracking**, which predicts complete garment pose both in canonical space and task space given a point cloud sequence. Extensive experiments demonstrate that the proposed GarmentTracking achieves great performance even when the garment has large non-rigid deformation. It outperforms the baseline approach on both speed and accuracy. We hope our proposed solution can serve as a platform for future research. Codes and datasets are available in [https://garment-tracking.robotflow.ai](https://garment-tracking.robotflow.ai).
+
Footnote †: \(\dagger\) Cewu Lu is the corresponding author, the member of Qing Yuan Research Institute and MoE Key Lab of Artificial Intelligence, AI Institute, Shanghai Jiao Tong University, China and Shanghai Qi Zhi Institute.
## 1 Introduction
Garments are one of the most important deformable objects in daily life. A vision system for garment pose estimation and tracking can benefit downstream tasks like MR/AR and robotic manipulation [7, 17]. The category-level garment pose estimation task is firstly introduced in GarmentNets [11], which aims to recover the full configuration of an **unseen** garment from a single static frame. Unlike the non-rigid tracking methods [9, 10, 16, 34, 27, 35, 19], which can only recover the geometry of the visible regions, the pose estimation task also reconstructs the occluded parts of the object. Another line of works on non-rigid 4D reconstruction [14, 15, 21, 28, 29, 30], which can reconstruct complete object geometry, cannot be directly applied to garments, since they assume the object has a watertight geometry. In contrast, garments have thin structures with holes.
In this paper, we propose a new task called **Category-level Garment Pose Tracking**, which extends the single-frame pose estimation setting in [11] to pose tracking in dynamic videos. Specifically, we focus on the pose tracking problem in garment manipulation (_e.g_. flattening, folding). In this setting, we do not have the priors of the human body like previous works for clothed humans [18, 26, 31, 41]. Therefore, we must address the extreme deformation that manipulated garments could undergo.
To tackle the garment pose tracking problem, we need a dataset of garment manipulation with complete pose annotations. However, such a dataset does not exist so far to the best of our knowledge. To build such a dataset, we turn to a VR-based solution due to the tremendous difficulty of garment pose annotation in the real world [10]. We first create a real-time VR-based recording system named **VR-Garment**. Then the volunteer can manipulate the garment in a simulator through the VR interface. With VR-Garment, we build a large-scale garment manipulation dataset called **VR-Folding**. Compared to the single static garment configuration (_i.e_. grasped by one point) in GarmentNets, our manipulation tasks include _flattening_ and _folding_, which contain much more complex garment configurations. In total, our VR-Folding dataset contains 9767 manipulation videos which consist of 790K multi-view RGB-D frames with full garment pose and hand pose annotations on four garment categories selected from the CLOTH3D [8] dataset.
With the VR-Folding dataset, we propose an end-to-end online tracking method called **GarmentTracking** to perform category-level garment pose tracking during manipulation. For the garment pose modeling, we follow GarmentNets [11] to adopt the normalized object coordinate space (NOCS) for each category. Nevertheless, tracking garment pose raises new challenges compared to single-frame pose estimation: (1) How to fuse inter-frame geometry and correspondence information? (2) How to make the tracking prediction robust to pose estimation errors? (3) How to achieve
tracking in real-time? To address these challenges, we conduct GarmentTracking in three stages, namely _NOCS predictor_, _NOCS refiner_, and _warp field mapper_. Firstly, it predicts per-frame features and fuses them for canonical coordinate prediction. Then it refines the predicted canonical coordinates and the geometry with a NOCS refiner to reduce the accumulated errors. Finally, it maps the prediction in canonical space to the task space (_i.e_. coordinate frame of the input point cloud).
Since no previous work is designed for the tracking setting, we use GarmentNets [11] as a single-frame prediction baseline for comparison. We also perform extensive ablative experiments to reveal the efficacy of our design choices. Finally, we collect real-world data on garment manipulation and show the qualitative results of our method. In our design, we avoid the computationally expensive Marching Cubes [25] for reconstructing the canonical mesh frame by frame, so that we can achieve tracking at 15 FPS with an RTX 3090 GPU (**5 times faster** than the baseline approach).
We summarize our contributions as follows:
1). We propose a VR-based garment manipulation recording system named **VR-Garment**. It can synchronize human operations into the simulator and collect garment manipulation data.
2). We propose a large-scale garment manipulation dataset named **VR-Folding** for pose tracking. During manipulation, garments exhibit diverse configurations.
3). We propose a real-time end-to-end framework named **GarmentTracking** for category-level garment pose tracking. It can serve as a strong baseline for further research. We also demonstrate its generalization ability to real-world garment recordings with models trained by simulated data.
## 2 Related Work
**Category-level Object Pose Estimation and Tracking.** Object pose is the configuration of the object posited in the observation space. For the rigid object, we can describe its pose in 6 degrees of freedom (DOFs), _i.e_. 3 for translation and 3 for rotation. However, for the non-rigid object, like garments, the object pose can be of near-infinite DOFs.
On the other hand, the category-level object pose estimation task aims to learn a model that can predict unseen object poses of the same category [20, 23, 24, 38, 39]. The concept is first introduced to estimate rigid object pose [38]. In [38], Wang _et al_. proposed a Normalized Object Coordinate Space (NOCS) as a category-specific canonical representation. Following the idea, Li _et al_. [20] proposed a hierarchical NOCS representation for articulated objects.
To handle the near-infinite DOF nature and the category-level generalization requirement, GarmentNets [11] also defined NOCS for each garment category. They predicted the garment pose by mapping the reconstructed mesh from canonical space to task space. However, GarmentNets [11] treats each frame individually, which hampers its stability for inter-frame prediction, and its ability to infer complex poses from sequential movements. Our GarmentTracking is proposed for these tracking issues.
**Non-rigid Tracking and Reconstruction.** Tracking and reconstructing non-rigid deforming objects is an important research area in computer vision and graphics. One line of works [9, 10, 16, 27, 34, 35] performs free-form tracking, which does not assume any geometric prior. For example, DynamicFusion [27] used a hierarchical node graph structure and an efficient GPU solver to reconstruct the visible surface of the object. Deepdeform [10] and Bozic _et al_. [9] leveraged learning-based correspondences to track deformable objects. However, unlike pose estimation
Figure 1: **The pipeline of our Virtual Reality recording system (VR-Garment). (a) A volunteer needs to put on a VR headset and VR gloves. (b) By following the guidance of a specially designed UI, the volunteer begins to collect data efficiently. (c) After recording, we re-render multi-view RGB-D images with Unity [6] and obtain masks and deformed garment meshes with NOCS labels.**
methods, which reconstruct the complete configuration of objects, all these methods cannot reconstruct occluded parts.
Another line of works [21, 28, 29, 14, 22, 20, 15] can perform 4D reconstruction from RGB-D videos, which captures the complete geometry of the object both in space and time. Unfortunately, the shape representations of these methods have limitations when adapting to garment pose reconstruction under large deformations. For example, Fusion4D [15], Motion2fusion [14], 4DComplete [21], OcclusionFusion [22], NPMs [29] and OccupancyFlow [28] heavily rely on watertight object modeling such as SDF, TSDF, or occupancy grids to reconstruct object surfaces. Such modeling is not suitable for objects with open and thin structures like garments.
**Garment-related Dataset.** Current garment-related datasets can be divided into _asset datasets_[41, 8] and _task datasets_[37, 10, 26, 11]. _Asset datasets_ provide garment models for different tasks. For example, GarmentNets [11] proposed a simulation dataset for category-level pose-estimation task based on CLOTH3D [8]. We also build our VR-Folding dataset based on [8]. Other task datasets do not require complete garment models [26]. For example, CAPE [26] deals with the clothed human reconstruction task. However, the human body limits the possible garment states. DeepDeform [10] dataset contains simple scenes where a person lifts one garment with minor deformations, and it only annotates sparse keypoint correspondences between frames. A real-world cloth-folding dataset proposed by Verleysen [37] contains videos of cloth-folding actions, but it only annotates the contour of garments in 2D images. Our VR-Folding dataset is the first dataset designed for category-level garment pose tracking in manipulation, and it contains dynamic scenes which include complex human actions and garment configurations.
## 3 VR-Folding Dataset
To build the VR-Folding dataset, we develop a real-time data recording system called **VR-Garment** (Fig. 1). In this way, we can combine human experience and the benefits of simulation environments (easy access to ground-truth poses) to efficiently collect a large amount of data with natural and complex poses. We will describe the system design and the operation procedures in Sec. 3.1 and Sec. 3.2. Then we will describe the data statistics in Sec. 3.3.
### VR-Garment
In this section, we will first describe the hardware and software setup of the system and then the recording procedure for volunteers to operate. The recording system is illustrated in Fig. 1.
**Hardware.** On the hardware side, our recording system needs an HTC Vive Pro [1] VR headset and Noitom Hi5 [2] VR gloves which can track finger poses in the real world and reproduce them with virtual hands in the simulator through a VR interface.
**Software.** We developed our VR recording framework based on Unity [6] for its good support of mainstream VR devices. The cloth physics simulation in Unity is achieved by Obi [3]. Specifically, we implemented a simple UI and a grasping system in VR, which allow users to grasp or release any point on the garment surface when two virtual fingertips make contact with the garment.
**Recording Procedure.** Firstly, a volunteer must wear a VR headset and VR gloves. Then he observes a garment instance randomly dropping onto a table in Unity. Next, he performs a pre-defined manipulation task (folding). During the manipulation process, we save the deformed garment mesh and hand poses for each frame. When the task is done, the volunteer uses special gestures (_e.g_. a fist) to send commands for moving to the next garment instance. After recording, we re-render multi-view RGB-D images in Unity with the saved animation data and generate the corresponding ground-truth annotations (garment poses, masks). Note that all the rendering settings (lights, cameras, textures _etc._) can be customized even after recording.
### Task Definition
In a typical cloth folding process, we operate this task in two stages, namely _flattening_ and _folding_:
**Flattening:** Firstly, a garment will drop on the virtual table in Unity. Then our system will randomly choose one point on the garment surface. Next, the volunteer will grasp that point with one hand and lift the garment in the air (an initial configuration similar to that in GarmentNets [11]). Then, the volunteer will try to grasp and fling the garment repeatedly with two hands until the garment is smoothed into a flattened T-pose (see Fig. 1). Please see the supplementary files for more details about the task.
**Folding:** Firstly, a garment in a flattened T-pose will be placed on the virtual table. Then the volunteer will repeatedly perform pick-and-place actions with both hands until the garment is folded. Though people may have different preferred steps to achieve folding, we have defined general rules for each category and asked the volunteers to follow them. Please see the supplementary files for more details.
### Data Statistics
All the garment meshes used in our system are from CLOTH3D [8]. We choose 4 categories from CLOTH3D dataset, namely _Shirt_, _Pants_, _Top_ and _Skirt_. For _flattening_ task, we recorded 5871 videos which contain 585K frames in total. For _folding_ task, we recorded 3896 videos which contain 204K frames in total. As shown in Fig. 1, the data for each frame include multi-view RGB-D images, object masks, full garment meshes, and hand poses. Please see the supplementary files for more statistics on the dataset.
## 4 Method
This paper proposes an end-to-end online tracking method called **GarmentTracking** for category-level garment pose tracking. As shown in Fig. 2, given a first-frame garment pose (point NOCS, _i.e_. canonical coordinates of partial point cloud) and a rough canonical shape (mesh NOCS, _i.e_. sampled points from mesh in canonical space) of an instance, it takes point cloud sequences as input and generates complete garment geometry with inter-frame correspondence (_i.e_. NOCS coordinates). Specifically, GarmentTracking can be divided into three stages. In the first stage (Sec. 4.1), the network will predict canonical coordinates for the partial input point cloud. In the second stage (Sec. 4.2), the network will refine the predicted canonical coordinates and the input canonical shape. In the third stage (Sec. 4.3), the network will use the refined canonical shape and canonical coordinates to predict a warp field that maps from canonical space to task space (_i.e_. the coordinate frame of the input point cloud).
### Canonical Coordinate Prediction
#### 4.1.1 Normalized Garment Canonical Space
Following the definition of garment representation in GarmentNets [11], we use Normalized Object Coordinate Space (NOCS) coordinates as an intermediate representation for object states in a category. As shown in Fig. 2, the rest state of a garment is the T-pose defined by the garment worn by a person (provided by CLOTH3D [8] dataset).
#### 4.1.2 3D Feature Extractor
The thin structure and near-infinite DOF nature of garments may result in many complicated poses (_e.g_. multi-layer cloth stacked together) that require feature extraction for granular local details. Our method uses a high-resolution sparse 3D convolution network (ResUNet3D) proposed by FCGF [12] to extract the per-point feature from the raw point cloud. ResUNet3D is a UNet-like network with skip connections and residual blocks. Please refer to the supplementary files for further details of the network.
#### 4.1.3 Inter-frame Feature Fusion with Transformer
After extracting the feature from the extractor, we apply the inter-frame feature fusion with Transformer [36].
**Feature Matching** Inspired by PTTR [40], we perform feature matching with self-attention and cross-attention modules based on Transformer [36]. In general, we first use a self-attention module to individually aggregate point features for the two input frames. Then we use a cross-attention module to perform feature matching between two frames. Intuitively, the self-attention operation can have a global understanding of the current frame, and the cross-attention operation can capture cross-frame correlations and generate a relation-enhanced fusion feature. The self-attention and cross-attention modules are based on the relation attention module (RAM) proposed by PTTR [40]. Please see the supplementary files for more details on the relation attention module.
Figure 2: The overview of **GarmentTracking**. Given the per-point NOCS coordinate of the first frame and a rough canonical shape (mesh NOCS), our tracking method takes two frames of the partial point cloud as input. In stage 1, the NOCS predictor will generate an inter-frame fusion feature and predict raw NOCS coordinates. In stage 2, the NOCS refiner will refine the NOCS coordinates and the canonical shape simultaneously. In stage 3, the warp field mapper will predict the warp field which maps from canonical space to task space.
**NOCS Prediction** After obtaining the per-point fusion feature via the cross-attention module, we predict the per-point canonical coordinate with MLP. We follow GarmentNets [11] and formulate this problem as a classification task instead of a regression task. Specifically, we divide each axis into 64 bins and the network independently predicts each axis's classification score. During training, we use a cross-entropy loss to supervise the classification scores.
**NOCS Coordinates for Positional Embedding** We have the per-point NOCS coordinate prediction from the previous frame, which carries explicit geometric and structural information. We use it for positional embedding [36], which is added to the input features before they are fed into the transformer. The positional embeddings for the two frames are calculated as Eq. (1):
\[\mathbf{emb}_{1}=f_{1}([\mathbf{P}_{1}^{xyz},\mathbf{P}_{1}^{nocs}]),\mathbf{ emb}_{2}=f_{2}([\mathbf{P}_{2}^{xyz}]), \tag{1}\]
where \(\mathbf{P}_{1}^{xyz}\) and \(\mathbf{P}_{2}^{xyz}\) are the partial input point clouds of the two frames, and \(\mathbf{P}_{1}^{nocs}\) is the predicted per-point NOCS coordinates of the partial point cloud in the previous frame. Here \(f_{1}(\cdot)\) and \(f_{2}(\cdot)\) are learned MLPs. By fusing NOCS coordinates into the positional embedding, the transformer network incorporates positional and semantic information from previous frames. Besides, empirically speaking, utilizing intermediate representations like NOCS coordinates instead of complete garment poses can increase the robustness against noisy predictions during long-term tracking.
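For concreteness, Eq. (1) can be realized with two small learned networks; the hidden width and two-layer structure below are our assumptions, since the paper only specifies the inputs:

```python
import torch
import torch.nn as nn

class NOCSPositionalEmbedding(nn.Module):
    """Sketch of Eq. (1): frame 1 embeds [xyz, NOCS] (6-D input), frame 2
    embeds xyz only (3-D input). Hidden width and depth are guesses."""

    def __init__(self, feat_dim: int = 128):
        super().__init__()
        self.f1 = nn.Sequential(nn.Linear(6, feat_dim), nn.ReLU(),
                                nn.Linear(feat_dim, feat_dim))
        self.f2 = nn.Sequential(nn.Linear(3, feat_dim), nn.ReLU(),
                                nn.Linear(feat_dim, feat_dim))

    def forward(self, p1_xyz, p1_nocs, p2_xyz):
        # p1_xyz, p1_nocs, p2_xyz: (B, N, 3) point tensors
        emb1 = self.f1(torch.cat([p1_xyz, p1_nocs], dim=-1))
        emb2 = self.f2(p2_xyz)
        # both embeddings are added to the per-point features
        # before the self-/cross-attention modules
        return emb1, emb2
```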
### NOCS Refiner
Since the canonical shape can be generated by other methods like GarmentNets [11], or augmented with noise, it might be inaccurate. On the other hand, the NOCS coordinate predictions can also be noisy. Such inaccuracy could cause errors to be accumulated during tracking. To mitigate this problem, we propose a NOCS PC (Point Cloud)-Mesh intertwined refiner, **NOCS refiner**. As shown in Fig. 3, the predicted NOCS coordinates can reveal the cues of the input point cloud, such as scales and offsets, while the canonical shape can provide information about the complete geometry. Thus they can complement each other. We describe the NOCS refiner in two parts (_PC refiner_ and _Mesh refiner_):
**PC Refiner:** Firstly, the predicted NOCS classification scores and the per-point fusion feature from the transformer will be concatenated and fed into a Mini-Pointnet [32]. Next, the dense feature will be fused with the global mesh feature generated by _Mesh Refiner_ with concatenation. Finally, we use MLP to predict the final delta logits with the fused dense feature. We use cross-entropy loss to supervise the refined classification logits during training.
**Mesh Refiner:** Firstly, we use a Mini-Pointnet to extract dense features from the raw canonical shape. Then we concatenate the dense mesh feature with the global feature of the partial point cloud generated by the _PC Refiner_ to obtain the fused dense feature. Next, we use an MLP with global pooling to extract the final global shape feature. Finally, we predict the global scale factor and offset for the canonical shape with the global shape feature and an MLP. During training, we use an L2 loss to supervise the refined mesh points.
### Warping from Canonical To Task Space
#### 4.3.1 Feature Scattering with Canonical Coordinates
After obtaining the refined canonical (NOCS) coordinate prediction (Sec. 4.2) of a partial point cloud, we scatter the per-point feature generated by the Transformer (Sec. 4.1) into a \(32^{3}\) feature volume. The "scatter" operation is performed by copying each feature vector to the target location in the volume given by its predicted NOCS coordinates. All features mapped to the same volume index are aggregated with a channel-wise maximum operation, and all volume locations with no corresponding feature vectors are filled with zeros. The feature volume is then fed into a 3D UNet [13], yielding a dense feature volume \(\mathcal{V}\) for warp field prediction.
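A minimal PyTorch sketch of this scatter step is given below; the helper name and the use of `scatter_reduce_` (available in recent PyTorch releases) are our choices, while only the \(32^{3}\) grid, the channel-wise maximum and the zero filling come from the text:

```python
import torch

def scatter_to_volume(feats: torch.Tensor, nocs: torch.Tensor, grid: int = 32):
    """Scatter per-point features (N, C) into a grid^3 volume using the
    predicted NOCS coordinates (N, 3) in [0, 1). Cells hit by several
    points keep the channel-wise maximum; empty cells stay zero."""
    N, C = feats.shape
    idx = (nocs.clamp(0.0, 1.0 - 1e-6) * grid).long()         # voxel indices
    flat = (idx[:, 0] * grid + idx[:, 1]) * grid + idx[:, 2]  # flattened index, (N,)
    vol = feats.new_zeros(grid ** 3, C)
    vol.scatter_reduce_(0, flat.unsqueeze(-1).expand(N, C), feats,
                        reduce="amax", include_self=False)
    return vol.view(grid, grid, grid, C)
```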
#### 4.3.2 Warp Field Prediction
Finally, we map the refined canonical shape (Sec. 4.2) from canonical space to task space. The output contains the full configuration of the garment, including the occluded parts. It is achieved by warp field prediction [11], which is an implicit neural function \(w(p;\mathcal{V})\in\mathbf{R}^{3}\) that takes a query point \(p\) in the canonical space as input and infers the corresponding location of \(p\) in task space. Here \(w(\cdot)\) is a learned MLP. We use L2 loss to supervise the warp field prediction. In training, the query points are sampled from the canonical mesh surface. In inference, the query points are generated by our Mesh Refiner (Sec. 4.2).
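One plausible realization of \(w(p;\mathcal{V})\) samples the feature volume trilinearly at each query point and feeds the result, together with the point itself, to the MLP. Both choices are our assumptions, as the paper does not detail the query mechanism:

```python
import torch
import torch.nn.functional as F

def warp_field(query_nocs, feat_volume, mlp):
    """query_nocs: (B, M, 3) canonical points in [0, 1];
    feat_volume: (B, C, D, H, W) volume from the 3D UNet;
    mlp: maps (C + 3) -> 3 task-space coordinates per point."""
    B = query_nocs.shape[0]
    # grid_sample expects coordinates in [-1, 1]
    grid = (query_nocs * 2.0 - 1.0).view(B, -1, 1, 1, 3)
    feats = F.grid_sample(feat_volume, grid, align_corners=True)  # (B, C, M, 1, 1)
    feats = feats.squeeze(-1).squeeze(-1).permute(0, 2, 1)        # (B, M, C)
    return mlp(torch.cat([feats, query_nocs], dim=-1))            # (B, M, 3)
```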
Figure 3: PC-Mesh Fusion Refiner
## 5 Experiments
### Implementation Details
We implement our method with Pytorch [4] and use Adam optimizer with a learning rate of 0.0001. The training stage takes about 150 epochs to converge, which lasts for 1-3 days on an RTX 3090 GPU, depending on the training dataset sizes for different categories. We randomly sample 4000 points from the input partial point cloud and 6000 points from the input canonical mesh surface for each frame. During training, we randomly add noise to the partial point-cloud canonical coordinates \(\mathbf{P}_{1}^{nocs}\) of the previous frame by randomly generating a NOCS scale factor \(\mathbf{s}_{pc}\in[0.8,1.2]^{3}\) and a global NOCS offset \(\mathbf{o}_{pc}\in[0,0.1]^{3}\). We also add noise to the input canonical mesh by randomly generating a global NOCS scale factor \(\mathbf{s}_{mesh}\in[0.8,1.2]^{3}\) during training. Please see the supplementary files for further details on training, inference, and network structure.
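The noise model of this paragraph can be sketched as follows; whether the scaling acts about the origin or the shape centroid is not specified, so the origin-centred version below is an assumption:

```python
import torch

def perturb_first_frame(pc_nocs: torch.Tensor, mesh_nocs: torch.Tensor):
    """pc_nocs: (N, 3) previous-frame NOCS coordinates;
    mesh_nocs: (M, 3) canonical mesh points."""
    s_pc = torch.empty(3).uniform_(0.8, 1.2)    # NOCS scale in [0.8, 1.2]^3
    o_pc = torch.empty(3).uniform_(0.0, 0.1)    # global NOCS offset in [0, 0.1]^3
    s_mesh = torch.empty(3).uniform_(0.8, 1.2)  # canonical-mesh scale
    return pc_nocs * s_pc + o_pc, mesh_nocs * s_mesh
```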
### Metrics
**NOCS Coordinate Distance** (\(D_{nocs}\)). We calculate the point-wise L2 distance between the predicted NOCS coordinate of the partial point cloud with the ground-truth NOCS labels. This metric evaluates the quality of per-point NOCS coordinate prediction for input partial point cloud.
**Chamfer Distance** (\(D_{chamf}\)). We calculate the Chamfer distance in centimeters between the reconstructed mesh points and the ground-truth mesh points in task space. This metric can evaluate the quality of _surface reconstruction_.
**Correspondence Distance** (\(D_{corr}\), \(A_{d}\)). We calculate point-wise L2 distance in centimeters between the reconstructed mesh and the ground-truth mesh for each frame in task space. The correspondences are based on the NOCS coordinates (each point on the predicted mesh will find the closest point on the ground-truth mesh in NOCS). This metric can evaluate the quality of garment _pose estimation_. In practice, we find the variance of the error distribution in different frames is very large, which makes the mean correspondence distance \(D_{corr}\) across all frames dominated by the worst cases. So we additionally introduce \(A_{d}\) which represents the accuracy (ratio of frames) with \(D_{corr}<d\).
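A compact sketch of the three metrics under the definitions above; the Chamfer convention (averaging the two one-sided means) and the tensor shapes are our choices:

```python
import torch

def chamfer_cm(pred: torch.Tensor, gt: torch.Tensor) -> torch.Tensor:
    """Symmetric Chamfer distance between two point sets (N, 3) and (M, 3)."""
    d = torch.cdist(pred, gt)  # (N, M) pairwise Euclidean distances
    return 0.5 * (d.min(dim=1).values.mean() + d.min(dim=0).values.mean())

def corr_dist_cm(pred_xyz, pred_nocs, gt_xyz, gt_nocs) -> torch.Tensor:
    """Match each predicted point to the closest GT point in NOCS space,
    then measure the task-space error of the matched pairs."""
    match = torch.cdist(pred_nocs, gt_nocs).argmin(dim=1)
    return (pred_xyz - gt_xyz[match]).norm(dim=-1).mean()

def accuracy_at(d_corr_per_frame: torch.Tensor, d: float) -> torch.Tensor:
    """A_d: ratio of frames with correspondence distance below d."""
    return (d_corr_per_frame < d).float().mean()
```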
### Experiment Results
#### 5.3.1 Main Experiments
**Baselines** In Tab. 1, we compare GarmentNets with two settings of our method:
**GarmentNets**[11]: As the only existing method for category-level garment pose estimation, GarmentNets focuses on the single-frame setting. We adapt it for tracking by predicting frame by frame.
**Ours (GT)**: Our tracking method given the ground-truth first-frame garment pose and the ground-truth canonical mesh as initialization.
**Ours (Pert.)**: Our tracking method when the first-frame garment pose and the input canonical shape are perturbed with noise. Specifically, we use the same noise distribution in training (Sec. 5.1) which adds global NOCS scale and offset to the first-frame canonical coordinates of partial point-cloud and the canonical mesh. Additionally, we add per-point Gaussian noise (\(\delta\)=0.05) to the input canonical coordinates of the first frame during inference.
**Results** Tab. 1 summarizes the quantitative results on the VR-Folding dataset. In general, our method outperforms GarmentNets in all metrics by a large margin. On the challenging \(A_{3cm}\) metric in _Folding_ task and \(A_{5cm}\) in _Flattening_ task, GarmentNets has very low performance (\(0.8\%\) in _Shirt Folding_), while our method achieves much higher scores (\(29.0\%\) in _Shirt Folding_), which proves that our method can generate more accurate predictions in videos compared to GarmentNets. Our method also outperforms GarmentNets on mean correspondence distance \(D_{corr}\) and chamfer distance \(D_{chamf}\), which proves that our method can do well in both _pose estimation_ and _surface reconstruction_ tasks. Even with perturbation on first-frame poses (Ours with Pert. in Tab. 1), our method only shows minor performance loss (\(37.9\%\to 36.6\%\) in _Top Folding_) compared to using ground-truth as first-frame pose.
We also present some qualitative results in Fig. 4 and Fig. 5. We can see from Fig. 5 that the prediction results of GarmentNets are very unstable, because it performs mesh reconstruction for each frame individually and cannot utilize the information from previous frames. Conversely, our method can leverage the input canonical mesh and inter-frame information to predict more stable and accurate results. Besides, GarmentNets suffers from the ambiguity brought by symmetry (mistaking the front side for the back side), which hampers its ability to predict accurate canonical coordinates (see Fig. 4). In contrast, our method predicts much more accurate canonical coordinates (\(D_{nocs}\) 0.162 vs. 0.039 for _Pants Folding_ in Tab. 1).
Figure 4: The canonical coordinate prediction results on the VR-Folding dataset.
#### 5.3.2 Ablation Study
**NOCS Positional Embedding.** In our method, NOCS positional embedding (Sec. 4.1.3) is the crucial design choice to leverage inter-frame correspondence information. As shown in Fig. 4, if we remove the NOCS positional embedding from our network, the network will suffer from the same ambiguity problem as GarmentNets due to symmetry.
**NOCS Refiner.** Unlike rigid object tracking, garment tracking has a higher demand for avoiding error accumulation in long videos, because the error distribution of predicted pose in testing can be very different from that in training due to the near-infinite DOF. As shown in Tab. 2, our proposed PC Refiner (Sec. 4.2) greatly influences the performance due to its ability to refine NOCS coordinate predictions in each frame. Besides, the Mesh Refiner (Sec. 4.2) also contributes to a slight performance improvement, indicating that our network has more tolerance for canonical mesh errors than NOCS coordinate errors.
**Feature Extractor.** As shown in Tab. 2, after we replace our feature extractor (ResUNet3D [12] based on sparse 3D convolution) with PointNet++ [33], the overall performance drops a lot. Thus high-resolution 3D convolution network should be a better choice for this task.
#### 5.3.3 Robustness
**Robustness against Noise** We test our method under different levels of noise perturbation by increasing the initial pose noise level described in Sec. 5.3.1 by 1 or 2 times. Specifically, we augment the point-cloud NOCS coordinates of the first frame with a global scaling factor \(s_{pc}\), a
| Type | Method | Init. | \(A_{3cm}\uparrow\) | \(A_{5cm}\uparrow\) | \(D_{corr}\downarrow\) | \(D_{chamf}\downarrow\) | \(D_{nocs}\downarrow\) | \(A_{5cm}\uparrow\) | \(A_{10cm}\uparrow\) | \(D_{corr}\downarrow\) | \(D_{chamf}\downarrow\) | \(D_{nocs}\downarrow\) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Shirt | GarmentNets [11] | N/A | 0.8% | 21.5% | 6.40 | 1.58 | 0.221 | 13.2% | 59.4% | 10.54 | 3.54 | 0.135 |
| Shirt | Ours | GT | **29.8%** | 85.8% | **3.88** | **1.16** | **0.051** | **30.7%** | **83.4%** | **8.63** | **1.78** | **0.105** |
| Shirt | Ours | Pert. | 29.0% | **85.9%** | 3.88 | 1.18 | 0.052 | 25.4% | 81.6% | 8.94 | 1.85 | 0.109 |
| Pants | GarmentNets [11] | N/A | 16.2% | 69.5% | 4.43 | 1.30 | 0.162 | 1.5% | 42.4% | 12.54 | 4.19 | 0.185 |
| Pants | Ours | GT | **47.3%** | **94.0%** | **3.26** | **1.07** | **0.039** | **31.3%** | **78.2%** | **8.97** | **1.64** | **0.113** |
| Pants | Ours | Pert. | 42.8% | 93.6% | 3.35 | 1.10 | 0.039 | 30.7% | 76.9% | 9.55 | 2.71 | 0.143 |
| Top | GarmentNets [11] | N/A | 10.3% | 53.8% | 5.19 | 1.51 | 0.148 | 21.6% | 57.6% | 9.98 | 2.13 | 0.174 |
| Top | Ours | GT | **37.9%** | **85.9%** | **3.75** | **0.99** | **0.051** | **36.5%** | **69.0%** | **9.41** | **1.59** | **0.113** |
| Top | Ours | Pert. | 36.6% | 86.1% | 3.76 | 1.00 | 0.051 | 33.5% | 68.1% | 9.61 | 1.62 | 0.116 |
| Skirt | GarmentNets [11] | N/A | 1.1% | 30.3% | 6.95 | 1.89 | 0.239 | 0.1% | 7.9% | 18.48 | 5.99 | 0.287 |
| Skirt | Ours | GT | **23.5%** | **71.3%** | **4.61** | **1.33** | **0.060** | **5.4%** | **39.4%** | **16.09** | **2.02** | **0.199** |
| Skirt | Ours | Pert. | 22.8% | 70.6% | 4.72 | 1.36 | 0.060 | 2.3% | 35.5% | 16.55 | 2.15 | 0.207 |

Table 1: Quantitative results on the VR-Folding dataset. The first five metric columns refer to the _Folding_ task, the last five to the _Flattening_ task.
Figure 5: The qualitative results of pose estimation for **unseen** instances in VR-Folding dataset. In the long sequence tracking (shown in the lower part), our prediction still keeps high consistency with GT, while GarmentNets outputs a series of meshes that lack stability.
global offset \(o_{pc}\), and Gaussian noise with standard deviation \(\delta\). Besides, we also augment the canonical mesh with a global scaling factor \(s_{mesh}\). As shown in Fig. 6 (left), we can see that our method performs well at _surface reconstruction_ (_i.e_. \(D_{chamf}\)) under high noise levels. Please see the supplementary files for details of the noise parameters.
**Robustness under Large Frame Interval** Tracking at large frame intervals with huge object deformations can be very challenging. Therefore, we uniformly drop frames in videos and only keep part (_i.e_. 1/2, 1/4, 1/6, and 1/8) of the frames. The results are shown in Fig. 6 (right), we can see that our method is more robust against missing frames on _Folding_ task compared to _Flattening_ task.
#### 5.3.4 Tracking Speed
On a single RTX 3090 GPU, GarmentNets takes 100ms to pass the backbone, 7ms for volume query, and 170ms for Marching Cubes [25]. It results in a runtime of 3.6 FPS. While in our design, we adopt a faster backbone that costs only 45ms and eliminates the time-consuming Marching Cubes. Our NOCS refiner and warp field prediction take 12ms and 7ms respectively. Our method can achieve 15 FPS during inference, which is \(\sim 5\) times faster than [11].
#### 5.3.5 Generalization Ability
**Neural Prediction as First-Frame Pose** In order to evaluate the generalization ability of our method, we directly use GarmentNets prediction (_i.e_. canonical coordinates and mesh) as the first-frame pose during inference. As shown in Tab. 3, our method still outperforms GarmentNets by a large margin without any data augmentation related to GarmentNets during training.
**Real World Experiments** We collect some real-world RGB-D videos of garment manipulation with Realsense L515 [5] LiDAR cameras. Our method can directly track garment poses for novel garments in the real world with a model trained only on our simulated data. Please see the supplementary files for more qualitative results.
## 6 Conclusion and Future Works
In this work, we propose a complete framework for garment pose tracking, including the data collection (_i.e_. VR-Garment system), dataset (_i.e_. VR-Folding), and a strong approach (_i.e_. GarmentTracking) which is both quantitatively and qualitatively better than the baseline approach. As a platform, we believe VR-Garment can innovate the dataset collection for other kinds of deformable objects. As a manipulation dataset, we are interested in using VR-Folding for robot imitation learning. As a strong baseline, we hope GarmentTracking can facilitate future research in this challenging direction.
## Acknowledgement
This work was supported by the National Key Research and Development Project of China (2021ZD0110704), Shanghai Municipal Science and Technology Major Project (2021SHZDZX0102), Shanghai Qi Zhi Institute, Shanghai Science and Technology Commission (21511101200) and OpenBayes.
| Type | Method | \(A_{3cm}\uparrow\) | \(A_{5cm}\uparrow\) | \(D_{corr}\downarrow\) | \(D_{chamf}\downarrow\) | \(D_{nocs}\downarrow\) | \(A_{5cm}\uparrow\) | \(A_{10cm}\uparrow\) | \(D_{corr}\downarrow\) | \(D_{chamf}\downarrow\) | \(D_{nocs}\downarrow\) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Shirt | Ours | **29.0%** | **85.9%** | **3.88** | **1.18** | 0.052 | **25.4%** | **81.6%** | **8.94** | **1.85** | **0.109** |
| Shirt | Ours w.o. PC refiner | 4.1% | 25.3% | 7.76 | 1.70 | 0.115 | 0.6% | 27.4% | 17.42 | 2.89 | 0.229 |
| Shirt | Ours w.o. Mesh refiner | 26.8% | 83.6% | 3.92 | 1.19 | **0.048** | 23.5% | 81.2% | 9.18 | 1.88 | 0.110 |
| Shirt | Ours w. PointNet++ | 22.2% | 34.7% | 6.53 | 1.54 | 0.085 | 13.3% | 53.2% | 14.21 | 1.93 | 0.174 |
| Pants | Ours | **42.8%** | **93.6%** | **3.35** | **1.10** | **0.039** | **30.7%** | **76.9%** | **9.55** | **2.71** | 0.143 |
| Pants | Ours w.o. PC refiner | 23.1% | 70.0% | 4.84 | 1.30 | 0.072 | 5.8% | 49.2% | 13.72 | 3.46 | 0.165 |
| Pants | Ours w.o. Mesh refiner | 33.5% | 92.2% | 3.52 | 1.18 | 0.039 | 22.5% | 75.2% | 9.76 | 2.78 | 0.148 |
| Pants | Ours w. PointNet++ | 38.9% | 73.6% | 4.91 | 1.33 | 0.066 | 8.0% | 69.1% | 10.12 | 2.81 | **0.125** |

Table 2: Results of ablative experiments on the VR-Folding dataset. The first five metric columns refer to the _Folding_ task, the last five to the _Flattening_ task.
| Type | Method | \(A_{3cm}\uparrow\) | \(A_{5cm}\uparrow\) | \(D_{corr}\downarrow\) | \(D_{chamf}\downarrow\) | \(D_{nocs}\downarrow\) |
| --- | --- | --- | --- | --- | --- | --- |
| Shirt | Ours* | **25.4%** | **78.9%** | **4.04** | **1.18** | **0.052** |
| Shirt | GarmentNets | 0.8% | 21.5% | 6.40 | 1.58 | 0.221 |
| Pants | Ours* | **45.1%** | **92.2%** | **3.33** | **1.16** | **0.040** |
| Pants | GarmentNets | 16.2% | 69.5% | 4.43 | 1.30 | 0.162 |
| Top | Ours* | **21.1%** | **61.9%** | **4.82** | **1.11** | **0.065** |
| Top | GarmentNets | 10.3% | 53.8% | 5.19 | 1.51 | 0.148 |
| Skirt | Ours* | **14.7%** | **65.9%** | **5.36** | **1.46** | **0.078** |
| Skirt | GarmentNets | 1.1% | 30.3% | 6.95 | 1.89 | 0.239 |

Table 3: Results of our method using GarmentNets prediction as the first-frame pose on the _Folding_ task.
Figure 6: The robustness experiments. |
2307.10735 | A model for pion collinear parton distribution function and form factor | We developed a model for the pion light-front wave function (LFWF) that
incorporates valence, sea and gluon degrees of freedom. Using the LFWF overlap
representation, we derived parametrizations for the pion parton distribution
functions and the electromagnetic form factor. These parametrizations depend on
two distinct sets of parameters, enabling separate fits of the longitudinal-
and transverse-momentum dependencies of the LFWF. The pion PDFs are extracted
from available Drell-Yan and photon-production data using the xFitter framework
and are found well compatible with existing extractions. Furthermore, the fit
of the electromagnetic form factor of the pion to all the available
experimental data works quite successfully. | Simone Venturini, Barbara Pasquini, Simone Rodini | 2023-07-20T10:01:22Z | http://arxiv.org/abs/2307.10735v1 | # A model for pion collinear parton distribution function and form factor
###### Abstract
We developed a model for the pion light-front wave function (LFWF) that incorporates valence, sea and gluon degrees of freedom. Using the LFWF overlap representation, we derived parametrizations for the pion parton distribution functions and the electromagnetic form factor. These parametrizations depend on two distinct sets of parameters, enabling separate fits of the longitudinal- and transverse-momentum dependences of the LFWF. The pion PDFs are extracted from available Drell-Yan and photon-production data using the xFitter framework and are found to be well compatible with existing extractions. Furthermore, the fit of the electromagnetic form factor of the pion to all the available experimental data works quite successfully.
PRESENTED AT
DIS2023: XXX International Workshop on Deep-Inelastic Scattering and Related Subjects,
Michigan State University, USA, 27-31 March 2023
## I Light-Front Wave Functions of the pion
Light-front quantization is an effective approach in high-energy scattering, especially for describing the hadronic matrix elements that determine the soft contributions in inclusive and exclusive reactions. These matrix elements can be expressed through the overlap of light-front wave functions (LFWFs) associated with distinct parton configurations. Introducing the light-front Fock-state expansion and truncating the series at the four-parton components, the pion state can be represented as
\[\ket{\pi(P)}=\ket{\pi(P)}_{q\bar{q}}+\ket{\pi(P)}_{q\bar{q}g}+\ket{\pi(P)}_{q\bar{q}gg}+\sum_{s\bar{s}}\ket{\pi(P)}_{q\bar{q}\,s\bar{s}}\,, \tag{1}\]
where \(q=u,d\) and the sum over \(s\bar{s}\) runs over the \(N_{f}\) flavor pairs of the sea quarks (\(u\bar{u}\), \(d\bar{d}\), \(s\bar{s}\) at the model scale). Restricting ourselves to the contributions with zero orbital angular momentum, the LFWF for each Fock-state component in Eq. (1) can be written in a model-independent way in terms of light-front wave-amplitudes (LFWAs) as [1]
\[\ket{\pi(P)}_{q\bar{q}}^{l_{z}=0}=\int\mathrm{d}[1]\mathrm{d}[2]\frac{\delta_{c_{1}c_{2}}}{\sqrt{3}}\psi_{q\bar{q}}^{(1)}(1,2)\left[q_{c_{1}\uparrow}^{\dagger}(1)\bar{q}_{c_{2}\downarrow}^{\dagger}(2)-q_{c_{1}\downarrow}^{\dagger}(1)\bar{q}_{c_{2}\uparrow}^{\dagger}(2)\right]\ket{0}, \tag{2}\]
\[|\pi(P)\rangle_{q\bar{q}g}^{l_{z}=0} = \int{\rm d}[1]{\rm d}[2]{\rm d}[3]\frac{T_{c_{1}c_{2}}^{a}}{2}\psi_{q\bar{q}g}^{(1)}(1,2,3)\left[\left(q\bar{q}\right)_{A,1}^{\dagger}g_{a\downarrow}^{\dagger}\left(3\right)-\left(q\bar{q}\right)_{A,-1}^{\dagger}g_{a\uparrow}^{\dagger}\left(3\right)\right]|0\rangle\,, \tag{3}\] \[|\pi(P)\rangle_{q\bar{q}gg}^{l_{z}=0} = \int{\rm d}[1]{\rm d}[2]{\rm d}[3]{\rm d}[4]\frac{\delta_{c_{1}c_{2}}\delta^{ab}}{\sqrt{24}}\bigg\{\psi_{q\bar{q}gg}^{(1)}(1,2,3,4)\left(q\bar{q}\right)_{A,0}^{\dagger}\left(gg\right)_{S,0}^{\dagger}\tag{4}\] \[+ \psi_{q\bar{q}gg}^{(2)}\left(1,2,3,4\right)\left(q\bar{q}\right)_{S,0}^{\dagger}\left(gg\right)_{A,0}^{\dagger}\bigg\}|0\rangle\,,\] \[|\pi(P)\rangle_{q\bar{q}s\bar{s}}^{l_{z}=0} = \int{\rm d}[1]{\rm d}[2]{\rm d}[3]{\rm d}[4]\frac{\delta_{c_{1}c_{2}}\delta_{c_{3}c_{4}}}{3}\bigg\{\psi_{q\bar{q}s\bar{s}}^{(1)}(1,2,3,4)\left(q\bar{q}\right)_{A,0}^{\dagger}\left(s\bar{s}\right)_{S,0}^{\dagger}\tag{5}\] \[+ \psi_{q\bar{q}s\bar{s}}^{(2)}\left(1,2,3,4\right)\left(q\bar{q}\right)_{S,0}^{\dagger}\left(s\bar{s}\right)_{A,0}^{\dagger}\] \[+ \psi_{q\bar{q}s\bar{s}}^{(3)}\left(1,2,3,4\right)\left[\left(q\bar{q}\right)_{A,1}^{\dagger}\left(s\bar{s}\right)_{A,-1}^{\dagger}-\left(q\bar{q}\right)_{A,-1}^{\dagger}\left(s\bar{s}\right)_{A,1}^{\dagger}\right]\bigg\}\,|0\rangle\,.\]
The measures in Eqs. (2)-(5) are defined as:
\[\prod_{i=1}^{N}{\rm d}[i]=\left[dx\right]_{N}\left[d^{2}\mathbf{k}_{\perp} \right]_{N}, \tag{6}\]
\[\left[dx\right]_{N}=\prod_{i=1}^{N}\frac{dx_{i}}{\sqrt{x_{i}}}\delta\left(1- \sum_{i=1}^{N}x_{i}\right),\qquad\qquad\left[d^{2}\mathbf{k}_{\perp}\right]_{N}= \frac{1}{[2(2\pi)^{3}]^{N-1}}\prod_{i=1}^{N}d^{2}\mathbf{k}_{\perp i}\delta^{(2)} \left(\sum_{i=1}^{N}\mathbf{k}_{\perp i}\right). \tag{7}\]
The LFWAs \(\psi_{\beta}^{(i)}\) are generic functions of the parton variables \(i=(x_{i},\mathbf{k}_{\perp i})\), where \(x_{i}\) and \(\mathbf{k}_{\perp i}\) are, respectively, the fraction of the pion momentum in the collinear direction carried by the \(i\)-th parton and the transverse momentum of the \(i\)-th parton. Furthermore, \(q_{c\lambda}^{\dagger},\bar{q}_{c\lambda}^{\dagger}\) and \(g_{a\lambda}^{\dagger}\) are creation operators of quarks, antiquarks and gluons, respectively, labelled by the specific quantum numbers of the partons: the color (\(c\) and \(a\)), the helicity (\(\lambda=\uparrow\) or \(\downarrow\)), and, in the case of the fermions, the flavor \(q\). For brevity, in Eqs. (2)-(5) we introduced the operators \(\left(p_{1}p_{2}\right)_{A/S,0/1}^{\dagger}\): these are anti-symmetric (\(A\)) or symmetric (\(S\)) combinations of creation operators of the partons \(p_{1}\) and \(p_{2}\) with total helicity 0 or 1. The subscript \(s\bar{s}\) refers to the sea-quark pairs \(u\bar{u}\), \(d\bar{d}\), \(s\bar{s}\), and \(T_{ij}^{a}=\frac{\lambda_{ij}^{a}}{2}\) are the SU(3) color matrices.
The quark and gluon PDFs of the pion are defined as
\[f_{1}^{q}(x) = \int\frac{dz^{-}}{2(2\pi)}e^{ik^{+}z^{-}}\langle\pi(p)|\bar{\psi} (0)\gamma^{+}\psi(z)|\pi(p)\rangle\Big{|}_{\begin{subarray}{c}z^{+}=0\\ \mathbf{z}_{\perp}=\mathbf{0}\end{subarray}}, \tag{8}\] \[f_{1}^{g}(x) = \frac{1}{xp^{+}}\int\frac{dz^{-}}{2\pi}e^{ik^{+}z^{-}}\langle\pi( p)|G^{+\mu}(0)G_{\mu+}(z)|\pi(p)\rangle\Big{|}_{\begin{subarray}{c}z^{+}=0\\ \mathbf{z}_{\perp}=\mathbf{0}\end{subarray}}, \tag{9}\]
where \(\psi\) is the quark field and \(G^{\mu\nu}\) is the gluon field strength tensor. By inserting into Eqs. (8) and (9) the pion state of Eq. (1), one obtains the representation of the pion PDFs in terms of overlaps of LFWFs for each Fock-state component, as given in Ref. [2]. Moreover, by neglecting electroweak corrections and quark masses, charge symmetry imposes \(f_{1,\pi^{+}}^{u}=f_{1,\pi^{+}}^{\bar{d}}=f_{1,\pi^{-}}^{d}=f_{1,\pi^{-}}^{\bar{u}}=2f_{1,\pi^{0}}^{u}=2f_{1,\pi^{0}}^{\bar{u}}=2f_{1,\pi^{0}}^{d}=2f_{1,\pi^{0}}^{\bar{d}}\). Hereafter, we will refer to distributions in positively charged pions. Assuming also an SU(3)-symmetric sea, i.e., \(f_{1}^{\bar{u}}=f_{1}^{d}=f_{1}^{s}=f_{1}^{\bar{s}}\), we will consider three independent PDFs: the total valence contribution \(f_{1}^{v}\) and the total sea contribution \(f_{1}^{S}\), given by
\[f_{1}^{v} = f_{1}^{u_{v}}-f_{1}^{d_{v}}=\left(f_{1}^{u}-f_{1}^{\bar{u}}\right)-\left(f_{1}^{d}-f_{1}^{\bar{d}}\right)=2f_{1}^{u_{v}},\] \[f_{1}^{S} = 2f_{1}^{\bar{u}}+2f_{1}^{d}+f_{1}^{s}+f_{1}^{\bar{s}}=6f_{1}^{\bar{u}}, \tag{10}\]
and the gluon contribution \(f_{1}^{g}\).
As discussed in Ref. [2], we can analogously obtain the LFWF overlap representation for the pion e.m. form factor, starting from the definition
\[F_{\pi}(Q^{2})=\frac{1}{(p+p^{\prime})^{+}}\langle\pi(p^{\prime})|\bar{\psi}(0) \gamma^{+}\psi(0)|\pi(p)\rangle \tag{11}\]
where \(Q^{2}=-q^{2}>0\) and \(q=p^{\prime}-p\) is the four-momentum transfer.
### Parametrization
We have designed the LFWAs in such a way that the parameters associated with longitudinal and transverse momentum dependence are treated separately during the fitting process of the pion PDFs and electromagnetic (e.m.) form factor (FF). Specifically, the LFWAs are written in the following general form
\[\psi_{\beta}^{(i)}\left(1,2,\ldots,N\right)=\phi_{\beta}^{(i)}\left(x_{1},x_{2},\ldots,x_{N}\right)\Omega_{\beta}^{(i)}\left(x_{1},\mathbf{k}_{\perp 1},x_{2},\mathbf{k}_{ \perp 2},\ldots,x_{N},\mathbf{k}_{\perp N}\right), \tag{12}\]
where the functions \(\phi_{\beta}^{(i)}\) can be expressed as linear combinations of pion distribution amplitudes and the \(\Omega_{\beta}^{(i)}\) are modified \(x\)-dependent Gaussian functions. The latter are normalized in such a way that the fit of the collinear PDFs does not contain any spurious dependence on the parameters of the transverse-momentum space. In total the model has a set \(\mathcal{X}\) of six parameters for the longitudinal momentum dependence, fitted to pion PDFs, and a set \(\mathcal{A}\) of four parameters for the transverse momentum dependence, constrained from the fit to the pion e.m. FF.
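For illustration, one natural way to encode this normalization (the exact functional forms used in the fit are given in Ref. [2]) is to require

\[\int\left[d^{2}\mathbf{k}_{\perp}\right]_{N}\left|\Omega_{\beta}^{(i)}\left(x_{1},\mathbf{k}_{\perp 1},\ldots,x_{N},\mathbf{k}_{\perp N}\right)\right|^{2}=1\]

for every configuration of the momentum fractions \(\{x_{j}\}\), so that integrating the squared LFWA over transverse momenta leaves only \(|\phi_{\beta}^{(i)}(x_{1},\ldots,x_{N})|^{2}\), and the collinear PDF fit depends on the set \(\mathcal{X}\) alone.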
## II Extraction of the pion PDF and electromagnetic form factor
The fit of the pion PDFs has been performed by using the open-source tool xFitter [4; 7]. The data included in the analysis are from the NA10 [8], E615 [9] and WA70 [10] experiments. The complete data set has been cut to exclude
Figure 1: \(xf_{1}\) as function of \(x\) for the total valence (upper panel), total sea (left panel in the bottom) and gluon (right panel in the bottom) contributions at \(\mu^{2}=5\) GeV\({}^{2}\). The light (dark) red bands show the results of this work with the \(3\sigma\) (\(1\sigma\)) uncertainty in comparison with the results from the JAM collaboration [3] (light blue bands), the analysis of xFitter collaboration [4] (grey bands), the BCP fit of Ref. [5] (yellow bands) and the GRVPI1 fit [6] (solid black curves).
the kinematic region corresponding to the \(J/\psi\) and \(\Upsilon\) resonances. We fixed the initial scale to \(\mu_{0}=0.85\) GeV and the factorization scale \(\mu_{F}\) and renormalization scale \(\mu_{R}\) to \(\mu_{F}=\mu_{R}=0.8\) GeV. The reduced chi-squared from a single minimization is \(\hat{\chi}^{2}/N_{d.o.f.}=0.88\) for the number of degrees of freedom \(N_{d.o.f.}=260-6=254\). The fit to the real data has been repeated 1000 times, varying the experimental points by random Gaussian shifts for both the statistical and systematic uncertainties, thus obtaining 1000 bootstrap replicas. The renormalization and factorization scales have been varied replica by replica in such a way that \(\mu_{0}\leq\mu_{F}\leq 2\mu_{0}\) and \(\mu_{0}\leq\mu_{R}\leq 2\mu_{0}\).
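A minimal sketch of this bootstrap-replica procedure (the data arrays are toy stand-ins; the actual fits are run through xFitter):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the measured cross-section points and their uncertainties.
y = np.array([1.20, 0.85, 0.42])
stat_err = np.array([0.05, 0.04, 0.03])
sys_err = np.array([0.06, 0.05, 0.04])

def make_replica():
    """One bootstrap replica: shift every point by Gaussian noise drawn from
    its statistical and systematic uncertainty (assumed uncorrelated here)."""
    return y + rng.normal(0.0, stat_err) + rng.normal(0.0, sys_err)

replicas = [make_replica() for _ in range(1000)]

# The factorization/renormalization scales are also drawn per replica:
mu0 = 0.85  # GeV
scales = [(rng.uniform(mu0, 2 * mu0), rng.uniform(mu0, 2 * mu0))
          for _ in replicas]
# Each (replica, muF, muR) triple is then refit; the spread of the 1000
# fitted PDF sets gives the uncertainty bands shown in Fig. 1.
```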
In Fig. 1 we show our results for the pion PDFs (red bands) at the scale \(\mu^{2}=5\) GeV\({}^{2}\) in comparison with different analyses: the GRVPI1 solution [6] (solid black curves); the xFitter results [4] (grey bands); the JAM extraction [3] (light blue bands); and the results within the statistical model of Bourrely-Chang-Peng (BCP) [5] (yellow bands). Overall, the modern analyses give compatible results within the relative error bands. The agreement is better for the valence and sea contributions at larger \(x\) and for the gluon PDF in the small-\(x\) region.
For the fit of the FF, we included 100 experimental points in a \(Q^{2}\) range from 0.015 GeV\({}^{2}\) to 9.77 GeV\({}^{2}\), corresponding to the experimental measurements described in the legend of Fig. 2. The FF depends both on the set \(\mathcal{X}\), fixed from the pion PDF fit, and the set \(\mathcal{A}\), which contains four distinct elements to be determined from the FF fit. The bootstrap replica method is used to propagate both the experimental uncertainties and the uncertainties associated with the parameters of the set \(\mathcal{X}\), which are not directly free fitting variables. The method is described in detail in Ref. [2]. In the end, the best fit produces a reduced chi-squared of 1.19.
In Fig. 2, we show the results for the square of the pion e.m. form factor with the inner (dark blue) band representing the 68% uncertainty and the external (light blue) band showing the 99.7% uncertainty. Agreement with the different data sets is qualitatively evident. We stress that the two bands incorporate the error propagation of the PDF parameters, representing therefore more than just the experimental uncertainty on the FF.
## III Conclusions
In this study, we have presented an extraction of the pion PDFs and e.m. FF using a light-front model approach that incorporates contributions from valence quarks, sea quarks, and gluons. We have developed a model for the pion LFWFs, ensuring that the fit of the collinear PDFs remains free from any spurious dependence on the parameters in the transverse-momentum space. These parameters are separately fitted to the available data on the pion e.m. form factor. The approach presented in this study serves as a proof-of-principle for achieving a unified description of hadron distribution functions, encompassing both the longitudinal and transverse momentum dynamics of partons within hadrons. The ultimate objective is to extend this framework to incorporate transverse-momentum dependent parton distributions (TMDs) and generalized parton distributions (GPDs) in a global fit. This will leverage the wealth of data expected from upcoming experiments at JLab, COMPASS++/AMBER, and future electron-ion colliders,
Figure 2: Fit results for the square of the pion electromagnetic form factor as function of \(Q^{2}\). The dark (light) blue band shows the 68% (99.7%) of the replicas. The experimental data correspond to Hub08 [11], Hor06 [12], Ame86 [13], Vol01 [14], Beh78 [15], Dal82 [16; 17], Ack78 [18].
taking the analysis a step further than previous studies, which have considered only a simultaneous analysis of PDFs and TMDs [19], or have focused solely on TMDs [20; 21] and GPDs [22] separately.
|
2301.05908 | An Order-Complexity Model for Aesthetic Quality Assessment of Symbolic
Homophony Music Scores | Computational aesthetics evaluation has made great achievements in the field
of visual arts, but the research work on music still needs to be explored.
Although the existing work of music generation is very substantial, the quality
of music score generated by AI is relatively poor compared with that created by
human composers. The music scores created by AI are usually monotonous and
devoid of emotion. Based on Birkhoff's aesthetic measure, this paper proposes
an objective quantitative evaluation method for homophony music score aesthetic
quality assessment. The main contributions of our work are as follows: first,
we put forward a homophony music score aesthetic model to objectively evaluate
the quality of music score as a baseline model; second, we put forward eight
basic music features and four music aesthetic features. | Xin Jin, Wu Zhou, Jinyu Wang, Duo Xu, Yiqing Rong, Shuai Cui | 2023-01-14T12:30:16Z | http://arxiv.org/abs/2301.05908v1 | # An Order-Complexity Model for aesthetic Quality Assessment of Symbolic Homophony Music Scores
###### Abstract
Computational aesthetics evaluation has made great achievements in the field of visual arts, but the research work on music still needs to be explored. Although the existing work of music generation is very substantial, the quality of music score generated by AI is relatively poor compared with that created by human composers. The music scores created by AI are usually monotonous and devoid of emotion. Based on Birkhoff's aesthetic measure, this paper proposes an objective quantitative evaluation method for homophony music score aesthetic quality assessment. The main contributions of our work are as follows: first, we put forward a homophony music score aesthetic model to objectively evaluate the quality of music score as a baseline model; second, we put forward eight basic music features and four music aesthetic features.
Xin Jin, Wu Zhou, Jinyu Wang, Duo Xu, Yiqing Rong, Shuai Cui

Computational aesthetics, Music score evaluation, Birkhoff's measure, Music aesthetic features
## 1 Introduction
Computational aesthetics evaluation [1] enables computers to make qualitative or quantitative aesthetic judgments on works of art. These works of art usually include painting, music and design. It is meaningful for computers to realize beauty because this can guide AI generation tasks.
Although the existing work on music generation is very mature, the quality of music scores generated by AI is relatively poor compared with those created by human composers. This is probably because the essence of the AI generation task is to predict the probability of the next music unit being played, and the lack of prior music theory knowledge makes the music generated by AI sound unpleasant.
There are three main steps in the production of pop music: composition of the music score, arrangement, and finally performance by the performer. We hope that the quality of a score can be evaluated at the composition stage, so as to eliminate the interference of different performers' skill levels with the evaluation of music score quality.
Due to the lack of a music-score dataset with labeled aesthetic scores, analogous to AVA [2] in the field of image aesthetics, we adopt the traditional aesthetic measure method to study the aesthetic model. Traditional aesthetic measures can analyze the beauty of objects from the perspective of the information they contain.
Our goal is to create a music score aesthetic assessment model that can objectively distinguish the good from the bad.
In this paper, Birkhoff's method [3] was selected to conduct a study of aesthetic quality assessment of music score from the perspective of information theory. Birkhoff formalizes the aesthetic measure of an object into the quotient between order and complexity:
\[M=\frac{O}{C} \tag{1}\]
Fig 1 briefly describes the content of our work. The main contributions of our work are as follows:
* We put forward a score aesthetic assessment model to objectively evaluate the quality of homophony music score as a baseline score aesthetic assessment model.
* We put forward and update eight basic music features and four music aesthetic features in combination with information theory and music theory.
## 2 Related Work
There is only one work of music aesthetic measure using information theory, which is the aesthetic measure of audio. Audio Oracle [4] (AO) uses Information Rate (IR) as an aesthetic measure. However, it cannot clarify what kind of specific aesthetic intention the system has, because repetition or redundancy (proposed by IR) has essentially different meanings, interpretations and values in art, and it is questionable to use information rate as the aesthetic measure.
Figure 1: The quality of symbolic score can be easily evaluated through the Score Aesthetic Assessment Model (SAAM).
There are three levels of music generation: score generation, performance generation and audio generation. The work we discuss in this section is related to music score generation. In order to make full use of music theory and study complexity, we will not discuss monophonic music, but we will discuss homophony. Homophony is a kind of multipart music, which has a melodic part and an accompanying part. In the generation of homophony music field, there are tasks such as directly generating melody and chord [5], generating melody through chord [6], and generating chord according to melody [7]. Although the existing generation technology is rather mature, the quality of music score created by AI is still low, which sounds monotonous and lacks emotion.
In order to evaluate the quality of music, the evaluation in the field of music generation is often divided into objective evaluation and subjective evaluation. The objective evaluation typically defines some statistical metrics for the music generated by AI, and the result is computed entirely by the computer. Many music toolkits have objective evaluation metrics packaged for direct use, such as MV2H [8] and MusPy [9]. MV2H evaluates how many errors there are between the generated music and the ground truth; MusPy provides basic statistical metrics for symbolic scores. The subjective evaluation generally includes listening tests and visual analysis; almost all AI generation tasks involve subjective evaluation experiments such as scoring and Turing tests, which are considered necessary and essential.
## 3 Score aesthetic assessment model
### Formalization of the Model
Based on Birkhoff's theory, information theory and music theory, we propose four aesthetic features: harmony, symmetry, entropy and K-Complexity. We linearly combine the order measures in the numerator and the complexity measures in the denominator. Detailed explanations of the measures are given in Sections 3.2 and 3.3. Fig 2 shows the process of our work. The music aesthetic measure formula is as follows:
\[Aesthetic\ Measure=\frac{\omega_{1}H+\omega_{2}S+\theta_{1}}{\omega_{3}E+ \omega_{4}K+\theta_{2}} \tag{2}\]
Where \(H\) is harmony, \(S\) is symmetry, \(E\) is entropy and \(K\) is K-complexity. The \(\omega\) are weights and the \(\theta\) are constants.
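A minimal sketch of Eq. (2) in Python (the default weights and constants below are placeholders; the fitted values are obtained as described in Section 4.2):

```python
def aesthetic_measure(H, S, E, K,
                      w=(1.0, 1.0, 1.0, 1.0), theta=(0.0, 1e-6)):
    """Order/complexity quotient of Eq. (2):
    (w1*H + w2*S + theta1) / (w3*E + w4*K + theta2)."""
    w1, w2, w3, w4 = w
    t1, t2 = theta  # the small default t2 only guards against a zero denominator
    return (w1 * H + w2 * S + t1) / (w3 * E + w4 * K + t2)
```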
### Order Measures
When objects have some characteristics of harmony, symmetry or order, they often have a certain sense of beauty. We quantify the order of music in two dimensions, harmony and symmetry. Harmony mainly calculates based on music theory knowledge, while symmetry mainly relies on some statistical information in music. Next, we use the linear combination of harmony and symmetry as the measure of order.
Figure 2: First, we do not tokenize the symbolic score. We extract the attributes in the symbolic score, which requires preprocessing. After preprocessing, we get the pitch, rhythm and chord attributes of the score, and the label of the score as the ground truth for classification. Then, we process the music attributes and extract 8 music features (small green boxes). Next, we train the samples through four logistic regression models (LR stands for logistic regression) and combine them to extract four aesthetic features (light blue boxes). Finally, we input the four aesthetic attributes into our model, use a sigmoid function to establish the loss of its error with the ground truth, and calculate the parameters of the aesthetic model.
#### 3.2.1 Interval Harmony
In music, the distance between two notes is called an interval. In particular, when an interval spans 12 semitones, we call it an octave. The interval classification can be found in the supplementary material; intervals are divided into five categories.
Mathematical and physical research shows that when the frequencies of two sounds form a simple integer ratio, they sound more pleasant together. Therefore, we propose a calculation method for interval harmony, with the following formula.
\[Interval\;Harmony=\sum_{i=1}^{12}\alpha_{i}*pir_{i}+\theta_{ih} \tag{3}\]
Among them, \(\alpha_{i}\) is the weight of the \(i\)-th interval class, \(pir_{i}\) is the proportion of the \(i\)-th interval class among all intervals, and \(\theta_{ih}\) is a constant.
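A sketch of how Eq. (3) can be computed with music21; the weights `alpha` are illustrative placeholders, and counting every pair of simultaneously sounding pitches is our assumption about the counting convention:

```python
from collections import Counter
from itertools import combinations

from music21 import converter

def interval_harmony(score_path, alpha, theta_ih=0.0):
    """Weighted sum of harmonic-interval frequencies, as in Eq. (3).

    alpha: 12 weights, one per interval class (0..11 semitones, octave-folded).
    """
    chords = converter.parse(score_path).chordify().recurse().getElementsByClass("Chord")
    counts = Counter()
    for ch in chords:
        for p1, p2 in combinations(ch.pitches, 2):  # notes sounding together
            counts[abs(p2.midi - p1.midi) % 12] += 1
    total = sum(counts.values()) or 1
    return sum(alpha[i] * counts[i] / total for i in range(12)) + theta_ih
```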
#### 3.2.2 Chord Progression Harmony
According to Schoenberg's theory of harmony [10], in-key chords are divided into three harmonic functions: tonic triad (T), subdominant triad (S) and dominant triad (D). An example of chord functions and series is shown in the supplementary material.
A complete harmonic progression starts from the tonic triad, proceeds to the subdominant triad, then to the dominant triad, and finally returns to the tonic triad to complete a full cycle; this is called a complete progression. In the usual sense, harmonic progression is the connection of chords within a certain harmonic range in tonal music.
There are many ways to quantify harmonic progression. In this paper, we refer to the method of Maria [11]. In our work, we take the average value of the progression tension to obtain a quantitative chord progression harmony, calculated with the following formula:
\[\begin{split} Chord\;Progression\;Harmony=\lambda_{1}d_{1}(T_{i},T _{i-1})+\\ \lambda_{2}d_{2}(T_{i},T_{key})+\lambda_{3}d_{3}(T_{i}-T_{key},T _{f})+\lambda_{4}c(T_{i})+\\ \lambda_{5}m(T_{i},P)+\lambda_{6}h(T_{i},P)\end{split} \tag{4}\]
Where \(T_{i}\) is the \(i\)-th chord of progression \(P\) and the \(\lambda\) are weights. For more information on the terms \(d_{1}\), \(d_{2}\), \(d_{3}\), \(c\), \(m\), \(h\), see [11].
#### 3.2.3 Self Similarity Fitness
In the field of music generation, structure is often discussed as an important feature. Almost all music contains repetitive pieces. We will discuss the influence of repetitive structure on music aesthetics. We measure it with self similarity fitness.
Inspired by the aesthetics of images and art [12], part of the aesthetic beauty of music comes from the symmetry in musical compositions. Therefore, in our study, we refer to Müller's fitness method [13] to measure the degree of repetition in a piece of music. The fitness formula is shown as follows:
\[Self\;Similarity\;Fitness=2\cdot\frac{\bar{\sigma}(\alpha)\cdot\bar{\gamma}( \alpha)}{\bar{\sigma}(\alpha)+\bar{\gamma}(\alpha)} \tag{5}\]
Both \(\bar{\sigma}(\alpha)\) and \(\bar{\gamma}(\alpha)\) are, respectively, the normalized score and normalized coverage of a segment \(\alpha\), as defined in Müller's method [13].
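Eq. (5) itself is just the harmonic mean of the two quantities; a minimal sketch, assuming the normalized score and coverage of each candidate segment have already been computed following [13]:

```python
def fitness(sigma_bar, gamma_bar):
    """Harmonic mean of normalized score and coverage, Eq. (5)."""
    if sigma_bar + gamma_bar == 0:
        return 0.0
    return 2 * sigma_bar * gamma_bar / (sigma_bar + gamma_bar)

# The Self Similarity Fitness of a piece is the maximum over candidate segments:
candidates = [(0.42, 0.35), (0.55, 0.47)]  # toy (score, coverage) pairs
ssf = max(fitness(s, g) for s, g in candidates)
```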
#### 3.2.4 Skewness
Skewness is a feature proposed in jSymbolic [14], which defines skewness for both the pitch and the rhythm of music. The notes in music cannot lack pitch and rhythm. Skewness describes how asymmetrical the pitch/rhythm distribution is to either the left or the right of the mean pitch/rhythm value.
These features are extracted with jSymbolic; the calculation formulas for pitch and rhythm skewness are not reproduced here. We combine pitch skewness and rhythm skewness linearly to obtain the skewness formula:
\[Skewness=\beta_{1}*PS+\beta_{2}*RS+\theta_{sk} \tag{6}\]
Where \(PS\) is pitch skewness, \(RS\) is rhythm skewness, the \(\beta\) are their weights, and \(\theta_{sk}\) is a constant.
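A sketch of Eq. (6) using scipy in place of jSymbolic (illustrative default weights; jSymbolic's exact feature definitions are given in [14]):

```python
from scipy.stats import skew

def skewness_feature(pitches, durations, beta=(0.5, 0.5), theta_sk=0.0):
    """Linear combination of pitch and rhythm skewness, Eq. (6).

    pitches: MIDI pitch numbers of all notes; durations: note lengths in beats.
    """
    return beta[0] * skew(pitches) + beta[1] * skew(durations) + theta_sk
```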
### Complexity Measures
Bense [15] first used Birkhoff's aesthetic measure formula to compute aesthetics. He adopted a statistical measure of the information in aesthetic objects, holding that the objective measure of an aesthetic object is related to its complexity. This idea builds on information theory, with entropy at its core. Our aesthetic measure method considers two features: Shannon entropy and Kolmogorov complexity.
#### 3.3.1 Shannon Entropy
Let \(\Omega\) be a finite set, and \(X\) be a random variable. The value \(x\) in \(\Omega\) has a distribution \(p(x)=Pr[X=x]\). The Shannon entropy \(H(X)\) of random variable \(X\) is defined as follows:
\[H(X)=-\sum_{x\in\Omega}p(x)\log p(x) \tag{7}\]
The Shannon entropy \(H(X)\) measures the average uncertainty of random variable \(X\), which is widely used to evaluate the degree of chaos in the internal state of a system. In order to calculate the entropy of music, it is necessary to obtain the music attribute histogram.
Pitch and rhythm are the two basic elements of music. In our method, we consider pitch entropy and rhythm histogram entropy, and take their linear combination as the measure of entropy. This is used to describe the uncertainty in music. Our entropy formula is as follows:
\[Entropy=\eta_{1}*PHE+\eta_{2}*RHE+\theta_{e} \tag{8}\]
Where \(PHE\) and \(RHE\) are the pitch and rhythm histogram entropies, the \(\eta\) are their weights, and \(\theta_{e}\) is a constant.
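A sketch of Eqs. (7) and (8); a base-2 logarithm is assumed here, since the paper does not state the base:

```python
import numpy as np

def histogram_entropy(counts):
    """Shannon entropy, Eq. (7), of a pitch or rhythm histogram."""
    p = np.asarray(counts, dtype=float)
    p = p[p > 0] / p.sum()
    return float(-(p * np.log2(p)).sum())

def entropy_feature(pitch_hist, rhythm_hist, eta=(0.5, 0.5), theta_e=0.0):
    """Linear combination of Eq. (8); eta and theta_e are placeholder weights."""
    return (eta[0] * histogram_entropy(pitch_hist)
            + eta[1] * histogram_entropy(rhythm_hist) + theta_e)
```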
#### 3.3.2 Kolmogorov Complexity
For a string \(s\), the Kolmogorov complexity \(K(s)\) is the length of the shortest program that outputs \(s\) on a computer. In essence, the Kolmogorov complexity of a string is the length of the final compressed version of the string. We then use the linear combination of entropy and Kolmogorov complexity as a measure of complexity.
Aesthetically speaking, redundancy makes people feel dull, resulting in negative emotions. According to the definition of Kolmogorov complexity, we also refer to the method of using Kolmogorov complexity in image aesthetics [16].
We believe that Kolmogorov complexity in music is also computable: it can be approximated by the lossless compression ratio of the music, which can be formalized as the following formula:
\[Kolmogorov\ Complexity=\frac{NH_{m}-K}{NH_{m}} \tag{9}\]
Where \(NH_{m}\) is the information content of a piece of music, and \(K\) is the size of the music information after lossless compression.
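The paper compresses rendered audio with Monkey's Audio; as a self-contained stand-in, the same ratio of Eq. (9) can be illustrated with zlib on raw bytes (a proxy for, not a reproduction of, the actual pipeline):

```python
import zlib

def k_complexity(raw_bytes: bytes) -> float:
    """Compression ratio of Eq. (9): (N*H_m - K) / (N*H_m)."""
    n = len(raw_bytes)                           # stands in for N * H_m
    k = len(zlib.compress(raw_bytes, level=9))   # compressed size K
    return (n - k) / n

print(k_complexity(b"ABAB" * 1000))  # highly redundant input -> ratio near 1
```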
## 4 Implementation
### Datasets
There are many datasets of pop music, such as POP909 [17] and Wikifonia1, but we do not use them, for the following reasons.
Footnote 1: [http://www.wikifonia.org/](http://www.wikifonia.org/)
All music data in the POP909 dataset are performance MIDI, so the same music score would be affected by different performers' interpretations. Although Wikifonia is a score dataset (single track), it cannot be used for our order measures.
In order to eliminate the performance differences between performers and objectively measure the aesthetic value of a piece of music, we finally downloaded the scores of 100 pop songs from the Musescore2 website. We then extract the chords of these scores to generate 100 counterpart scores. In detail, we take the chord progression of each Musescore score and its key signature (tonic) as input parameters, and feed them to Magenta's improv_rnn [6], which generates music according to the chord progression and key signature; this has the advantage of keeping the total duration of the two datasets approximately the same. The scores from Musescore, created by composers, serve as positive samples, while the scores generated by Magenta serve as negative samples.
Footnote 2: [https://musescore.org/](https://musescore.org/)
Since we use cross validation, we do not set a validation set. We split the dataset at a ratio of 7:3 for training and testing.
### Preprocessing & Computing Aesthetic Features
We use the music21 [18] toolkit to load music scores. Music21 provides note and chord attributes, which make it easy to obtain score information and perform calculations.
Firstly, we obtain the pitch histogram and rhythm histogram of the music score to calculate entropy. Then, we get all the intervals from the note events that occur at the same time, and build an interval histogram. From these, the pitch and rhythm histogram entropies and the interval harmony can be calculated.
Secondly, we get all the chords and the key signature in the score to calculate the chord progression harmony. The chord progression and the key signature are saved in JSON format and subsequently input to the pretrained model.
Thirdly, we use Musescore3 to batch-process composer scores and AI scores into score MIDI, to facilitate feature extraction with jSymbolic [14]. The conversion from MusicXML to score MIDI does not lose score information. In this way, we get the pitch and rhythm skewness.
Fourthly, we use Musescore3 to render the music scores into audio; having all scores played by Musescore3 ensures that the score information is rendered consistently and losslessly. We then use the WAV files to calculate the self-similarity fitness. Next, we refer to Monkey's Audio's3 lossless compression method to compress the music into APE format, so as to calculate the Kolmogorov complexity.
Fifthly, after calculating the values of the 8 basic music features (refer to Table 1), we normalize them. Next, we use logistic regression to determine the values of Harmony, Symmetry, Entropy, and Kolmogorov complexity.
Finally, we take the four normalized aesthetic features as inputs, use a sigmoid function to map the aesthetic measure to (0, 1) for classification, use cross-entropy as the loss function against the ground truth, and determine the parameters of the aesthetic model by gradient descent. We set the learning rate to 0.01. After 1000 iterations, the loss function converges.
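A minimal sketch of this final step, using scipy's optimizer in place of a hand-written gradient-descent loop (the feature matrix here is random stand-in data):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit  # the sigmoid function

rng = np.random.default_rng(0)
X = rng.random((140, 4))          # columns: H, S, E, K (normalized)
y = rng.integers(0, 2, size=140)  # 1 = composer score, 0 = AI score

def loss(params):
    w1, w2, w3, w4, t1, t2 = params
    m = (w1 * X[:, 0] + w2 * X[:, 1] + t1) / (w3 * X[:, 2] + w4 * X[:, 3] + t2)
    p = np.clip(expit(m), 1e-9, 1 - 1e-9)
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))  # cross-entropy

result = minimize(loss, x0=np.ones(6), method="Nelder-Mead")
# result.x holds the fitted weights/constants to plug back into Eq. (2).
```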
## 5 Experiments
### Turing Test
We need to verify whether people really think the music created by composers is more beautiful than the music generated by AI. So we did the Turing test to see if people can really distinguish between composer music and AI music.
We randomly sampled 10 pieces from the music created by the composer and the music generated by AI respectively, with each piece lasting about 15 seconds. Volunteers participating in the Turing test need to identify which music is created by the composer and which music is generated by AI in these 10 pairs of pieces. The volunteer also needs to choose the one in each pair that he thinks is more beautiful.
A total of 15 volunteers participate in the Turing test. Among 300 samples, their classification accuracy of music is 91.3% (274). Assuming that the music created by the composer is more aesthetic, 87% (261) of the samples are correctly classified. This proves that our assumption is correct.
### Results & Discussion
Since the music created by the composer has a higher aesthetic feeling than the music generated by AI, we let the machine learn the aesthetic score according to the label value. This is essentially a binary classification problem.
After the weights are obtained by gradient descent, we substitute them into the aesthetic model to inspect the distribution of its outputs. The distribution is shown in Fig 4.
We use precision and F1-Measure as our metrics to test the model. The precision of our model on the test set is 93.3%, and the F1-Measure is 90.9%. This demonstrates that our model is valid. Fig 3 shows an example score comparison.
Table 1 shows the calculation results of the 8 features without normalization, from which we can make the following analysis. For both IH and CPH, the composer scores are clearly better than the AI scores, in line with music theory. Considering symmetry, the SSF of the composer scores is clearly higher than that of the AI scores: the music created by AI is too random and often has no repetitive pieces, which shows that beauty is related to repetition. Although the PS of the AI scores is smaller than that of the composer scores, the difference is small; the RS of the composer scores is clearly smaller than that of the AI scores, because composers tend to create more regular rhythms and pitches than AI. When it comes to entropy, the values of PHE and RHE are clearly higher in the composer scores than in the AI scores. Although this appears contrary to Birkhoff's aesthetic measure, it is reasonable: composers tend to add some variation to pitch and rhythm, while music generated by AI does not change much around the tonic. As for K-complexity, the difference between the composer scores and the AI scores is not significant, but the table shows that the music created by AI is relatively low in compression ratio and complexity. In conclusion, a moderate order-complexity quotient can quantify aesthetic feeling to a certain extent.
### Ablation Study
We conduct ablation experiments to remove harmony, symmetry, entropy and K-complexity respectively to train four different models. We compare them with the original model.
\begin{table}
\begin{tabular}{c|c|c|c|c|c|c|c|c} \hline
**Dataset** & **IH** & **CPH** & **SSF** & **PS** & **RS** & **PHE** & **RHE** & **KC** \\ \hline
AI & 0.97 & 1.98 & 0.06 & **0.49** & 2.16 & **1.34** & **1.60** & **0.69** \\ \hline
Composer & **1.40** & **2.04** & **0.19** & 0.53 & **0.99** & 2.16 & 1.79 & 0.73 \\ \hline
\end{tabular}
\end{table}
Table 1: The similar CPH values show the rationality of the AI dataset generated based on chords. **A more detailed distribution comparison can be found in the supplementary material.** The results in bold have higher aesthetic scores.
Figure 4: The intersection of the distributions is quite small, showing that our model can distinguish between AI scores and composer scores very well.
Figure 3: As can be seen from the figure, the melody of the composer score is orderly and matches the chord progression very well. The melody of the AI score seems very random and low-quality, which is specifically reflected in the irregular appearance of rests and in the melody not following the tension of the chord progression. **Note: The full score of the normalized measure is 1.**
As shown in Fig 5, we observe the ROC curves of the original model and four models with one aesthetic feature removed respectively, and obtain their AUC values. The AUC value of our model is 0.93, which is obviously higher than that of the four models without aesthetic features.
If harmony is removed, the AUC value is only 0.77, which shows the importance of harmony and further proves that music theory plays a very important role in music aesthetics. If symmetry is removed, the AUC value of the model is 0.85, which indicates that symmetry may contribute slightly less to aesthetics than harmony. Entropy is clearly important as well, while K-complexity appears less so.
## 6 Conclusion
In summary, we propose a score aesthetic assessment model using Birkhoff's aesthetic measure to quantify a score's aesthetics. We also put forward four music aesthetic features, built on eight basic music features. We have made some contributions toward improving the quality of music scores, which might be helpful for music score quality assessment. However, our method still has shortcomings; for instance, we have not taken the relationship between creativity and musical aesthetics into consideration. The aesthetic study of music audio quality assessment is worth exploring in the future.
|
2304.07719 | Electroweak Multi-Higgs Production: A Smoking Gun for the Type-I
Two-Higgs-Doublet Model | Extending the Higgs sector of the Standard Model (SM) by just one additional
Higgs doublet field leads to the two-Higgs-doublet model (2HDM). In the Type-I
$Z_2$-symmetric limit of the 2HDM, all the five new physical Higgs states can
be fairly light, $\mathcal{O}(100)$\,GeV or less, without being in conflict
with current data from the direct Higgs boson searches and the $B$-physics
measurements. In this article, we establish that the new neutral as well as the
charged Higgs bosons in this model can all be simultaneously observable in the
multi-$b$ final state. The statistical significance of the signature for each
of these Higgs states, resulting from the electro-weak (EW) production of their
pairs, can exceed 5$\sigma$ at the 13\,TeV High-Luminosity Large Hadron
Collider (HL-LHC). Since the parameter space configurations where this is
achievable are precluded in the other, more extensively pursued, 2HDM Types, an
experimental validation of our findings would be a clear indication that the
true underlying Higgs sector in nature is the Type-I 2HDM. | Tanmoy Mondal, Stefano Moretti, Shoaib Munir, Prasenjit Sanyal | 2023-04-16T08:05:53Z | http://arxiv.org/abs/2304.07719v2 | # Electroweak multi-Higgs production: A smoking gun for the Type-I 2HDM
###### Abstract
Extending the Higgs sector of the Standard Model (SM) by just one additional Higgs doublet field leads to the 2-Higgs Doublet Model (2HDM). In the Type-I \(Z_{2}\)-symmetric limit of the 2HDM, all the five new physical Higgs states can be fairly light, \(\mathcal{O}(100)\,\mathrm{GeV}\) or less, without being in conflict with current data from the direct Higgs boson searches and the \(B\)-physics measurements. In this article, we establish that the new neutral as well as the charged Higgs bosons in this model can all be simultaneously observable in the multi-\(b\) final state. The statistical significance of the signature for each of these Higgs states, resulting from the electro-weak (EW) production of their pairs, can exceed \(5\sigma\) at the \(13\,\mathrm{TeV}\) High-Luminosity Large Hadron Collider (HL-LHC). Since the parameter space configurations where this is achievable are precluded in the other, more extensively pursued, 2HDM Types, an experimental validation of our findings would be a clear indication that the true underlying Higgs sector in nature is the Type-I 2HDM.
Higgs boson, Scalar, LHC, 2HDM
## I Introduction
The existence of additional Higgs bosons, besides the one discovered by the LHC [1; 2] (hereafter, denoted by \(H_{\mathrm{obs}}\)), is predicted by most (if not all) frameworks of new physics. Observation of a second Higgs boson will thus provide firm evidence that the underlying manifestation of the EW Symmetry Breaking (EWSB) mechanism is a non-minimal one.
From a theoretical point of view, given the fact that the \(H_{\mathrm{obs}}\) belongs to a complex doublet field in the SM, any additional Higgs field can be naturally expected to have the same \(SU(2)_{L}\) representation. Following this argument, even the minimal bottom-up approach of augmenting the SM with a second doublet Higgs field and assuming CP-invariance yields a total of five physical Higgs states after EWSB: two neutral scalars (\(h\) and \(H\), with \(m_{h}<m_{H}\)), one pseudoscalar (\(A\)), and a charged pair (\(H^{\pm}\)). If both the doublets \(\Phi_{1}\) and \(\Phi_{2}\) in this 2HDM couple to all the fermions of the SM, they would cause flavor-changing neutral currents (FCNCs) that contradict the experimental results. To prevent these FCNCs, a \(\mathbb{Z}_{2}\) symmetry can be imposed [3; 4], under which \(\Phi_{1}\to\Phi_{1}\), \(\Phi_{2}\to-\Phi_{2},u^{i}_{R}\to-u^{i}_{R}\), \(d^{i}_{R}\to-d^{i}_{R}\), \(e^{i}_{R}\to-e^{i}_{R}\), so that all the quarks and charged leptons (conventionally) couple only to the \(\Phi_{2}\), resulting in the so-called Type-I 2HDM (see [5; 6] for detailed reviews).
By now, many studies [7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24] have established that the additional Higgs states (when the \(H_{\mathrm{obs}}\) is identified with either the \(h\) or the \(H\) state) of the 2HDM can be individually accessed at the LHC. Therefore, several searches for singly-produced neutral and charged Higgs bosons have been carried out by the ATLAS and CMS collaborations (see, e.g., [25; 26; 27; 28; 29; 30; 31; 32]), but they remain elusive thus far. Even if a single state is eventually observed, the corresponding measurements that will ensue will, however, not enable one to ascertain which of the many possible extended realisations of the Higgs mechanism is at work. For an unequivocal extraction of the complete EWSB dynamics, it is imperative that all the various components of the scalar potential be accessed experimentally. This makes the study of multi-Higgs final states mandatory.
The majority of analyses, both phenomenological and experimental ones, involving an electrically neutral multi-Higgs final state, concentrate on QCD-induced production modes, namely, gluon-fusion and \(b\bar{b}\)-annihilation (where the \(b\)-quarks are themselves actually produced from a (double) gluon splitting). While such gluon-initiated (multi-)Higgs production is evidently highly dominant in the SM, it is not necessarily so in new physics models, owing to the non-standard couplings of their new Higgs bosons to the fermions and gauge bosons. In a previous analysis [33] it was shown that the inclusive cross sections for the \(q\bar{q}^{(\prime)}\)-induced production, where \(q\) represents predominantly a valence (\(u\) or \(d\)) quark, of neutral multi-Higgs final states can be larger than their QCD-induced production, over sizeable parameter space regions of the Type-I 2HDM with standard hierarchy (\(H_{\mathrm{obs}}\equiv h\)). The charged final states can of course only be produced via EW processes.
In this article, through a complete detector-level Monte Carlo (MC) analysis, we concretely establish that EW production can provide simultaneously visible signals of all the three additional Higgs bosons of the Type-I 2HDM at the LHC with \(3000\,\mathrm{fb}^{-1}\) integrated luminosity.
2306.11702 | Lingua Manga: A Generic Large Language Model Centric System for Data
Curation | Data curation is a wide-ranging area which contains many critical but
time-consuming data processing tasks. However, the diversity of such tasks
makes it challenging to develop a general-purpose data curation system. To
address this issue, we present Lingua Manga, a user-friendly and versatile
system that utilizes pre-trained large language models. Lingua Manga offers
automatic optimization for achieving high performance and label efficiency
while facilitating flexible and rapid development. Through three example
applications with distinct objectives and users of varying levels of technical
proficiency, we demonstrate that Lingua Manga can effectively assist both
skilled programmers and low-code or even no-code users in addressing data
curation challenges. | Zui Chen, Lei Cao, Sam Madden | 2023-06-20T17:30:02Z | http://arxiv.org/abs/2306.11702v2 | # Lingua Manga : A Generic Large Language Model Centric System for Data Curation
###### Abstract.
Data curation is a wide-ranging area which contains many critical but time-consuming data processing tasks. However, the diversity of such tasks makes it challenging to develop a general-purpose data curation system. To address this issue, we present Lingua Manga, a user-friendly and versatile system that utilizes pre-trained large language models. Lingua Manga offers automatic optimization for achieving high performance and label efficiency while facilitating flexible and rapid development. Through three example applications with distinct objectives and users of varying levels of technical proficiency, we demonstrate that Lingua Manga can effectively assist both skilled programmers and low-code or even no-code users in addressing data curation challenges.
Zui Chen, Lei Cao, and Sam Madden, Lingua Manga : A Generic Large Language Model Centric System for Data Curation. PVLDB, 16(12): 4074 - 4077, 2023.
doi:10.14778/3611540.3611624
## 1. Introduction
In the era of big data, organizations are collecting vast amounts of data from various sources. However, the data is often messy, incomplete, or contains errors. To effectively perform data analytics or any other applications, it is often necessary to engage in a data curation process [8; 15], which includes tasks like data discovery, integration, and cleaning. Nonetheless, devising a data curation solution for a particular scenario can be a cumbersome and time-consuming process. It entails extensive communication of requirements, a collaboration between domain experts and programmers, multiple rounds of debugging and testing, and active maintenance to accommodate new use cases. This underscores the need for a _generic_ system that enables users to efficiently and effectively address diverse data curation challenges and apply solutions to various applications.
However, building a system that addresses diverse data curation problems and applications is challenging. Data curation involves a range of distinct tasks, such as data discovery through table search, schema matching, entity resolution during data integration, and data imputation in data cleaning. Moreover, in real applications, various extra tasks are often involved in the data curation processes,
Figure 1. An illustration of the Lingua Manga System.
such as event extraction, name extraction, anomaly detection, data summarization, visualization, table semantic parsing, etc. Worst yet, applications in different domains have diverse requirements. Effectively resolving data curation problems often relies heavily on users' domain knowledge, making developing a _general-purpose_ solution improbable. For example, entity resolution over _books_ focuses on text understanding, while entity resolution over _images_ should rather pay special attention to image processing. As a result, existing data curation systems typically focus on only a few tasks on specific data formats. For example, Data Tamer [8] solves schema matching and entity resolution problems; ZenCrowd [4] targets entity linking; while CrowdDB [7] addresses fuzzy SQL semantics.
Recent advances in Large Language Models (LLMs) have shed light on this highly challenging yet critical data curation area. Pre-trained LLMs such as GPT-3 [10], Codex [6], ChatGPT [17], and LLaMA [5] have not only shown the ability to understand human needs and instructions but also to generate code like a programmer. Such ability could be leveraged to solve specific data curation problems like data imputation or entity resolution [2].
However, the current use of LLMs and LLM plugins is still far from the ultimate solution. Repeatedly calling LLM services can be costly and may lead to data privacy issues. For instance, consider a large e-commerce organization with thousands of documents and tables containing billions of rows - it would be neither affordable nor secure to let an LLM access the entire data lake to complete the tasks. LLMs also tend to be less effective without access to enterprise data, due to the gap between the enterprise data and the public data LLMs are trained on. Thus, there is an urgent need to harness the power of LLMs systematically, efficiently, and securely. Properly integrating LLMs and traditional data curation methods could be a promising path toward a general-purpose data curation system.
In this demonstration, we present Lingua Manga, an LLM-centric system that enables the swift development of effective yet efficient data curation solutions. It benefits programmers as well as low-code or even no-code users. Specifically, Lingua Manga shows the following key properties:
* **User-friendly**: Users can quickly develop a data curation solution by utilizing the templates, built-in modules, and, most importantly, by effectively interacting with LLMs through Lingua Manga as a middleware.
* **Flexible**: Users can effortlessly communicate with Lingua Manga through natural language (NL) to inject domain knowledge or instructions into the data curation solution such that it can meet the needs of the specific applications or domains.
* **Intelligent**: Lingua Manga automatically optimizes the solution to fix errors, improves performance, continuously learns from the data, etc.
* **Highly Performant**: Lingua Manga minimizes the frequency of calling the LLM service, which incurs heavy computation and monetary costs, thus resulting in an economical, efficient, and privacy-preserving solution.
* **Label Efficient**: With our system, users can develop a data curation solution with no or only a few labeled examples from the specific application while still achieving accuracy comparable to the SOTA ML-based methods trained with thousands of labels.
## 2. Preliminaries: Prompt-tuning and Code Generation in LLMs
Starting from GPT-3 (Girard et al., 2017), LLMs' size and computational cost have grown so much that users can no longer afford to fine-tune them for every downstream task. In the meantime, LLMs have also gained significant knowledge through training on large-scale corpora, leading to the emergence of a new paradigm known as prompt-tuning (Girard et al., 2018). Using prompt-tuning, users can generalize LLMs to new tasks and data domains without modifying the LLM itself. This is achieved by incorporating prompts, such as descriptions, tags, and learnable components, into the input data.
Moreover, cutting-edge LLMs like ChatGPT (Girard et al., 2018) have demonstrated their effectiveness in interacting with humans using natural-language-based prompts (Beng et al., 2017). These prompts are interpretable by humans. For instance, a straightforward prompt like "Please determine if the following entities are equivalent" can enable LLMs to carry out entity resolution. This capability is especially useful in Lingua Manga since it facilitates LLMs to interactively execute functions vaguely specified by users in natural language.
Furthermore, prompts like "Please write a code for entity resolution" trigger LLMs to generate custom software modules via code generation. GPT-3 (Girard et al., 2017), ChatGPT (Girard et al., 2018), Codex, and Copilot (Copilot, 2018) are the mainstream LLMs that support code generation. However, existing methods that use these models to generate code have limitations (Girard et al., 2018). They are typically inadequate for producing code with complex logic and building large-scale software. This calls for a new methodology to effectively utilize LLM Generated Code (LLMGC).
## 3. Lingua Manga
Lingua Manga comprises a Domain-Specific Language (DSL), a Compiler, an Optimizer, and a set of templates.
Lingua Manga is a **user-friendly** workflow system that enables users to quickly build data curation solutions by composing pipelines of logical operators. The system features a DSL to simplify the workflow-building process.
Lingua Manga is **automatic**. Like a relational database, it automatically compiles each logical operator into a physical, executable _module_.
Lingua Manga is **extensible**. Lingua Manga allows the programmers to implement their own physical modules, which can be seamlessly plugged into the system as long as they follow the standard interfaces of Lingua Manga.
Lingua Manga is **flexible**. To meet the specific need of different applications, the Lingua Manga optimizer interacts with users via LLMs to generate customized physical modules.
Lingua Manga is **intelligent**. The Lingua Manga optimizer uses LLMs to enhance the modules' capability.
In addition, to further facilitate users in building pipelines, Lingua Manga offers templates, essentially groups of pre-built pipelines. Rather than creating a pipeline from scratch, Lingua Manga allows users to start with a pre-defined, well-optimized pipeline that the target application can directly use.
The fundamental observation underlying this research is that while current LLMs may not yet be able to produce exceedingly intricate and large software systems, they are proficient at generating modular code on a smaller scale. Moreover, LLMs possess learning and logical reasoning ability that enable them to complete data curation tasks directly, indicating that LLMs could accelerate the process of developing a data curation solution by automating parts of it and, thus, reduce the workload of software developers.
Next, we discuss the modules and the Lingua Manga optimizer, which are the critical components of Lingua Manga.
### Modules in Lingua Manga
A module is a function \(f:X\to Y\) that takes \(X\) as input and returns \(Y\). Modules are usually viewed as black boxes, and their behavior is determined by their expected output. In Lingua Manga, modules can be classified into four types:
* **Custom Module**: As a basic module, a custom module is implemented with manually written code. It can be created by users with programming skills or provided by Lingua Manga as a default built-in module.
* **LLM Module**: An LLM itself can be a module. An LLM module tends to be more powerful than a traditional module because of the common sense knowledge and logical inference abilities possessed by an LLM. However, an LLM module requires a good task description as input; and LLM outputs typically need proper validation, as textual responses generated by the LLMs could be diverse and unstable. For instance, numerical validation that ensures the LLM's output is within a reasonable value range can identify and correct potential errors in LLM output.
* **LLMGC Module**: An LLM can dynamically generate code to implement an LLMGC module, replacing the role of programmers. Lingua Manga allows LLMGC to call other modules in the system of use external tools such as a calculator or another pre-trained LLM. Although an LLMGC module might be less expressive than custom modules written by skilled programmers, it enables no-code or low-code users to implement or customize a module effortlessly.
* **Decorated Module**: As the most advanced module in Lingua Manga, a decorated module can comprise multiple basic modules and be enhanced by the optimizer that leverages the common sense knowledge encoded in the LLMs and their learning ability.
### Lingua Manga Optimizer
The optimizer in Lingua Manga can improve the performance of physical modules within a pipeline by enhancing their efficiency and effectiveness, e.g., avoiding repeatedly calling the LLM service to minimize the execution costs and the potential data privacy leak. Unlike conventional database optimizers, the optimizer in Lingua Manga is modularized and can be selectively composed by users to serve specific applications best. The key modules in Lingua Manga include:
* **Validator**: It checks whether the target module behaves correctly on a few example test cases. It then uses the failed test cases to trigger the LLM to improve the target module and fix the errors. Specifically, the validator first calls an LLM to generate the suggestion by reading the code and the failure cases. Then, the code, failure cases, and the generated suggestion are sent to another LLM to generate a new version of the code. This validation cycle repeats until either all test cases are executed successfully, or a timeout ensues, leading to a re-generation of the LLMGC module until an additional timeout.
* **Simulator**: A simulator automatically generates a more efficient and equally effective alternative to a given module that already functions well. For example, a module that frequently calls LLM service can be expensive. Potentially, a supervised learning counterpart could replace it. This is because LLMs are versatile, while a data curation task typically only needs part of its functionality. It is thus possible to produce a module proficient in simulating the specific LLM functionality required by a task.
Because each module is treated as a black-box function, an ML-based simulator can replicate the target module through supervised learning. The target module will function as intended during initialization, and a control logic will decide when the simulated version should take over, such as after achieving the desired accuracy or reaching a certain level of confidence.
A simulator could improve accuracy as well. At first sight, this is counter-intuitive. Like the teacher-student model in machine learning, traditionally, the student model, which learns from a teacher model, is expected to be upper bounded by the teacher model on performance. However, studies have shown that self-training with filters (He et al., 2016; He et al., 2017; He et al., 2018; Wang et al., 2018) can produce a superior student model because of better generalization. Moreover, the simulator will continuously monitor the real data flow in the original pipeline. It can thus constantly learn to adapt to the data distribution, potentially outperforming the original static module.
* **Connector**: Concerning efficiency and data privacy, it is crucial for applications to reduce the amount of data exposed to LLMs, while still ensuring effectiveness. For instance, in an NL2SQL task, granting LLMs access to the entire table contents is neither secure nor cost-effective. However, relying solely on the table schema is not adequate to generate accurate predicates. To address this issue, a locally-running connector can be employed to manage the selective data upload to LLMs. A pre-defined connector for tabular data enables LLMs to execute SQL commands in local databases and obtain the resulting data while ensuring that the execution is limited to the queries specified by the user (see the allowlisting sketch after this list). In multi-modal data scenarios, appropriate pre-defined connectors are provided, such as connectors designed for handling extensive textual data.
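A minimal, runnable sketch of the validator's repair loop; `call_llm` is a stub standing in for whatever LLM service is configured, not a real API:

```python
def call_llm(prompt: str) -> str:
    # Stub: a real deployment would call the configured LLM service here.
    return "def f(x):\n    return x"

def run_module(code: str, inp):
    env = {}
    exec(code, env)          # the module is expected to define f(x)
    return env["f"](inp)

def validate_and_repair(code: str, tests, max_rounds: int = 3) -> str:
    """Run example test cases; on failure, ask one LLM call for a diagnosis
    and another for a fixed version, repeating until success or timeout."""
    for _ in range(max_rounds):
        failures = [(i, o) for i, o in tests if run_module(code, i) != o]
        if not failures:
            return code
        advice = call_llm(f"Code:\n{code}\nFailed cases: {failures}\nDiagnose.")
        code = call_llm(f"Fix the code.\nCode:\n{code}\nSuggestion: {advice}")
    return code  # timed out: the caller may regenerate the module from scratch

print(validate_and_repair("def f(x):\n    return x + 1", [(1, 1), (2, 2)]))
```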
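A sketch of the simulator's control logic, with scikit-learn learning a cheap stand-in for an expensive module (the warm-up length and model choice are illustrative):

```python
from sklearn.linear_model import LogisticRegression

class SimulatedModule:
    """Serve the expensive module first; hand over to a learned surrogate
    once enough (input, output) pairs have been observed."""

    def __init__(self, expensive_fn, warmup: int = 100):
        self.expensive_fn = expensive_fn  # e.g. a call to an LLM service
        self.warmup = warmup
        self.X, self.y = [], []
        self.model = None

    def __call__(self, features):
        if self.model is not None:
            return self.model.predict([features])[0]   # cheap, local path
        label = self.expensive_fn(features)
        self.X.append(list(features))
        self.y.append(label)
        if len(self.X) >= self.warmup and len(set(self.y)) > 1:
            self.model = LogisticRegression().fit(self.X, self.y)
        return label
```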
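And a sketch of a connector that allowlists the queries a local database will run, so only the resulting rows (never whole tables) can reach the LLM; the table and file names are hypothetical:

```python
import sqlite3

APPROVED_QUERIES = {
    "SELECT name FROM products WHERE manufacturer IS NULL LIMIT 10",
}

def connector(query: str, db_path: str = "local.db"):
    """Execute only user-approved queries and return just the result rows."""
    if query not in APPROVED_QUERIES:
        raise PermissionError("query not approved by the user")
    with sqlite3.connect(db_path) as conn:
        return conn.execute(query).fetchall()
```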
## 4. Demonstration
We use three applications to demonstrate Lingua Manga. Fig. 5 shows its user interfaces with name extraction as an example task.
### Entity Resolution: Effortless to the Novices
Consider the scenario where a technical novice wants to perform entity resolution on a dataset but lacks programming experience.
Lingua Manga effectively fulfills these requirements by providing a user-friendly solution. To begin, users can easily search for existing templates within the system. If none are available, they can create a simple pipeline that includes data loading, entity resolution, and data-saving operators, as depicted in Figure 2(a). It is worth noting that the user does not have to write any code. Instead, they can simply describe the task to the LLMs using the suggested prompt templates and provide optional input and output specifications through examples. The Lingua Manga optimizer will then improve the performance automatically.
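For instance, the user's task description can be as simple as the following prompt template (hypothetical wording), with a thin validation wrapper around the LLM's free-text reply:

```python
ER_PROMPT = """Please determine if the following two records refer to the
same real-world entity. Answer exactly "yes" or "no".

Record A: {a}
Record B: {b}
"""

def resolve(a: str, b: str, call_llm) -> bool:
    reply = call_llm(ER_PROMPT.format(a=a, b=b))
    return reply.strip().lower().startswith("yes")  # output validation
```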
We evaluated the F1 scores of the solution produced by Lingua Manga on three entity resolution datasets: BeerAdvo-RateBeer, Fodors-Zagats, and iTunes-Amazon (Beng et al., 2017). As shown in Table 1, it significantly outperforms the previous LLM-based method (Beng et al., 2017), while achieving performance comparable to supervised learning algorithms trained with hundreds or even thousands of labeled examples (He et al., 2017).
Footnote 1: The numbers of the baselines are obtained from (Beng et al., 2017).
### Name Extraction: Flexible for the Adepts
Consider a scenario where a domain expert, as a low-code user with basic programming skills, aims to get a high-accuracy and efficient name extraction solution, i.e., find all person names in a text passage. In this case, the domain expert understands that name extraction typically involves three steps: tokenization, noun-phrase extraction, and tagging. As a result, the domain expert can create a pipeline of three individual operators (ignoring data loading and saving for ease of presentation), as shown in Fig. 3.
The domain expert recognizes that tagging is a complex process and is often the performance bottleneck. Therefore, he or she chooses to utilize the LLM module within Lingua Manga along with an example-based validator. The other two operators are relatively simple and thus are realized as LLMGC modules - using LLM to generate the code. Suppose the resulting LLMGC modules, such as the noun-phrase extraction module, are not precise enough. The
\begin{table}
\begin{tabular}{l c c c c} \hline \hline Dataset & Magellan (Beng et al., 2017) & Ditto (He et al., 2017) & FMs (Beng et al., 2017) & Lingua Manga \\ \hline BeerAdvo-RateBeer & 78.8 & 94.37 & 78.6 & 89.66 \\ Fodors-Zagats & 100.0 & 100.00 & 87.2 & 95.65 \\ iTunes-Amazon & 91.2 & 97.06 & 65.9 & 92.00 \\ \hline \hline \end{tabular}
\end{table}
Table 1. Quantitative Experiment on Entity Resolution.
Figure 2. An illustration of two possible Entity Resolution workflows using Lingua Manga.
domain expert can further enhance them by providing external tool APIs, domain knowledge instructions, or code snippets to optimize the code generation process. Additionally, since repeatedly using LLM to tag each data record can be costly, the domain expert may use the simulator to create an ML-based alternative solution that simulates LLM tagging with significantly lower expenses.
To showcase the versatility of Lingua Manga, we experimented on a real-world name extraction dataset obtained from a startup company. This task is unique in that it has to handle multi-lingual data, which significantly degrades the accuracy of name extraction. Lingua Manga quickly resolves this issue by incorporating an LLM language detection module and providing multi-lingual tools to the LLMGC module. This shows that Lingua Manga can swiftly adapt to specific needs, being flexible and efficient.
### Data Imputation: Excellency with the Experts
Consider a scenario where an expert programmer needs to optimize a specific data imputation solution at all costs. This demo assumes the expert is dealing with the Buy dataset (Beng et al., 2017; Chen et al., 2021), comprises three attributes: products with names, descriptions, and manufacturers. The manufacturers' names are missing.
Because Lingua Manga effectively integrates LLMs, it is advantageous over traditional rule-based or learning-based approaches. For example, without having the relevant knowledge, it is almost impossible for the traditional methods to deduce that the product "PlayStation 2 Memory Card 8MB" is produced by "Sony". This is where the LLM module could help.
Experts can offer comprehensive guidelines or codes to create a better LLMGC module, as illustrated in Figure 4. These guidelines facilitate the validation procedure, enabling the LLMGC module to undergo iterative refinement. As a result, it can effectively use the LLM as an external tool to resolve complex cases while performing more efficiently than a pure LLM module on easy cases.
The solution produced by Lingua Manga reaches an accuracy of 94.48%, a remarkable improvement over the 16.2% accuracy of HoloClean (HoloClean, 2020). It is comparable to IMP (HoloClean, 2020), which trains a Transformer model on thousands of training examples and achieves an accuracy of 96.5%. Notably, our optimized version, which combines the LLM module with the LLMGC module, is cost-effective: it uses only one sixth of the LLM calls while achieving higher accuracy than the version that relies solely on the LLM module to call the LLM service repeatedly (accuracy 93.92%). Previous LLM-based research achieved an accuracy of 84.6% (Beng et al., 2017).
## 5. Conclusion
We introduced Lingua Manga, a user-friendly and versatile system designed to facilitate the development of data curation applications by leveraging the capabilities of LLMs. We demonstrated the efficiency, effectiveness, and user-friendliness of Lingua Manga through three data curation tasks. Nonetheless, Lingua Manga is currently at the prototype stage, which presents numerous opportunities for enhancement, including pipeline optimization, support for multi-modal applications, and robust mitigation strategies for LLM-induced hallucinations.
|
2305.07430 | Expertise-based Weighting for Regression Models with Noisy Labels | Regression methods assume that accurate labels are available for training.
However, in certain scenarios, obtaining accurate labels may not be feasible,
and relying on multiple specialists with differing opinions becomes necessary.
Existing approaches addressing noisy labels often impose restrictive
assumptions on the regression function. In contrast, this paper presents a
novel, more flexible approach. Our method consists of two steps: estimating
each labeler's expertise and combining their opinions using learned weights. We
then regress the weighted average against the input features to build the
prediction model. The proposed method is formally justified and empirically
demonstrated to outperform existing techniques on simulated and real data.
Furthermore, its flexibility enables the utilization of any machine learning
technique in both steps. In summary, this method offers a simple, fast, and
effective solution for training regression models with noisy labels derived
from diverse expert opinions. | Milene Regina dos Santos, Rafael Izbicki | 2023-05-12T12:52:51Z | http://arxiv.org/abs/2305.07430v1 | # Expertise-based Weighting for Regression Models with Noisy Labels
###### Abstract
Regression methods assume that accurate labels are available for training. However, in certain scenarios, obtaining accurate labels may not be feasible, and relying on multiple specialists with differing opinions becomes necessary. Existing approaches addressing noisy labels often impose restrictive assumptions on the regression function. In contrast, this paper presents a novel, more flexible approach. Our method consists of two steps: estimating each labeler's expertise and combining their opinions using learned weights. We then regress the weighted average against the input features to build the prediction model. The proposed method is formally justified and empirically demonstrated to outperform existing techniques on simulated and real data. Furthermore, its flexibility enables the utilization of any machine learning technique in both steps. In summary, this method offers a simple, fast, and effective solution for training regression models with noisy labels derived from diverse expert opinions.
Noisy Labels, Non-parametric methods, Supervised Learning
## 1 Introduction
Supervised learning regression methods aim to find a function \(g(\mathbf{x})\) that predicts a real label \(Y\in\Re\) based on the input covariates \(\mathbf{X}=(X_{1},X_{2},\ldots,X_{d})\). In order to do that, it is often assumed one has access to a labeled dataset, \((\mathbf{X}_{1},Y_{1}),\ldots,(\mathbf{X}_{n},Y_{n})\). However, in many situations, obtaining the real labels \(Y_{i}\)'s may be costly or time-consuming, or even impossible. In such scenarios, asking different experts to provide their opinions regarding the real label of each observation is a common practice. As experts only provide an educated guess of the true value of \(Y_{i}\), it is common to ask many experts to provide their estimates of this quantity. Cases like these include spam detection, diagnosis of patients based on images, and morphological classification of galaxies [4, 9, 13, 17, 19]. We denote such opinions by \(Y_{i,1},\ldots,Y_{i,J}\), where \(J\) is the number of experts. A key question is how to best use this information to train \(g\).
The standard approach to deal with such noisy labels is to use the average of the opinions, \(J^{-1}\sum_{j=1}^{J}Y_{i,j}\), as a proxy for the true value \(Y_{i}\), and then use standard supervised learning methods using this proxy as the label to be predicted [3, 6, 23]. It is known however that such an approach is suboptimal, especially in settings where the experts have different expertise (that is, some are more accurate than others) [25]. Thus, much work has been done on alternative ways to use such information.
Most approaches, however, only deal with classification problems, that is, problems where \(Y\) is qualitative [2, 8, 15, 21, 25, 26, 27]. In a regression context, existing approaches deal only with specific settings. For instance, [15] develops an iterative method based on the Expectation-Maximization algorithm assuming a linear relationship between the true label, \(Y\), and the features. [18] developed a method specific to topic modelling that also uses a parametric assumption. [5] and [23] propose methods based on Gaussian Processes. Thus, the literature lacks procedures that perform well under more general scenarios.
In this paper, we propose a simple, fast, and powerful method to train \(g\) in a regression context while taking into account the diversity between labellers. Our approach, named WEAR (Weighted Expertise-based Average Regression), has two steps: first, we estimate the expertise of each labeller by estimating their variability; then, we create a weighted average of \(Y_{i,1},\ldots,Y_{i,J}\) using the learned weights and regress this weighted average on \(\mathbf{x}\) to create our prediction model. Any machine learning method can be used in both steps, which makes the approach flexible.
Section 2 presents a formal justification of the method, as well as additional details. Section 3 presents empirical evidence on both simulated and real data that our approach has better performance than competing approaches. Finally, Section 4 concludes the paper.
## 2 Proposed Method
### Motivation
Our method begins by attempting to recover the true labels, drawing inspiration from the following theorem. The proof of this theorem can be found in the Appendix.
**Theorem 2.1**.: _Let \(Y_{j}\) be the opinion of the \(j\)-th expert, \(j=1,\ldots,J\), \(Y\) be the real label and \(\mathbf{x}\) be the vector of covariates. Assume that:_
* \(Y,Y_{1},\ldots,Y_{J}\) _are independent,_
* _For every_ \(j=1,\ldots,J\)_,_ \(\mathbb{E}[Y_{j}|\mathbf{X}]=\mathbb{E}[Y|\mathbf{X}],\) _that is, the experts are unbiased._
_Then, the solution for_
\[\arg\min_{w_{1},\ldots,w_{J}:\,\sum_{j}w_{j}=1}\mathbb{E}\left[\left(\sum_{j=1}^{J}w_{j}Y_{j}-Y\right)^{2}\,\middle|\,\mathbf{X}=\mathbf{x}\right], \tag{1}\]
_is to take_
\[w_{i}=\frac{\mathbb{V}^{-1}[Y_{i}\mid\mathbf{X}=\mathbf{x}]}{\sum_{j=1}^{J} \mathbb{V}^{-1}[Y_{j}\mid\mathbf{X}=\mathbf{x}]}.\]
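Although the full proof is deferred to the Appendix, the key computation is short. Under the theorem's assumptions (reading the independence conditionally on \(\mathbf{X}\)), the conditional bias of \(\sum_{j}w_{j}Y_{j}-Y\) vanishes, so the objective splits as
\[\mathbb{E}\left[\left(\sum_{j=1}^{J}w_{j}Y_{j}-Y\right)^{2}\,\middle|\,\mathbf{X}=\mathbf{x}\right]=\sum_{j=1}^{J}w_{j}^{2}\,\mathbb{V}[Y_{j}\mid\mathbf{x}]+\mathbb{V}[Y\mid\mathbf{x}],\]
and minimizing \(\sum_{j}w_{j}^{2}\,\mathbb{V}[Y_{j}\mid\mathbf{x}]\) subject to \(\sum_{j}w_{j}=1\) via a Lagrange multiplier forces \(2w_{i}\,\mathbb{V}[Y_{i}\mid\mathbf{x}]\) to be constant in \(i\), i.e., \(w_{i}\propto\mathbb{V}^{-1}[Y_{i}\mid\mathbf{x}]\).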
The theorem shows that the most effective way to linearly combine the experts' opinions so as to recover \(Y\) accurately is to weight them by the reciprocals of their variances. To estimate these variances, we make the simplifying assumption that \(\mathbb{V}[Y_{j}\mid\mathbf{X}=\mathbf{x}]\) remains constant across all values of \(\mathbf{x}\). The following theorem outlines a method for estimating the optimal weights under this assumption.
**Theorem 2.2**.: _Let \(r_{i}(\mathbf{x}):=\mathbb{E}[Y_{i}|\mathbf{x}]\) be the regression of the label given by the \(i\)-th expert on \(\mathbf{x}\). Then, under the assumptions of Theorem 2.1 and if \(\mathbb{V}[Y_{j}\mid\mathbf{X}=\mathbf{x}]\) is constant in \(\mathbf{x}\), a consistent estimator of the optimal weight \(w_{i}\) is_
\[\frac{R_{i}^{-1}}{\sum_{j=1}^{J}R_{j}^{-1}},\]
_where_
\[R_{i}:=\frac{1}{m}\sum_{k=1}^{m}\left(Y_{k,i}^{\prime}-r_{i}(\mathbf{x}_{k}^{ \prime})\right)^{2}\]
_is the mean squared error of regression \(r_{i}\) and \(\{(\mathbf{x}_{k}^{\prime},Y_{k,1}^{\prime},\ldots,Y_{k,J}^{\prime})\}_{k=1}^ {m}\) is a validation sample._
It also follows from this theorem that \(R_{i}^{-1}\) is a proxy for the expertise of the \(i\)-th expert.
In practice, \(r_{i}(\mathbf{x})\) is also unknown and therefore must be estimated. The next section details our full procedure.
### WEAR: Weighted Expertise-based Average Regression
Our method consists of the following steps (a code sketch follows the list):
1. Separate the data set into training, validation and test data sets.
2. Estimate \(r_{j}(\mathbf{x}):=\mathbb{E}[Y_{j}|\mathbf{x}]\), the regression of the label given by the \(j\)-th expert on \(\mathbf{x}\), using the training set. Let \(\widehat{r}_{j}\) be such estimate.
3. Use the validation data set to estimate \(R_{i}\) from Theorem 2.2: \[\widehat{R}_{j}:=\frac{1}{m}\sum_{k=1}^{m}\left(Y_{k,i}^{\prime}-\widehat{r}_ {j}(\mathbf{x}_{k}^{\prime})\right)^{2}\] and approximate the optimal weights of Theorem 2.1: \[\widehat{w}_{i}:=\frac{\widehat{R}_{i}^{-1}}{\sum_{j=1}^{J}\widehat{R}_{j}^{- 1}}.\]
4. Compute \(\overline{Y}_{k}^{w}\), the weighted mean of the experts on the \(k\)-th training sample point, using \[\overline{Y}_{k}^{w}=\sum_{j=1}^{J}\widehat{w}_{j}Y_{k,j}.\]
5. Estimate the true regression function, \(r(\mathbf{x}):=\mathbb{E}[Y|\mathbf{x}]\), by regressing \(\overline{Y}_{k}^{w}\) on \(\mathbf{x}_{k}\) using the training sample.
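The five steps are short enough to state in code. The following is a minimal sketch using scikit-learn-style regressors; the names (`wear_fit`, etc.) are ours for illustration and do not come from the paper's implementation, which was written in R (see Section 3.1).

```python
# Minimal sketch of WEAR with scikit-learn-style regressors (illustrative
# names; the paper's own implementation used R).
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def wear_fit(X_tr, Y_tr, X_val, Y_val, make_regressor=RandomForestRegressor):
    """X_*: (n, d) covariate arrays; Y_*: (n, J) noisy labels from J experts."""
    J = Y_tr.shape[1]
    # Step 2: estimate r_j by regressing each expert's labels on x.
    experts = [make_regressor().fit(X_tr, Y_tr[:, j]) for j in range(J)]
    # Step 3: validation MSE of each r_j, then normalized inverse weights.
    R = np.array([np.mean((Y_val[:, j] - experts[j].predict(X_val)) ** 2)
                  for j in range(J)])
    w = (1.0 / R) / (1.0 / R).sum()
    # Step 4: weighted mean of the experts' opinions on the training sample.
    Y_bar = Y_tr @ w
    # Step 5: regress the weighted mean on x to get the final model.
    return make_regressor().fit(X_tr, Y_bar), w
```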
WEAR offers great flexibility as we can utilize any machine learning algorithm in steps 2 and 5. For instance, if we expect only a few covariates to be associated
with \(Y\) and the \(Y_{j}\)'s, we can use lasso, random forests, or sparse additive models. These algorithms perform variable selection, and studies have shown that they work well in such scenarios [1, 14]. Alternatively, if we expect \(\mathbf{x}\) to lie on a submanifold of \(\Re^{d}\), k-nearest neighbors, support vector regression, or spectral methods might be a better option. These algorithms have shown success in situations where \(\mathbf{x}\) is expected to be on a submanifold of \(\Re^{d}\)[10, 11, 12, 20]. Similarly, if \(\mathbf{x}\) represents image data, our method can easily leverage convolutional networks to estimate the regression functions.
The following section demonstrates that this method, despite its simplicity, not only delivers excellent predictive performance but also achieves a high degree of accuracy in identifying the true expertise of each expert (\(w_{j}\)).
## 3 Experiments
### Simulated Data
In order to evaluate the performance of WEAR, we used simulated data with a known gold standard, which allowed us to compare the results of our proposed model with existing methods. In all settings, we used 10,000 observations to train the model and 5,000 as the validation set, as described in Section 2. An additional 85,000 sample points were used to compare the predictive performance of the investigated methods via the Mean Squared Error (MSE) with respect to the true label.
To estimate the expertise and fit the proposed model, we used several methods, including linear regression, regression trees, random forests, and the lasso. We used Raykar's algorithm [15] as a baseline. We also added regression methods that use the true labels (which would be unavailable in practice) as a gold standard. Concretely, the fitted models were:
* **Our methods:**
* LINEAR REGRESSION WITH WEIGHTED MEAN, linear regression via least squares method using the weighted mean from this work as the dependent variable,
* FOREST WITH WEIGHTED MEAN, random forest method using the weighted mean from this work as the dependent variable,
* TREE WITH WEIGHTED MEAN, regression tree method using the weighted mean from this work as the dependent variable
* LASSO WITH WEIGHTED MEAN, lasso regression method with the weighted mean from this work as the dependent variable.
* **Baselines that can be computed using real data:**
* RAYKAR ALGORITHM, the method shown in [15],
* LINEAR REGRESSION WITH ARITHMETIC MEAN, linear regression via least squares method using the arithmetic mean as the dependent variable,
* FOREST WITH ARITHMETIC MEAN, random forest method using the arithmetic mean as the dependent variable,
* TREE WITH ARITHMETIC MEAN, regression tree method using the arithmetic mean as the dependent variable,
* LASSO WITH ARITHMETIC MEAN, Lasso regression method using the arithmetic mean as the dependent variable,
* **Baselines only available on simulated data:**
* LINEAR REGRESSION WITH REAL \(Y\), linear regression via least squares method using the real label as the dependent variable,
* FOREST WITH REAL \(Y\), random forest method using the real label as the dependent variable,
* TREE WITH REAL \(Y\), regression tree method using the real label as the dependent variable,
* LASSO WITH REAL \(Y\), Lasso regression method using the real label as the dependent variable.
We used R software ([https://www.r-project.org/](https://www.r-project.org/)) for the analysis, along with specific packages for the Forest, Tree, and Lasso methods: randomForest [16], rpart [22], and glmnet [7], respectively. We selected the Lasso model hyperparameters using cross-validation and used the default values for the tree-based methods. To avoid any bias due to simulation randomness, we generated 100 different samples with the same characteristics. The final MSE presented in the tables is the mean MSE across those 100 samples.
We investigate four settings. In two of them (1 and 2), the true regression function \(r(\mathbf{x})\) is nonlinear in the covariates, and in two of them (3 and 4), it is linear. Moreover, in two of them (1 and 3), the experts have similar expertise, while in two of them (2 and 4), they do not: some are clearly better than others. More specifically, the settings we investigate are given by
* **Experiment 1**: \(Y=2x_{1}^{2}+1x_{2}+5x_{3}+0.5x_{4}+4x_{5}^{2}+3x_{6}^{2}+\epsilon\); \(\mathbb{V}[Y_{1}|\mathbf{x}]=4\), \(\mathbb{V}[Y_{2}|\mathbf{x}]=4.41\), \(\mathbb{V}[Y_{3}|\mathbf{x}]=4.84\), \(\mathbb{V}[Y_{4}|\mathbf{x}]=5.0625\)
* **Experiment 2**: \(Y=2x_{1}^{2}+1x_{2}+5x_{3}+0.5x_{4}+4x_{5}^{2}+3x_{6}^{2}+\epsilon\); \(\mathbb{V}[Y_{1}|\mathbf{x}]=4\), \(\mathbb{V}[Y_{2}|\mathbf{x}]=100\), \(\mathbb{V}[Y_{3}|\mathbf{x}]=2500\), \(\mathbb{V}[Y_{4}|\mathbf{x}]=10000\)
* **Experiment 3**: \(Y=2x_{1}+1x_{2}+5x_{3}+0.5x_{4}+4x_{5}+3x_{6}+\epsilon\); \(\mathbb{V}[Y_{1}|\mathbf{x}]=4\), \(\mathbb{V}[Y_{2}|\mathbf{x}]=4.41\), \(\mathbb{V}[Y_{3}|\mathbf{x}]=4.84\), \(\mathbb{V}[Y_{4}|\mathbf{x}]=5.0625\)
* **Experiment 4**: \(Y=2x_{1}+1x_{2}+5x_{3}+0.5x_{4}+4x_{5}+3x_{6}+\epsilon\); \(\mathbb{V}[Y_{1}|\mathbf{x}]=4\), \(\mathbb{V}[Y_{2}|\mathbf{x}]=100\), \(\mathbb{V}[Y_{3}|\mathbf{x}]=2500\), \(\mathbb{V}[Y_{4}|\mathbf{x}]=10000\)
We always choose \(\epsilon\sim N(0,3^{2})\) and each expert is generated according to \(Y_{i}|\mathbf{x},y\sim N(y,\mathbb{V}[Y_{i}|\mathbf{x}])\), all experts independent of each other.
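For concreteness, plugging the variances used in Experiments 2 and 4 into the weights of Theorem 2.1 gives
\[w=\frac{\left(\tfrac{1}{4},\,\tfrac{1}{100},\,\tfrac{1}{2500},\,\tfrac{1}{10000}\right)}{\tfrac{1}{4}+\tfrac{1}{100}+\tfrac{1}{2500}+\tfrac{1}{10000}}\approx(0.9597,\;0.0384,\;0.0015,\;0.0004),\]
so the optimal combination essentially discards the two noisiest experts, whereas the arithmetic mean assigns each expert weight \(0.25\).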
Table 1 presents the mean squared error (MSE) for each model and experiment, along with its standard error. The best-performing models are highlighted in bold. The results of our analysis are summarized as follows:
* Our proposed approach (WEAR) achieves results comparable to those of the gold standard (which is not available in real-world datasets) in all experimental settings.
* For the linear settings (experiments 3 and 4), our method performs similarly to [15], a model that was specifically designed for this type of problem.
* When the experts have comparable expertise (experiments 1 and 3), WEAR yields results that are comparable to those obtained using a simple arithmetic mean. This indicates that our approach does not sacrifice performance by adding more parameters.
* WEAR's flexibility enables it to outperform linear models in the nonlinear settings (experiments 1 and 2), where nonlinear models perform better. This is in contrast to [15], which assumes linear relationships.
Overall, these findings demonstrate the effectiveness of our proposed approach and its versatility in handling different experimental settings.
The precision in the estimation of each expert's variance is evaluated across different datasets and methods, and the results are summarized in Table 2. The table presents the average absolute deviation between each expert's estimated variance and its true value. In linear settings (experiments 3 and 4), [15] provides reliable variance estimates. However, in nonlinear settings (experiments 1 and 2), WEAR outperforms other methods and yields more accurate estimates of expert precision.
### Real Data
Next, we investigate the performance of our approach on three datasets:
* **Dataset 1**: Physio-chemical properties of the tertiary structure of proteins1 (45730 sample points and 10 variables)
Footnote 1: [https://archive.ics.uci.edu/ml/datasets/Physicochemical+Properties+of+Protein+Tertiary+Structure](https://archive.ics.uci.edu/ml/datasets/Physicochemical+Properties+of+Protein+Tertiary+Structure)
* **Dataset 2**: Tetouan energy consumption2 (48153 sample points and 9 variables)
Footnote 2: [https://archive.ics.uci.edu/ml/datasets/Power+consumption+of+Tetouan+city](https://archive.ics.uci.edu/ml/datasets/Power+consumption+of+Tetouan+city)
* **Dataset 3**: Prediction of credit card default3 (52416 sample points and 9 variables)
Footnote 3: [https://archive.ics.uci.edu/ml/datasets/default+of+credit+card+clients](https://archive.ics.uci.edu/ml/datasets/default+of+credit+card+clients)
We simulated experts using the true labels in the same way as in Section 3.1. The variances of the experts were taken to be (1, 4, 25, 225) for dataset 1, (1, 9, 64, 40000) for dataset 2, and (1, 9, 64, 40000) for dataset 3. Each dataset was divided into three portions: 70% for training, 10% for validation, and 20% for testing.
The results presented in Table 3 demonstrate that WEAR yields superior performance compared to other methods that do not use \(Y\), except for the third dataset, where it matches [15]. Notably, our approach achieves predictive performance that is equivalent to using real labels, which are typically unavailable in practical settings.
\begin{table}
\begin{tabular}{l|l|c|c|c|c} \hline
**Framework** & **Model** & **Experiment 1** & **Experiment 2** & **Experiment 3** & **Experiment 4** \\ \hline \multirow{3}{*}{\begin{tabular}{l} WEAR \\ (Our approach) \\ \end{tabular} } & Linear Regression & 271.18 (1.9) & 268.83 (2.2) & **9.00 (0.0)** & **9.01 (0.0)** \\ & Random Forest & **73.45 (2.2)** & **70.00 (2.3)** & 10.07 (0.0) & 10.20 (0.0) \\ & Regression Tree & 79.12 (1.3) & 77.02 (1.0) & 17.28 (0.0) & 17.58 (0.0) \\ & Lasso & 271.19 (1.9) & 268.83 (2.2) & **9.00 (0.0)** & **9.01 (0.0)** \\ \hline \multirow{3}{*}{\begin{tabular}{l} Arithmetic Mean \\ \end{tabular} } & – & 271.18 (1.9) & 268.83 (2.2) & **9.00 (0.0)** & **9.02 (0.0)** \\ \cline{1-1} \cline{2-6} & Linear Regression & 271.18 (1.9) & 269.39 (2.2) & **9.00 (0.0)** & 9.55 (0.0) \\ \cline{1-1} \cline{2-6} & Random Forest & **73.90 (1.3)** & 96.43 (2.2) & 10.07 (0.0) & 39.43 (0.0) \\ \cline{1-1} & Regression Tree & 78.92 (1.3) & 98.86 (3.0) & 17.28 (0.0) & 27.10 (0.0) \\ \cline{1-1} & Lasso & 271.19 (1.9) & 269.39 (2.2) & **9.00 (0.0)** & **9.00 (0.0)** \\ \hline \multirow{3}{*}{
\begin{tabular}{l} Real \(Y\) \\ (Gold standard) \\ \end{tabular} } & Linear Regression & 271.17 (1.9) & 268.83 (2.2) & **9.00 (0.0)** & **9.01 (0.0)** \\ \cline{1-1} \cline{2-6} & Random Forest & **73.15 (2.1)** & **69.51 (2.4)** & 10.03 (0.0) & 10.03 (0.0) \\ \cline{1-1} & Regression Tree & 78.06 (1.3) & 76.65 (0.9) & 17.25 (0.0) & 17.25 (0.0) \\ \cline{1-1} & Lasso & 271.19 (1.9) & 268.84 (2.2) & **9.00 (0.0)** & **9.01 (0.0)** \\ \hline \end{tabular}
\end{table}
Table 1: Estimated Mean Squared Error (MSE) results and corresponding standard errors (in parentheses) for the simulated experiments. Our approach demonstrates comparable performance to methods relying on the gold standard label \(Y\), which is typically unavailable in practical scenarios.
\begin{table}
\begin{tabular}{l|l|c|c|c|c} \hline
**Framework** & **Model** & **Weight Experiment 1** & **Weight Experiment 2** & **Weight Experiment 3** & **Weight Experiment 4** \\ \hline \multirow{3}{*}{\begin{tabular}{l} WEAR \\ (Our approach) \\ \end{tabular} } & Linear Regression & 251.35 & 262.25 & 0.027 & 4.38 \\ & Random Forest & **43.07** & 170.93 & 1.24 & 112.75 \\ \cline{1-1} & Regression Tree & 60.65 & **126.80** & 8.53 & 20.70 \\ \cline{1-1} & Lasso & 251.30 & 261.95 & 0.02 & **4.30** \\ \hline \multirow{3}{*}{
\begin{tabular}{l} Raykar Algorithm \\ \end{tabular} } & – & 256.40 & 257.77 & **0.01** & 6.02 \\ \hline \end{tabular}
\end{table}
Table 2: Average absolute deviation between each expert’s estimated variance and its true value on the simulated datasets. WEAR provides a more accurate estimate of the experts’ precision in nonlinear settings.
In contrast, using the arithmetic mean to approximate \(Y\) consistently falls short in terms of predictive performance. Table 4 confirms this by showing that our approach gives better estimates of the variances of each expert in all settings.
## 4 Final remarks
In conclusion, we have proposed a simple, fast, and powerful method for training regression models while taking into account the diversity between labellers. Our approach, named WEAR (Weighted Expertise-based Average Regression), has two steps: estimating the expertise of each labeller by estimating their variability, and creating a weighted average of the experts' opinions using the learned weights. Finally, we regress the weighted average on \(\mathbf{x}\) to create our prediction model. We have shown through empirical evidence on both simulated and real data that our approach outperforms competing approaches.
While our method has demonstrated promising results, there are still areas for improvement. For example, our current approach assumes that the experts are unbiased, which may not always be the case in practice. Future work could explore ways to incorporate expert bias into the weighting scheme. Similarly, WEAR currently assumes that the variances are constant in \(\mathbf{x}\), but this may be relaxed.
Overall, we believe that our approach provides a more flexible and effective solution to the challenge of obtaining accurate labels for regression models with noisy data.
\begin{table}
\begin{tabular}{l|l|c|c|c} \hline
**Framework** & **Model** & **Weight Data 1** & **Weight Data 2** & **Weight Data 3** \\ \hline & Linear Regression & 24.68 & 40598542.25 & 146.14 \\ WEAR & Random Forest & **16.49** & **25318003** & 264.77 \\ (Our approach) & Regression Tree & 28.80 & 40915427.5 & **61.05** \\ & Lasso & 24.66 & 40599886.5 & 65.02 \\ \hline Raykar Algorithm & – & 26.17 & 40175529 & 318.57 \\ \hline \end{tabular}
\end{table}
Table 4: Average absolute deviation between each expert’s estimated variance and its true value on the real datasets. Our approach provides a more accurate estimate of the experts’ precision.
\begin{table}
\begin{tabular}{l|l|c|c|c} \hline
**Framework** & **Model** & **Dataset 1** & **Dataset 2** & **Dataset 3** \\ \hline & Linear Regression & 26.72 (0.34) & 40688854 (471556.9) & **62.99 (1.3)** \\ WEAR & Random Forest & **12.57 (0.25)** & **26096718 (418018.5)** & **62.67 (1.3)** \\ (Our approach) & Regression Tree & 29.56 (0.39) & 40923181 (495135.2) & **63.63 (1.4)** \\ & Lasso & 26.71 (0.34) & 40692185 (471816.2) & **62.97 (1.3)** \\ \hline Raykar Algorithm & – & 26.72 (0.34) & 40688854 (471556.9) & **62.99 (1.3)** \\ \hline & Linear Regression & 26.72 (0.34) & 1096639436 (4661793.9) & 72.03 (1.8) \\ Arithmetic Mean & Random Forest & 13.81 (0.25) & 1096638824 (4661764.3) & 143.99 (3.1) \\ & Regression Tree & 30.45 (0.39) & 1096639439 (4661795.0) & 87.39 (1.6) \\ & Lasso & 26.72 (0.34) & 1096639439 (4661794.0) & 65.62 (1.4) \\ \hline & Linear Regression & 26.72 (0.34) & 40688975 (471544.0) & **62.98 (1.3)** \\ Real \(Y\) & Random Forest & **12.48 (0.25)** & **25984668 (418557.2)** & **62.84 (1.3)** \\ (Gold standard) & Regression Tree & 29.20 (0.39) & 40922941 (495146.1) & **63.63 (1.4)** \\ & Lasso & 26.72 (0.34) & 40692151 (471745.3) & **62.94 (1.3)** \\ \hline \end{tabular}
\end{table}
Table 3: Estimated Mean Squared Error (MSE) results and corresponding standard errors (in parentheses) for real datasets. Our approach demonstrates comparable performance to methods relying on the gold standard label Y, which is typically unavailable in practical scenarios.
## Acknowledgement
This study was financed in part by the Coordenacao de Aperfeicoamento de Pessoal de Nivel Superior - Brasil (CAPES) - Finance Code 001.
Rafael Izbicki is grateful for the financial support of FAPESP (grant 2019/11321-9) and CNPq (grants 309607/2020-5 and 422705/2021-7).
|
2308.08927 | On canonical bundle formula for fibrations of curves with arithmetic
genus one | In this paper, we develop canonical bundle formulas for fibrations of
relative dimension one in characteristic $p>0$. For such a fibration from a log
pair $f\colon (X, \Delta) \to S$, if $f$ is separable, we can obtain a formula
similar to the one due to Witaszek \cite{Wit21}; if $f$ is inseparable, we
treat the case when $S$ is of maximal Albanese dimension. As an application, we
prove that for a klt pair $(X,\Delta)$ with $-(K_X+\Delta)$ nef, if the
Albanese morphism $a_X\colon X \to A$ is of relative dimension one, then $X$ is
a fiber space over $A$. | Jingshan Chen, Chongning Wang, Lei Zhang | 2023-08-17T11:38:05Z | http://arxiv.org/abs/2308.08927v1 | # On canonical bundle formula for fibrations of curves with arithmetic genus one
###### Abstract.
In this paper, we develop canonical bundle formulas for fibrations of relative dimension one in characteristic \(p>0\). For such a fibration from a log pair \(f\colon(X,\Delta)\to S\), if \(f\) is separable, we can obtain a formula similar to the one due to Witaszek [21]; if \(f\) is inseparable, we treat the case when \(S\) is of maximal Albanese dimension. As an application, we prove that for a klt pair \((X,\Delta)\) with \(-(K_{X}+\Delta)\) nef, if the Albanese morphism \(a_{X}\colon X\to A\) is of relative dimension one, then \(X\) is a fiber space over \(A\).
## 1. Introduction
The canonical bundle formula over the field \(\mathbb{C}\) of complex numbers is developed to study fibrations whose general fibers have numerically trivial (log-)canonical divisors. Roughly speaking, for a fibration \(f\colon X\to S\) of projective varieties with certain mild singularities such that \(K_{X}\sim_{\mathbb{Q}}f^{*}D\) for some divisor \(D\) on \(S\), the canonical bundle formula predicts that \(D\sim_{\mathbb{Q}}K_{S}+\Delta_{S}\), where \(\Delta_{S}\) contains the information of singular fibers and moduli of general fibers, and \((S,\Delta_{S})\) is expected to have mild singularities ([12, 13, 14]). The canonical bundle formula plays an important role in birational geometry, for example, in the proof of subadjunction and effectivity of pluri-canonical systems ([15, 16]). We refer the interested reader to [10] for a nice survey. Remark that over \(\mathbb{C}\), to derive the information about \(\Delta_{S}\), the key ingredients are results from moduli theory and Hodge theory (variations of Hodge structure).
In characteristic \(p>0\), there are only a few results on canonical bundle formulas. For elliptic fibrations, by taking advantage of the moduli theory of elliptic curves, Chen-Zhang [11] proved that \(\Delta_{S}\) is \(\mathbb{Q}\)-linearly equivalent to an effective divisor (denoted by \(\Delta_{S}\succeq_{\mathbb{Q}}0\) for short). Cascini-Tanaka-Xu [17, Section 6.2] investigated fibrations of log canonical pairs \(f\colon(X,\Delta)\to S\) of relative dimension one. Assuming that the geometric generic fiber \((X_{\overline{K(S)}}\cong\mathbb{P}^{1}_{\overline{K(S)}},\Delta_{\overline{K(S)}})\) has log canonical singularities, they proved results similar to those in characteristic zero by use of the moduli theory of stable rational curves. However, the geometric generic fiber can be quite singular. For example, in characteristic \(p<5\), there exist quasi-elliptic fibrations ([1, Chapter 7]), whose general fibers are singular rational curves with arithmetic genus one. For a quasi-elliptic fibration \(f\colon X\to S\), the relative canonical divisor \(K_{X/S}\) is not necessarily \(\mathbb{Q}\)-linearly equivalent to an effective \(\mathbb{Q}\)-divisor. To treat fibrations fibered by wildly singular varieties (log pairs), Witaszek [21] obtained the following result.
_Denote by \(a_{S}\colon S\to A\) the Albanese morphism of \(S\) and \(D:=\overline{D^{\circ}}\) the closure divisor of \(D^{\circ}\) in \(S\). Then we have_
1. _If_ \(a_{S}\) _is separable, then_ \(D-\frac{1}{2p}K_{S}\succeq_{\mathbb{Q}}0\) _(i.e.,_ \(\mathbb{Q}\)_-linearly equivalent to an effective divisor). In particular,_ \(\kappa(S,D)\geq\kappa(S)\)_._
2. _If_ \(a_{S}\) _is inseparable then_ \(\kappa(S,D)\geq 0\)_, and if moreover_ \((X_{K(S)},\Delta_{K(S)})\) _is klt and_ \(a_{S}\) _is finite then_ \(\kappa(S,D)\geq 1\)_._
_Remark 1.4_.: When \(a_{S}\) is inseparable and \(\kappa(S,D)=0\), we can derive additional information, see Theorem 7.3.
Varieties with a nef anti-canonical divisor are of special interest and expected to have good structures. For example, over \(\mathbb{C}\), for a projective klt pair \((X,\Delta)\), if \(-(K_{X}+\Delta)\) is nef then the Albanese morphism \(a_{X}\colon X\to A\) is a fibration which has certain isotrivial structure ([1, 1, 1]). In characteristic \(p>0\), under the condition that the geometric generic fiber has certain mild singularities, similar results hold (see [11, 12]). Applying the canonical bundle formula established above, we can prove similar results for klt trivial fibrations with wildly singular fibers.
**Theorem 1.5** (see Theorem 8.1).: _Assume that singular varieties admit resolutions of singularities. Let \((X,\Delta)\) be a projective normal \(\mathbb{Q}\)-factorial klt pair. Assume that \(-(K_{X}+\Delta)\) is nef. If the Albanese morphism \(a_{X}\colon X\to A\) is of relative dimension one over the image \(a_{X}(X)\), then \(a_{X}\colon X\to A\) is a fibration._
_Remark 1.6_.: In the above setting, if moreover, \(a_{X}\) is inseparable, then by Proposition 3.3, we may write that
\[\pi^{*}(K_{X}+\Delta)\sim K_{X_{1}}+\mathfrak{M}+\Delta_{1}\]
where \(\pi\colon X_{1}:=(X\times_{A}A^{\frac{1}{p}})_{\operatorname{red}}^{\nu}\to X\) is the induced morphism and \(\mathfrak{M}\) is the movable part of horizontal divisors. Applying Proposition 5.2, \(\mathfrak{M}\) induces a fibration \(X_{1}\to\mathbb{P}^{1}\) which results in another fibration \(g\colon X\to\mathbb{P}^{1}\)
such that a general fiber \(G_{t}\) of \(g\) is an abelian variety which is dominant over \(A\). This is an analog of the isotrivial structure in characteristic zero; indeed, getting a "horizontal" (over \(A\)) fibration structure is a vital step to prove \(X\to A\) is isotrivial.
In a recent paper, for the case of arbitrary relative dimension, under certain additional conditions, Ejiri and Patakfalvi ([1]) prove that \(a_{X}\colon X\to A\) is surjective, and if \(f\colon X\to S\) is the fibration from the Stein factorization of \(a_{X}\), then \(S\to A\) is purely inseparable. In our situation, we can show that \(S=A\) by taking advantage of the fibration structure \(g\colon X\to\mathbb{P}^{1}\) (Lemma 8.2).
**Conventions.**
* For a scheme \(Z\), we use \(Z_{\mathrm{red}}\) to denote the scheme with the reduced structure of \(Z\). By a _variety_ over a field \(k\), we mean an integral quasi-projective scheme over \(k\). For a variety \(X\), we use \(K(X)\) to denote the function field of \(X\), and for a morphism \(f\colon X\to S\) of varieties, we use \(X_{K(S)}\) to denote the generic fiber of \(f\). By a _fibration_, we mean a projective morphism \(f\colon X\to S\) of normal varieties such that \(f_{*}\mathcal{O}_{X}=\mathcal{O}_{S}\), which implies that \(K(S)\) is algebraically closed in \(K(X)\).
* Let \(f\colon X\to S\) be a fibration. A divisor \(D\) on \(X\) is called _\(f\)-exceptional_ (resp. _vertical, horizontal_) if \(f(\operatorname{Supp}D)\) is of codimension \(\geq 2\) (resp. of codimension \(\geq 1\), dominant) on \(S\).
* For a morphism \(\sigma\colon Z\to X\) of varieties, if \(D\) is a divisor on \(X\) such that the pullback \(\sigma^{*}D\) is well defined, we often use \(D|_{Z}\) to denote \(\sigma^{*}D\) for simplicity.
* Let \(f\colon X\to Y\) be a morphism. In the following two situations, either (1) \(D\) is a \(\mathbb{Q}\)-Cartier divisor on \(Y\), or (2) both \(X\) and \(Y\) are normal, \(D\) is a \(\mathbb{Q}\)-divisor and \(f\) is equidimensional, then the pullback \(f^{*}D\) makes sense.
* We denote by \(\equiv\), \(\sim\) and \(\sim_{\mathbb{Q}}\) the numerical, linear and \(\mathbb{Q}\)-linear equivalence of divisors, respectively.
* Let \(k\) be a field of characteristic \(p>0\) and \(X\) be a variety over \(k\). We denote by \(F_{X}\colon X^{\frac{1}{p}}\to X\) the absolute Frobenius morphism.
* Let \(X\) be a normal variety and denote by \(i\colon X^{\circ}\hookrightarrow X\) the inclusion of the regular locus of \(X\). For a Weil divisor \(D\) on \(X\), \(\mathcal{O}_{X}(D)\) is a subsheaf of the constant sheaf \(K(X)\) of rational functions, with the stalk at a point \(x\) being defined by \[\mathcal{O}_{X}(D)_{x}:=\left\{f\in K(X)\;\middle|\;\begin{matrix}(\operatorname {div}(f)+D)|_{U}\geq 0\text{ for some}\\ \text{open set }U\text{ containing }x\end{matrix}\right\}.\] We may identify \(\mathcal{O}_{X}(D)=i_{*}\mathcal{O}_{X^{\circ}}(D|_{X^{\circ}})\).
* For two \(\mathbb{Q}\)-divisors \(D,D^{\prime}\) on a normal variety \(X\), by \(D\geq D^{\prime}\) we mean that \(D-D^{\prime}\) is an effective divisor; and by \(D\succeq D^{\prime}\) (respectively \(D\succeq_{\mathbb{Q}}D^{\prime}\)) we mean that \(D-D^{\prime}\) is linearly (respectively \(\mathbb{Q}\)-linearly) equivalent to an effective divisor.
* If a coherent sheaf \(\mathcal{F}\) on \(X\) is reflexive (of rank \(r\)), then we denote \(\det\mathcal{F}=(\bigwedge^{r}\mathcal{F})^{\vee\vee}\). When \(\mathcal{F}\) is just locally free in codimension one (e.g., torsion free), we define \(\det\mathcal{F}=i_{*}\det(\mathcal{F}|_{U})\), where \(i\colon U\hookrightarrow X\) is a _big open subset_ (meaning its complement in \(X\) has codimension at least \(2\)) on which \(\mathcal{F}\) is locally free.
_Acknowledgments._ This research is partially supported by the National Key R and D Program of China (No. 2020YFA0713100), NSFC (No. 12122116) and CAS Project for Young Scientists in Basic Research, Grant No. YSBR-032.
## 2. Preliminaries
In this section, we collect some basic results about divisors and linear systems that will be used in the sequel. We work over an algebraically closed field \(k\).
**Lemma 2.1** ([22, Lemma 4.2]).: _Let \(X\) be a normal projective variety and \(U\subseteq X\) a big open subset. Let \(\mathcal{E}\) be a coherent sheaf such that \(\mathcal{E}|_{U}\) is locally free. Assume that \(\mathcal{E}\) is generically globally generated and \(h^{0}(U,\mathcal{E}|_{U})>\operatorname{rank}\mathcal{E}\). Then \(h^{0}(X,\det\mathcal{E})>1\)._
**Lemma 2.2**.: _Let \(\sigma\colon Y\to X\) be a proper dominant morphism of normal varieties, generically finite of degree \(d\). Let \(D,D^{\prime}\) be \(\mathbb{Q}\)-Cartier \(\mathbb{Q}\)-divisors on \(X,Y\) respectively. Assume that there exists a divisor \(N\) on \(Y\) exceptional over \(X\) such that \(\sigma^{*}D\sim_{\mathbb{Q}}D^{\prime}+N\). Then \(\sigma_{*}D^{\prime}\sim_{\mathbb{Q}}dD\)._
Proof.: See [10, Theorem 1.4].
**Covering Theorem 2.3** ([10, Theorem 10.5]).: _Let \(f\colon Y\to X\) be a proper surjective morphism between normal complete varieties. If \(D\) is a Cartier divisor on \(X\) and \(E\) an effective \(f\)-exceptional divisor on \(Y\), then_
\[\kappa(Y,f^{*}D+E)=\kappa(X,D).\]
_Remark_.: If furthermore, \(f\) is equidimensional, so that pulling back Weil \(\mathbb{Q}\)-divisors makes sense, then the equality \(\kappa(Y,f^{*}D+E)=\kappa(X,D)\) still holds for Weil \(\mathbb{Q}\)-divisors \(D\). Indeed, by the proof of [10, Theorem 10.5], we have \(\kappa(f^{-1}(X^{\operatorname{reg}}),f^{*}(D|_{X^{\operatorname{reg}}})+E)= \kappa(X^{\operatorname{reg}},D|_{X^{\operatorname{reg}}})\), where \(X^{\operatorname{reg}}\) denotes the regular locus of \(X\). Here, \(\kappa(X^{\operatorname{reg}},D|_{X^{\operatorname{reg}}})\) etc., are well defined since \(h^{0}(X,mD)=h^{0}(X^{\operatorname{reg}},mD|_{X^{\operatorname{reg}}})\) for any positive integer \(m\).
**Lemma 2.4**.: _If a linear system \(\mathfrak{M}\) (without fixed components) on a normal proper variety \(X\) has a reduced and connected member \(M_{0}\), then every \(M\in\mathfrak{M}\) is connected._
Proof.: To prove the connectedness of \(M\), we consider the pencil \(\Phi\colon X\dasharrow\mathbb{P}^{1}\) induced by \(M\) and \(M_{0}\). Let \(\Gamma\subset X\times\mathbb{P}^{1}\) be the closure of the graph of \(\Phi\) with projection \(\widetilde{\Phi}\colon\Gamma\to\mathbb{P}^{1}\). Denote by \(L_{0}\subset\Gamma\) the strict transform of \(M_{0}\). Then \(\widetilde{\Phi}\) has a fiber \(L_{0}+E\) with \(E\) exceptional over \(X\). Since the fiber \(L_{0}+E\) is connected and non-multiple, by Stein factorization, each fiber of \(\widetilde{\Phi}\) is connected. Therefore, \(M\) is connected.
Although the following two lemmas are well known to experts, we include a quick proof for the convenience of the reader.
**Lemma 2.5**.: _Let \(X\) be a normal projective variety. If \(D\) is a nef and big Cartier divisor on \(X\), then for any \(\epsilon>0\), there exists an effective \(\mathbb{Q}\)-Cartier \(\mathbb{Q}\)-divisor \(D_{\epsilon}\) with coefficients \(<\epsilon\) such that \(D_{\epsilon}\sim_{\mathbb{Q}}D\)._
Proof.: Since \(D\) is nef and big, there exists an effective \(\mathbb{Q}\)-Cartier divisor \(N\) such that \(D-N\) is ample. Thus, \(D+\frac{1}{k}(D-N)\) is ample for any integer \(k>0\). In other words, the \(\mathbb{Q}\)-divisor \(A_{k}:=D-\frac{1}{k}N\) is ample for \(k\geq 2\). As \(D=A_{k}+\frac{1}{k}N\), the coefficients of \(\frac{1}{k}N\) are \(<\epsilon\) for \(k\) large, and replacing \(A_{k}\) by \(\frac{1}{m}\) times a general (hence reduced) member of \(|mA_{k}|\) for sufficiently large and divisible \(m\) yields the desired divisor \(D_{\epsilon}\).
**Lemma 2.6**.: _Let \(f\colon X\to S\) be a fibration of normal projective varieties and \(L\) a nef \(\mathbb{Q}\)-Cartier \(\mathbb{Q}\)-divisor on \(X\). Assume that \(L|_{X_{K(S)}}\sim_{\mathbb{Q}}0\), where \(X_{K(S)}\) is the generic fiber. Then there exists a big open subset \(S^{\circ}\subset S^{\operatorname{reg}}\) and a pseudo-effective divisor \(D\) on \(S\) such that \(L|_{f^{-1}(S^{\circ})}\sim_{\mathbb{Q}}f^{*}D|_{S^{\circ}}\)._
Proof.: Applying the flattening trick (cf. [21, Theorem 2.3]), there exists a commutative diagram
where \(h_{1}\) is a projective birational morphism, \(f_{1}\) is flat, \(X_{1}\) is the closure of the generic fiber \(X_{K(S)}\) of \(f\) in \(X\times_{S}S_{1}\), \(h_{2}\) is the normalization morphism and \(X_{2}\) is the normalization of \(X_{1}\times_{S_{1}}S_{2}\). Let \(h=h_{1}\circ h_{2}\) and \(h^{\prime}=h^{\prime}_{1}\circ h^{\prime}_{2}\). Now \(f_{2}\) is equidimensional, therefore by [21, Lemma 2.18], there exists a \(\mathbb{Q}\)-divisor \(D_{2}\) on \(S_{2}\) such that \(h^{\prime*}L\sim_{\mathbb{Q}}f_{2}^{*}D_{2}\). By [19, Example 1.4.4], \(D_{2}\) is nef, and it follows that \(D:=h_{*}D_{2}\) is pseudo-effective. Take \(S^{\circ}\subseteq S^{\rm reg}\) to be a big open subset over which \(h_{1}\) is an isomorphism. Then \(L|_{f^{-1}(S^{\circ})}\sim_{\mathbb{Q}}f^{*}(D|_{S^{\circ}})\).
Recall the adjunction formula as follows.
**Lemma 2.7** ([14, 5.3]).: _Let \(T\subset X\) be a reduced Weil divisor on a normal variety \(X\). Let \(\nu\colon T^{\nu}\to T\) be the normalization morphism. Suppose that \(K_{X}+T\) is \(\mathbb{Q}\)-Cartier, then_
\[(K_{X}+T)|_{T^{\nu}}\sim_{\mathbb{Q}}K_{T^{\nu}}+C+D, \tag{1}\]
_where \(C\) is the conductor divisor and \(D\) is some effective \(\mathbb{Q}\)-divisor with support mapped into the singular locus of \(X\)._
Applying the adjunction formula we obtain the following result, which will be used frequently to study the behavior, under purely inseparable morphisms, of the restriction of the (log-)canonical divisor on certain divisors.
**Lemma 2.8**.: _Let \(X\) be a normal \(\mathbb{Q}\)-factorial quasi-projective variety, \(\Delta\) an effective \(\mathbb{Q}\)-divisor on \(X\) and \(T\) a prime divisor on \(X\). Let \(\Delta=aT+\Delta^{\prime}\) such that \(T\not\subset\operatorname{Supp}\Delta^{\prime}\). Assume that \(0\leq a\leq 1\). Then there exists an effective divisor \(B_{T^{\nu}}\) on the normalization \(T^{\nu}\) of \(T\) such that_
\[(1-a)T|_{T^{\nu}}\sim_{\mathbb{Q}}K_{T^{\nu}}+B_{T^{\nu}}-(K_{X}+\Delta)|_{T^{ \nu}}. \tag{2}\]
Proof.: Applying the adjunction formula of Lemma 2.7, we may write that
\[\big{(}(K_{X}+\Delta)+(1-a)T\big{)}|_{T^{\nu}}\sim_{\mathbb{Q}}(K_{X}+T+ \Delta^{\prime})|_{T^{\nu}}\sim_{\mathbb{Q}}K_{T^{\nu}}+\Delta_{T^{\nu}}+ \Delta^{\prime}|_{T^{\nu}},\]
where \(\Delta_{T^{\nu}}\geq 0\). We then deduce (2) by setting \(B_{T^{\nu}}=\Delta_{T^{\nu}}+\Delta^{\prime}|_{T^{\nu}}\).
Admitting the existence of resolution of singularities, we may adapt a result of characterization of abelian varieties from [17, Theorem 0.2] to the normal case.
**Proposition 2.9**.: _Let \(X\) be a normal \(\mathbb{Q}\)-Gorenstein (i.e., \(K_{X}\) being \(\mathbb{Q}\)-Cartier) projective variety with a generically finite morphism \(f\colon X\to A\) to an abelian variety. Assume that \(X\) admits a resolution of singularities \(\rho\colon Y\to X\). Then_
1. \(\kappa(X,K_{X})\geq 0\)_, and the equality is attained if and only if_ \(X\) _is birational to an abelian variety;_
2. _if moreover_ \(K_{X}\equiv 0\)_, then_ \(X\) _is isomorphic to an abelian variety._
Proof.: Note that the composition morphism \(Y\to X\to A\) factors as \(Y\xrightarrow{a_{Y}}A_{Y}\xrightarrow{\pi}A\) where \(a_{Y}\colon Y\to A_{Y}\) is the Albanese morphism of \(Y\). By [10, Theorem 4.1] we have \(\kappa(Y,K_{Y})\geq 0\). We may write that
\[K_{Y}\sim_{\mathbb{Q}}\rho^{*}K_{X}+\sum_{i}a_{i}E_{i}\]
where \(E_{i}\) are exceptional over \(X\). By \(\rho_{*}K_{Y}=K_{X}\), we obtain that \(\kappa(X,K_{X})\geq\kappa(Y,K_{Y})\geq 0\).
We claim that if \(\kappa(Y,K_{Y})=0\), then \(Y\to A_{Y}\) factors through a morphism \(X\to A_{Y}\). Indeed, under this assumption \(a_{Y}\colon Y\to A_{Y}\) is a birational morphism by [12, Theorem 0.2]. Since the composition morphism \(Y\to X\to A\) is generically finite, \(\pi\colon A_{Y}\to A\) is a finite morphism. In turn, we see that the composition \(Y\to A_{Y}\to A\) is nothing but the Stein factorization of \(Y\to X\to A\). If \(X\to Z\to A\) denotes the Stein factorization of \(X\to A\), then there is a birational finite morphism \(A_{Y}\to Z\), hence \(A_{Y}=Z\), which shows the claim.
Assertion (1) follows from this claim immediately.
For assertion (2), assume \(K_{X}\equiv 0\). Then by \(\kappa(Y,K_{Y})\geq 0\), we conclude that \(a_{i}\geq 0\) and \(\kappa(Y,K_{Y})=0\). Since we have a birational morphism \(X\to A_{Y}\) and \(A_{Y}\) is regular, we conclude that \(X\to A_{Y}\) is an isomorphism from the condition \(K_{X}\equiv 0\).
## 3. The behavior of the canonical divisor under quotients by foliations and purely inseparable base changes
In this section, we investigate finite purely inseparable morphisms arising from base changes and compare the canonical divisors under these base changes. We borrow the notions and constructions from [11]. As our setting mildly differs from that of [11, Section 3.1], to avoid ambiguity, we sketch the construction of the divisors involved and include some statements in a reasonable order.
Throughout this section, we work over a perfect field \(k\) of characteristic \(p>0\).
### Foliations and purely inseparable morphisms
Let \(Y\) be a normal variety over \(k\). By a _foliation_ on \(Y\) we mean a saturated subsheaf \(\mathcal{F}\subseteq\mathcal{T}_{Y}\) of the tangent bundle that is \(p\)-closed and involutive. Then \(\operatorname{Ann}\mathcal{F}\subseteq\mathcal{O}_{Y}\) is a subring containing \(\mathcal{O}_{Y}^{p}\). We have a one-to-one correspondence:
\[\left\{\begin{aligned} &\text{foliations}\\ &\mathcal{F}\subseteq\mathcal{T}_{Y}\end{aligned}\right\} \leftrightarrow\left\{\begin{aligned} &\text{finite purely inseparable morphisms $\pi\colon Y\to X$}\\ &\text{over $k$ of height one with $X$ normal}\end{aligned}\right\},\]
which locally is given by
\[\mathcal{F}\ \mapsto\ \pi\colon Y\to\operatorname{Spec}(\operatorname{Ann} \mathcal{F})\ \ \text{and}\ \ \pi\colon Y\to X\ \mapsto\mathcal{F}_{Y/X}:=\Omega_{X\to Y}^{\perp}\]
where \(\Omega_{X\to Y}:=\operatorname{im}(\pi^{*}\Omega_{X}^{1}\to\Omega_{Y}^{1})\) and \(\Omega_{X\to Y}^{\perp}\) is the sheaf of tangent vectors in \(\mathcal{T}_{Y}\) annihilated by \(\Omega_{X\to Y}\). As a side note, \((\Omega_{Y/X}^{1})^{\vee}\cong\mathcal{F}_{Y/X}\), and under the above correspondence \(\operatorname{rank}\mathcal{F}_{Y/X}=\log_{p}\deg\pi\). Let \(y\in Y\) be a smooth point and \(x:=\pi(y)\). A foliation \(\mathcal{F}\subseteq\mathcal{T}_{Y}\) is called _smooth_ at \(y\) if around \(y\) the subsheaf \(\mathcal{F}\) is a subbundle, namely both \(\mathcal{F}\) and \(\mathcal{T}_{Y}/\mathcal{F}\) are locally free. It is known that \(\mathcal{F}\) is smooth at \(y\Leftrightarrow Y/\mathcal{F}\) is smooth at \(x\Rightarrow\Omega_{Y/X}^{1}\) is locally free at \(y\) ([12, Page 142]).
Recall the following well-known result (cf. [4, Proposition 2.10]).
**Proposition 3.1**.: _Let \(\pi\colon Y\to X\) be a finite purely inseparable morphism of height one between normal varieties. Then_
\[\pi^{*}K_{X}\sim K_{Y}-(p-1)\det\mathcal{F}_{Y/X}\sim K_{Y}+(p-1)\det\Omega^{1}_ {Y/X}. \tag{3}\]
_Remark 3.2_.: To verify the linear equivalence of two Weil divisors on a normal variety, it suffices to do this in codimension one. So to treat \(\det\mathcal{F}_{Y/X}\), we may assume that \(\mathcal{F}_{Y/X}\) is smooth by working on a big open subset.
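For orientation, consider the extreme case of the absolute Frobenius \(F_{X}\colon X^{\frac{1}{p}}\to X\) of a smooth variety \(X\): since Frobenius kills differentials, \(\Omega_{X\to X^{\frac{1}{p}}}=0\) and the associated foliation is the full tangent sheaf, so (3) specializes (using \(\det\mathcal{T}_{X^{\frac{1}{p}}}=-K_{X^{\frac{1}{p}}}\)) to
\[\mathcal{F}_{X^{\frac{1}{p}}/X}=\mathcal{T}_{X^{\frac{1}{p}}},\qquad F_{X}^{*}K_{X}\sim K_{X^{\frac{1}{p}}}-(p-1)\det\mathcal{T}_{X^{\frac{1}{p}}}=pK_{X^{\frac{1}{p}}},\]
in agreement with the familiar fact that pulling back a divisor class along the Frobenius multiplies it by \(p\).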
### Foliations associated with morphisms arising from base changes
An important kind of foliation is associated with morphisms arising from purely inseparable base changes. Let us briefly recall a relation of the canonical divisors built in [11, Section 3 and Section 4] and [4, Section 3].
#### 3.2.1.
Let \(X,S,T\) be normal varieties over \(k\), \(f\colon X\to S\) a dominant morphism, and \(\tau\colon T\to S\) a finite purely inseparable morphism of height one. Let \(Y\) be the normalization of the reduction of \(X_{T}\). Consider the following commutative diagram
Then there exists a natural morphism of sheaves
\[\delta:g^{*}\Omega^{1}_{T/S}\stackrel{{\cong}}{{\to}}\Omega^{1}_ {X_{T}/X}\otimes\mathcal{O}_{Y}\twoheadrightarrow\Omega^{1}_{(X_{T})_{\text{ red}}/X}\otimes\mathcal{O}_{Y}\to\Omega^{1}_{Y/X}. \tag{4}\]
Note that the first arrow above is induced by the natural isomorphism \(f_{T}^{*}\Omega^{1}_{T/S}\cong\Omega^{1}_{X_{T}/X}\). We can see that the composition \(\delta\) is surjective except over the preimage of the non-normal locus of \((X_{T})_{\text{red}}\) (see [11, Lemma 4.5]).
We may replace \(S\) with a big open subset to assume _either that \(g\) is flat or that \(\Omega^{1}_{T/S}\) is locally free_, then the natural morphism \(g^{*}\mathcal{F}_{T/S}\to(g^{*}\Omega^{1}_{T/S})^{\vee}\) is an isomorphism. We see that the dual of \(\delta\)
\[\gamma\colon\mathcal{F}_{Y/X}\to g^{*}\mathcal{F}_{T/S}\]
is injective since \(\mathcal{F}_{Y/X}\) is torsion free. Let \(\widetilde{\mathcal{F}}_{Y/X}\) be the saturation of \(\mathcal{F}_{Y/X}\) in \(g^{*}\mathcal{F}_{T/S}\). We can conclude that
1. the inclusion \(\mathcal{F}_{Y/X}\subseteq\widetilde{\mathcal{F}}_{Y/X}\) induces an effective divisor \(E\) on \(Y\) such that \(\det\mathcal{F}_{Y/X}=\det\widetilde{\mathcal{F}}_{Y/X}(-E)\), and \(\operatorname{Supp}E\) is contained in the union of the preimages of the codimension one non-normal locus of \((X_{T})_{\text{red}}\) and the non-flat locus of \(f\colon X\to S\);
2. \(X_{T}\) is reduced if and only if \(\operatorname{rank}\mathcal{F}_{Y/X}=\operatorname{rank}g^{*}\mathcal{F}_{T/S}\).
#### 3.2.2. The movable part and fixed part
The following is a slight generalization of [13, Theorem 1.1], which follows from almost the same argument. For the convenience of the reader, we sketch the proof.
**Proposition 3.3**.: _Let \(f\colon X\to S\) be a fibration and we use the notation above. Let \(\Gamma\subseteq H^{0}(T,\Omega^{1}_{T/S})\) be a finite-dimensional \(k\)-vector subspace. Assume that there is an open subset \(U\subseteq T\) such that \(\Omega^{1}_{U/S}\) is locally free and globally generated by \(\Gamma\). Set \(r=\operatorname{rank}\Omega^{1}_{Y/X}\) and \(\Gamma_{Y}=\operatorname{Im}(\bigwedge^{r}\Gamma\to H^{0}(Y,\det\Omega^{1}_{Y/X}))\). Let \(\mathfrak{M}+F\subseteq|\det\Omega^{1}_{Y/X}|\) be the sub-linear system determined by \(\Gamma_{Y}\) with the fixed part \(F\) and the movable part \(\mathfrak{M}\). Then_
1. \(\nu(F)|_{(X_{U})_{\operatorname{red}}}\) _is supported on the union of the codimension one non-normal locus of_ \((X_{U})_{\operatorname{red}}\) _and the exceptional part over_ \(U\)_;_
2. _the_ \(g\)_-horizontal part_ \(M_{h}\) _of_ \(M\in\mathfrak{M}\) _is zero if and only if_ \(X_{K(T)}\) _is reduced._
Proof.: Consider the following composition morphism
\[\gamma:\bigwedge^{r}\Gamma\otimes_{k}\mathcal{O}_{Y_{U}}\to g^{*}\bigwedge^{r }\Omega^{1}_{U/S}\to\left(\bigwedge^{r}\Omega^{1}_{Y/X}\right)^{\vee\vee} \Big{|}_{Y_{U}}=\det\Omega^{1}_{Y/X}\Big{|}_{Y_{U}}.\]
Note that \(\operatorname{Supp}F\) corresponds to the codimension one locus over which the above morphism is not surjective. More precisely, \(\operatorname{Im}(\gamma)=\det\Omega^{1}_{Y_{U}/U}(-F)\) holds in codimension one. Combining this with the result (ii) in SS3.2.1, we conclude assertion (1).
Let us prove assertion (2). If \(X_{K(T)}\) is reduced, which is equivalent to that \(\operatorname{rank}\Omega^{1}_{U/S}=\operatorname{rank}\Omega^{1}_{Y/X}=r\), then \(\operatorname{Im}(\gamma)_{K(T)}=K(T)(\alpha_{1}\wedge\cdots\wedge\alpha_{r})\) for some \(\alpha_{1},\dots,\alpha_{r}\in\Gamma\) generating \(\Omega^{1}_{U/S}\) over the generic point of \(U\), thus \(M_{h}=0\). For the converse direction, assume that \(X_{K(T)}\) is non-reduced. Then \(m=\operatorname{rank}\Omega^{1}_{U/S}>\operatorname{rank}\Omega^{1}_{Y/X}\). Remark that the argument of [13, Section 5] shows precisely the following statement
* if \(e_{1},\dots,e_{m}\) are local basis of \(\Omega^{1}_{U/S}\), then the sections like \(\gamma(e_{i_{1}}\wedge\cdots\wedge e_{i_{r}})\) produce a nontrivial horizontal movable part of \(|\det\Omega^{1}_{Y_{U}/X}|\).
We conclude the proof after noticing that the sections like \(\alpha_{1}\wedge\cdots\wedge\alpha_{r}\) for \(\alpha_{1},\dots,\alpha_{r}\) in \(\Gamma\) generate \(\bigwedge^{r}\Omega^{1}_{U/S}\).
### The behavior of the relative canonical divisors under base changes
The following result can be proved by use of duality theory similarly to [10, Theorem 2.4]*. But here we give a proof by use of the language of foliation.
Footnote *: The proof of [10, Theorem 2.4] contains a minor mistake, the statement does not necessarily hold if the base is not smooth, which is mended in [21, Proposition 2.4].
**Proposition 3.4**.: _Let \(f\colon X\to S\) be a fibration of normal varieties. Let \(T\) be a normal variety and \(\tau\colon T\to S\) a finite, purely inseparable morphism of height one. Assume that \(X_{K(T)}\) is integral. Consider the commutative diagram_
_Then there exists an effective divisor \(E\) and a \(g\)-exceptional divisor \(N\) on \(Y\) such that_
\[\pi^{*}K_{X/S}\sim K_{Y/T}+(p-1)E+N. \tag{5}\]
Proof.: To prove the assertion, we may restrict ourselves on the regular locus of \(X,Y\). So we may assume that \(X,Y\) are both regular. Let \(S^{\circ}\) be a big regular open subset such that \(T^{\circ}:=\tau^{-1}S^{\circ}\) is regular. Let \(X^{\circ}=X_{S^{\circ}}\) and \(Y^{\circ}=Y_{T^{\circ}}\). Since \(X_{K(T)}\) is assumed to be integral, the natural morphism \(\delta\colon g^{*}\Omega^{1}_{T^{\circ}/S^{\circ}}\to\Omega^{1}_{Y^{\circ}/X^{ \circ}}\) is injective and has the same rank, hence induces an injective morphism \(\det(\delta)\colon g^{*}\det\Omega^{1}_{T^{\circ}/S^{\circ}}\to\det\Omega^{1}_ {Y^{\circ}/X^{\circ}}\). Therefore we may identify \(g^{*}\det\Omega^{1}_{T^{\circ}/S^{\circ}}\cong\det\Omega^{1}_{Y^{\circ}/X^{ \circ}}(-E_{0})\) for some effective divisor \(E_{0}\) on \(Y^{\circ}\). Applying Proposition 3.1, we have
\[\pi^{*}K_{X^{\circ}}\sim K_{Y^{\circ}}+(p-1)\det\Omega^{1}_{Y^{\circ}/X^{\circ}}\sim K_{Y^{\circ}}+(p-1)g^{*}\det\Omega^{1}_{T^{\circ}/S^{\circ}}+(p-1)E_{0}\sim K_{Y^{\circ}}+g^{*}(\tau^{*}K_{S^{\circ}}-K_{T^{\circ}})+(p-1)E_{0},\]
which gives
\[\pi^{*}K_{X^{\circ}/S^{\circ}}\sim K_{Y^{\circ}/T^{\circ}}+(p-1)E_{0}.\]
Let \(E\) be the closure of \(E_{0}\) on \(Y\). We may extend the above relation to the whole variety \(Y\) up to some \(g\)-exceptional divisor \(N\), which is the relation (5) as desired.
## 4. Curves of arithmetic genus one
This section focuses on regular but non-smooth projective curves of arithmetic genus one defined over an imperfect field. We will look closely at the behavior of the non-smooth locus under height-one base changes.
### Auxiliary results
First, we may deduce the following result from [10, Subsection 3.2.2, Corollary 2.14 and Proposition 2.15].
**Proposition 4.1**.: _Let \(f\colon X\to S\) be a fibration of normal varieties over a field \(k\) of characteristic \(p>0\). Then \((X_{\overline{K(S)}})_{\mathrm{red}}\) is integral, and the fibration \(f\) is separable if and only if the geometric generic fiber \(X_{\overline{K(S)}}\) is reduced._
We now consider a curve \(X\) over a field \(K\). To be precise, by a _curve_ over \(K\) we mean a purely one-dimensional quasi-projective scheme over \(K\). Let \(D=\sum_{i}a_{i}\mathfrak{p}_{i}\) be a Cartier divisor on \(X\). The _degree_ of \(D\) is defined to be the integer
\[\deg_{K}D=\sum_{i}a_{i}[\kappa(\mathfrak{p}_{i}):K].\]
where \(\kappa(\mathfrak{p}_{i})\) denotes the residue field of \(\mathfrak{p}_{i}\).
We will need the following classification of curves of arithmetic genus zero.
**Proposition 4.2** ([10, Theorem 9.10]).: _Let \(X\) be a normal projective integral \(K\)-curve with \(H^{0}(X,\mathcal{O}_{X})=K\) and \(H^{1}(X,\mathcal{O}_{X})=0\). Then the following statements hold._
1. \(\deg_{K}K_{X}=-2\)_._
2. \(X\) _is isomorphic to a conic in_ \(\mathbb{P}^{2}_{K}\) _and_ \(X\cong\mathbb{P}^{1}_{K}\) _if and only if it has a_ \(K\)_-rational point._
3. _Either_ \(X\) _is a smooth conic or_ \(X\) _is geometrically non-reduced. In the latter case, we have_ \(\operatorname{char}K=2\)_, and_ \(X\) _is isomorphic to the curve defined by a quadric_ \(sx^{2}+ty^{2}+z^{2}=0\) _for some_ \(s,t\in K\setminus K^{2}\)_._
### Notation and assumptions
From now on to the end of this section, we work over a field \(K\) of characteristic \(p>0\) such that \([K:K^{p}]<\infty\). Denote by \(\overline{K}\) the algebraic closure of \(K\). Let \(X\) be a regular projective curve over \(K\) with \(H^{0}(X,\mathcal{O}_{X})=K\). Assume that \(X\) has arithmetic genus one, i.e., \(h^{1}(X,\mathcal{O}_{X})=1\). Assume further that \(X\) is not smooth over \(K\). Then there exists an intermediate field \(L\) with \(K\subset L\subseteq K^{1/p}\) such that \(X_{L}:=X\otimes_{K}L\) is integral but not regular ([12, Proposition 1.5]). We fix such an \(L\). Note that if \(L/K\) is inseparable of degree \(p\), then \(X_{L}\) is always integral. If \(Y\) denotes the normalization of \(X_{L}\) and \(\pi\colon Y\to X\) the induced morphism, then \(K^{\prime}:=H^{0}(Y,\mathcal{O}_{Y})\subseteq K^{1/p}\). Recall that
\[\pi^{*}K_{X}\sim K_{Y}+(p-1)C, \tag{6}\]
where \(C>0\) is supported on the inverse image of the non-normal point of \(X_{L}\) and \((p-1)C\) coincides with the usual conductor divisor ([13, Theorem 1.2]).
**Proposition 4.3**.: _With the setting above,_
1. _the characteristic_ \(p\) _equals_ \(2\) _or_ \(3\)_, and the normalization of_ \(X_{\overline{K},\operatorname{red}}\) _is isomorphic to_ \(\mathbb{P}^{1}_{\overline{K}}\)_;_
2. _if_ \(X\) _is geometrically reduced, then_ \(X\) _is geometrically integral, and there exists a unique closed singular point on_ \(X_{\overline{K}}\)_;_
3. _if_ \(X\) _is geometrically non-reduced, then either_ \(X_{\overline{K},\operatorname{red}}=\mathbb{P}^{1}_{\overline{K}}\) _or_ \(X_{\overline{K},\operatorname{red}}\) _has a unique singular point._
Proof.: (1) By (6) we have \(K_{Y}\sim-(p-1)C<0\). Applying Proposition 4.2, we see that \(Y\) is a conic (or just \(\mathbb{P}^{1}\)) with \(\deg_{K^{\prime}}K_{Y}=-2\); in turn we conclude that \(p=2\) or \(3\) and \((X_{\overline{K}})_{\operatorname{red}}^{\nu}\cong\mathbb{P}^{1}_{\overline{K}}\).
(2) If \(X\) is geometrically reduced, then \(X_{\overline{K}}\) is integral by Proposition 4.1. Since the normalization of \(X_{\overline{K}}\) is \(\mathbb{P}^{1}_{\overline{K}}\) and \(p_{a}(X_{\overline{K}})=h^{1}(X_{\overline{K}},\mathcal{O}_{X_{\overline{K}}})=1\), we see that \(X_{\overline{K}}\) has at most one singular point; since \(X\) is assumed not to be smooth over \(K\), there is exactly one.
(3) Again, by Proposition 4.1, \(X_{\overline{K},\operatorname{red}}\) is integral, and we have \(p_{a}(X_{\overline{K},\operatorname{red}})\leq 1\). If \(p_{a}(X_{\overline{K},\operatorname{red}})=0\), then \(X_{\overline{K},\operatorname{red}}\cong\mathbb{P}^{1}_{\overline{K}}\); and if \(p_{a}(X_{\overline{K},\operatorname{red}})=1\), then \(X_{\overline{K},\operatorname{red}}\) has a unique singular point.
In the following we consider the behavior of pulling back divisors. Keep in mind the following commutative diagram:

[commutative diagram (7) omitted]
Let \(\mathfrak{p}\) be a closed point on \(X\), and let \(\mathfrak{q}_{1},\dots,\mathfrak{q}_{r}\) be the points on \(Y\) lying over \(\mathfrak{p}\). We write
\[\pi^{*}\mathfrak{p}=e_{1}\mathfrak{q}_{1}+\dots+e_{r}\mathfrak{q}_{r},\]
where \(e_{i}\) is called the ramification index. Let \(f_{i}:=[\kappa(\mathfrak{q}_{i}):\kappa(\mathfrak{p})]\) denote the residue class degree. It is well known that
\[[L:K]=\sum_{i}e_{i}f_{i}. \tag{8}\]
**Example 4.4**.: Let \(X/K\) be the non-smooth regular curve defined by the affine equation \(sx^{2}+ty^{2}+1=0\), where \(\operatorname{char}K=2\) and \(s,t\in K\setminus K^{2}\). Let \(\mathfrak{p}\in X\) be the prime ideal \((y)\). Then \(\kappa(\mathfrak{p})=K(s^{1/2})=:L\). Now \(X\times_{K}L=\operatorname{Spec}L[x,y]/((s^{1/2}x+1)^{2}+ty^{2})\) is an integral curve. Let \(\pi\colon Y\to X\times_{K}L\) be the normalization morphism:
\[Y\cong\operatorname{Spec}K(s^{1/2},t^{1/2})[y]\longrightarrow X\times_{K}L,\qquad x\mapsto(t^{1/2}y+1)/s^{1/2},\quad y\mapsto y\]
(given here on coordinate rings).
We have \(H^{0}(Y,\mathcal{O}_{Y})=K(s^{1/2},t^{1/2})=:K^{\prime}\). If we denote by \(\mathfrak{q}\in Y\) the prime ideal \((y)\subset K^{\prime}[y]\), then \(\mathfrak{q}\) is a \(K^{\prime}\)-rational point and \(\pi^{*}\mathfrak{p}=\mathfrak{q}\), as predicted by (8).
### The case when \(X\) is geometrically reduced
This case was studied by Queen in [10]; we summarize his results as follows.
**Proposition 4.5**.: _With the notation and assumptions above._
1. _The non-smooth locus of_ \(X\) _is supported at a closed point_ \(\mathfrak{p}\) _and_ \(\kappa(\mathfrak{p})/K\) _is purely inseparable of height one with_ \([\kappa(\mathfrak{p}):K]\leq p^{2}\)_. In particular, there exists a unique point_ \(\mathfrak{q}\in Y\) _lying over_ \(\mathfrak{p}\)_._
2. _If_ \(p=3\)_, then_ \(Y\cong\mathbb{P}^{1}_{L}\)_,_ \(\pi^{*}\mathfrak{p}=3\mathfrak{q}\)_,_ \(\mathfrak{q}\) _is an_ \(L\)_-rational point, and_ \([\kappa(\mathfrak{p}):K]=3\)_._
3. _Assume_ \(p=2\)_._ (a) _If the point_ \(\mathfrak{q}\) _is_ \(L\)_-rational, then_ \(Y\cong\mathbb{P}^{1}_{L}\)_,_ \(\pi^{*}\mathfrak{p}=2\mathfrak{q}\) _and_ \([\kappa(\mathfrak{p}):K]=2\)_._ (b) _If the point_ \(\mathfrak{q}\) _is not_ \(L\)_-rational, then_ \(\deg_{L}\mathfrak{q}=2\)_, and_ \[\pi^{*}\mathfrak{p}=\begin{cases}\mathfrak{q},&\text{ if }\deg_{K}(\mathfrak{p})=2\text{ (case (i))};\\ 2\mathfrak{q},&\text{ if }\deg_{K}(\mathfrak{p})=4\text{ (case (ii))}.\end{cases}\]
Proof.: (1) See Lemma 1 and Theorem 2 in [10].
(2) When \(p=3\), \(\mathfrak{q}\) is an \(L\)-rational point. Now \([L:K]=ef=e[L:\kappa(\mathfrak{p})]\), thus \(e=[\kappa(\mathfrak{p}):K]\). On one hand, \(\mathfrak{p}\) is not a \(K\)-rational point (cf. [12, Proposition 2.13]), thus \(e=[\kappa(\mathfrak{p}):K]>1\). On the other hand, the absolute Frobenius morphism \(F_{X}\colon X^{\frac{1}{p}}\to X\) factors through \(Y\to X\), therefore \(e\leq 3\). Thus \(e=3\) and we are done.
(3) When \(\mathfrak{q}\) is an \(L\)-rational point, we use the proof of (2) to obtain (a). When \(\mathfrak{q}\) is not \(L\)-rational, we have \(\deg_{L}\mathfrak{q}=2\) by Proposition 4.2 and \(\deg_{K}\mathfrak{p}=2\) or \(4\) by (1). Now \([L:K]=ef=2e[L:\kappa(\mathfrak{p})]\). The statement follows.
We now give an explicit example of Proposition 4.5.
**Example 4.6**.: Let \(K\) be a field of characteristic \(2\) with \(a,b\in K\setminus K^{2}\). Consider the following affine regular curve
\[X:y^{2}=x^{3}+ax+b. \tag{9}\]
Let \(\alpha=a^{1/2}\), \(\beta=b^{1/2}\) and \(L=K(\alpha,\beta)\). Then \(X_{L}\) has a singular point \(\mathfrak{p}_{L}:=(x=\alpha,y=\beta)\). Let \(Y=(X_{L})^{\nu}\) be the normalization. It is easy to see that \(t:=(y+\beta)/(x+\alpha)\) is a local parameter at the point of \(Y\) lying over \(\mathfrak{p}_{L}\), thus \(Y\cong\mathbb{A}^{1}_{K(\alpha,\beta)}\). Besides, the normalization morphism is given by
\[x\mapsto t^{2},\quad y\mapsto t^{3}+\alpha t+\beta.\]
Now, the point \(\mathfrak{p}\in X\) defined by the prime ideal \((x^{2}+a)\) has residue field \(K[x,y]/(x^{2}+a,y^{2}+b)\cong K(\alpha,\beta)\), which has degree \(4\) over \(K\). The pullback of \(\mathfrak{p}\) becomes the ideal \((t^{4}+\alpha^{2})=(t^{2}+\alpha)^{2}\). If \(\mathfrak{q}\in Y\) is the point corresponding to \((t^{2}+\alpha)\), then
\[\pi^{*}\mathfrak{p}=2\mathfrak{q}\ \ \text{and}\ \ [\kappa(\mathfrak{q}):K(\alpha,\beta)]=2.\]
Therefore the completion \(\overline{X}\) of \(X\) is an example of case (ii) in Proposition 4.5. By replacing \(a\) (resp. \(b\)) by \(0\) in (9), we obtain an example for case (a) (resp. (i)) in Proposition 4.5.
### The case when \(X\) is geometrically non-reduced
Recall that there exists a height one field extension \(K\subset L\) such that \(X_{L}\) is integral but not normal, and for the normalization \(Y\), we have \(L\subsetneq K^{\prime}:=H^{0}(Y,\mathcal{O}_{Y})\subseteq K^{\frac{1}{p}}\). Since \(0\sim\pi^{*}K_{X}\sim K_{Y}+(p-1)C\), we see that \(\deg_{K^{\prime}}(p-1)C=2\). Thus \(C\) is supported on either a single point \(\mathfrak{q}\in Y\) or two points \(\mathfrak{q}_{1},\mathfrak{q}_{2}\in Y\) (this happens only when \(p=2\)). We list all the possibilities explicitly as follows.
**Proposition 4.7**.:
1. _If we denote_ \(X^{\prime}:=(X_{K^{\prime}})_{\rm red}\)_, then_ \(Y\cong(X^{\prime})^{\nu}\)_._
2. _If_ \(p=3\)_, then_ \(X_{L}\) _has a unique non-normal point,_ \(C=\mathfrak{q}\)_,_ \(Y\cong\mathbb{P}^{1}_{K^{\prime}}\) _and either_ \(\pi^{*}\mathfrak{p}=\mathfrak{q}\) _or_ \(\pi^{*}\mathfrak{p}=3\mathfrak{q}\)_._
3. _If_ \(p=2\)_, then we fall into one of the following cases:_ (a) \(C=2\mathfrak{q}\)_, thus_ \(\mathfrak{q}\) _is a_ \(K^{\prime}\)_-rational point of_ \(Y\) _and_ \(Y\cong\mathbb{P}^{1}_{K^{\prime}}\)_;_ (b) \(C=\mathfrak{q}_{1}+\mathfrak{q}_{2}\)_, and also_ \(Y\cong\mathbb{P}^{1}_{K^{\prime}}\)_;_ (c) \(C=\mathfrak{q}\)_,_ \(\kappa(\mathfrak{q})/K^{\prime}\) _is an extension of degree two, and either_ (i) \(Y\subset\mathbb{P}^{2}_{K^{\prime}}\) _is a smooth conic (possibly_ \(\mathbb{P}^{1}_{K^{\prime}}\)_), or_ (ii) \(Y\) _is isomorphic to the curve defined by_ \(sx^{2}+ty^{2}+z^{2}=0\) _for some_ \(s,t\in L\) _such that_ \([K(s,t):K]=4\)_._
## 5. Canonical bundle formula for fibrations with generic fiber of arithmetic genus zero
In this section, we shall treat a special kind of fibration whose generic fiber is a curve of arithmetic genus zero; this is an intermediate case arising in the treatment of inseparable fibrations. We work over an algebraically closed field \(k\) of characteristic \(p>0\).
**Theorem 5.1**.: _Let \(f\colon X\to S\) be a fibration of relative dimension one between normal \(\mathbb{Q}\)-factorial quasi-projective varieties. Let \(\Delta\) be an effective \(\mathbb{Q}\)-divisor on \(X\) and \(D\) a \(\mathbb{Q}\)-Cartier \(\mathbb{Q}\)-divisor on \(S\). Let \(\mathfrak{M}\) be a movable linear system without fixed components such that_
\[\deg_{K(S)}\mathfrak{M}:=\deg_{K(S)}M_{0}|_{X_{K(S)}}>0\ \ \text{for some}\ \ M_{0}\in\mathfrak{M}.\]
_Assume that \(K_{X}+M_{0}+\Delta\sim_{\mathbb{Q}}f^{*}D\) and that one of the following conditions holds:_
* (a) \(f\) _is separable;_
* (b) \(f\) _is inseparable, and_ \(S\) _is an open subset of a normal projective variety_ \(\overline{S}\) _which is of m.A.d.;_
* (c) \(f\) _is inseparable, and_ \(\dim S=2\)_._
_Then there exists a number \(t>0\) such that \(D\succeq_{\mathbb{Q}}tK_{S}\), where \(t=1\) (resp. \(1/2\), \(3/4\)) under condition (a) (resp. (b), (c))._
Proof.: By assumption we have \(p_{a}(X_{K(S)})=0\), thus \(\deg_{K(S)}(M_{0}+\Delta)=2\). In the following, we take \(M\in\mathfrak{M}\) to be a general member and \(T\) one of its irreducible horizontal components. Write \(M=T+G\). We claim that the restriction \(G|_{T^{\nu}}\succeq_{\mathbb{Q}}0\). This is obvious if \(T\) is not a component of \(G\); otherwise, we have \(M=2T+G^{\prime}\) where \(T\) is not a component of \(G^{\prime}\), but then \(G=T+G^{\prime}=\frac{1}{2}(M+G^{\prime})\), and the claimed assertion follows since \(M\) is movable. Write \(\Delta=\alpha T+\Delta^{\prime}\) such that \(\operatorname{Supp}\Delta^{\prime}\) does not contain \(T\). Notice that \(0\leq\alpha\leq 1\). We may write that
\[G+\Delta=G+\alpha T+\Delta^{\prime}=(1-\alpha)G+\alpha M+\Delta^{\prime},\]
thus \((G+\Delta)|_{T^{\nu}}\succeq_{\mathbb{Q}}0\).
Case (1): \(\deg_{K(S)}T=1\). Then \(T\to S\) is a birational section. By adjunction formula (Lemma 2.7), there exists an effective divisor \(E_{T^{\nu}}\) on \(T^{\nu}\) such that
\[(K_{X}+M+\Delta)|_{T^{\nu}}=(K_{X}+T+G+\Delta)|_{T^{\nu}}\sim_{\mathbb{Q}}K_{T ^{\nu}}+E_{T^{\nu}}.\]
Denote by \(\sigma\colon T^{\nu}\to S\) the natural birational morphism. Now we have \(K_{T^{\nu}}+E_{T^{\nu}}\sim_{\mathbb{Q}}\sigma^{*}D\), then by Lemma 2.2 we obtain
\[D\sim_{\mathbb{Q}}\sigma_{*}(K_{T^{\nu}}+E_{T^{\nu}})=K_{S}+E_{S}\]
where \(E_{S}\geq 0\).
Case (2): \(\deg_{K(S)}T=2\).
Case (2.1): \(T\to S\) is a separable morphism. As in case (1), there exists \(E_{T^{\nu}}\geq 0\) such that
\[\sigma^{*}D\sim_{\mathbb{Q}}(K_{X}+M+\Delta)|_{T^{\nu}}\sim_{\mathbb{Q}}K_{T^{ \nu}}+E_{T^{\nu}}.\]
Pushing forward via \(\sigma_{*}\) gives
\[D\sim_{\mathbb{Q}}\frac{1}{2}\sigma_{*}(K_{T^{\nu}}+E_{T^{\nu}})\succeq_{ \mathbb{Q}}K_{S}.\]
Case (2.2): \(T\to S\) is inseparable, which happens only when \(p=2\). Let \(S_{1}\) be the normalization of \(S\) in \(K(T)\). In other words, \(S_{1}\) is the intermediate variety of the Stein factorization \(T^{\nu}\to S_{1}\to S\), which is birational to \(T\) and finite over \(S\). Since
\(K(S_{1})/K(S)\) is a purely inseparable extension of degree two, \(X_{K(S_{1})}\) is integral by Proposition 4.2.
Case (2.2.1): \(X_{K(S_{1})}\) is normal. Consider the following commutative diagram:

[commutative diagram omitted; it involves \(X_{1}:=(X_{S_{1}})^{\nu}\), \(\pi\colon X_{1}\to X\), \(g\colon X_{1}\to S_{1}\) and \(\tau\colon S_{1}\to S\)]
The morphism \(\pi\colon X_{1}\to X\) is finite and purely inseparable, and the normalization morphism \(\nu\colon X_{1}\to X_{S_{1}}\) induces an isomorphism of the generic fibers over \(S_{1}\). Since \(X_{S_{1}}\to S_{1}\) has a birational section which is mapped to \(T\), the prime divisor \(T_{1}\) supported on \(\pi^{-1}T\) is a birational section, and \(\pi^{*}T=2T_{1}\). Applying Proposition 3.4, we have
\[\pi^{*}(K_{X}-f^{*}K_{S})\sim_{\mathbb{Q}}K_{X_{1}}-g^{*}K_{S_{1}}+E_{1}+N_{1},\]
where \(E_{1}\) is effective and \(N_{1}\) is exceptional over \(S\). Write \(\pi^{*}M=2T_{1}+E_{1}^{\prime}\). Then
\[\pi^{*}(K_{X}+M+\Delta)=K_{X_{1}}+2T_{1}+g^{*}\tau^{*}K_{S}-g^{*}K_{S_{1}}+E_{1}+\pi^{*}\Delta+E_{1}^{\prime}+N_{1}.\]
Denote by \(\sigma_{1}\colon T_{1}^{\nu}\stackrel{{\nu}}{{\to}}T_{1}\to S_{1}\) the composite morphism. Consider the restriction to \(T_{1}^{\nu}\). Applying the adjunction formula \((K_{X_{1}}+T_{1})|_{T_{1}^{\nu}}\sim_{\mathbb{Q}}K_{T_{1}^{\nu}}+\Delta_{T_{1}^{\nu}}\), we have
\[\tau^{*}D|_{T_{1}^{\nu}}\sim_{\mathbb{Q}}(K_{X_{1}}+T_{1}+T_{1}+g ^{*}\tau^{*}K_{S}-g^{*}K_{S_{1}}+E_{1}+\pi^{*}\Delta+E_{1}^{\prime}+N_{1})|_{T_ {1}^{\nu}}\] \[\sim_{\mathbb{Q}}(K_{T_{1}^{\nu}}-\sigma_{1}^{*}K_{S_{1}})+ \Delta_{T_{1}^{\nu}}+(T_{1}+E_{1}^{\prime}+E_{1}+\pi^{*}\Delta)|_{T_{1}^{\nu} }+\sigma_{1}^{*}\tau^{*}K_{S}+N_{1}|_{T_{1}^{\nu}}. \tag{10}\]
Note that
* \((T_{1}+E_{1}^{\prime})|_{T_{1}^{\nu}}\succeq_{\mathbb{Q}}0\) by the claim at the beginning of the proof since \(\pi^{*}M=2T_{1}+E_{1}^{\prime}\);
* both \((K_{T_{1}^{\nu}}-\sigma_{1}^{*}K_{S_{1}})\) and \(N_{1}|_{T_{1}^{\nu}}\) are exceptional over \(S\);
* both \(E_{1}\) and \(\Delta\) are vertical over \(S\), thus \((E_{1}+\pi^{*}\Delta)|_{T_{1}^{\nu}}\) is effective.
Applying \(\tau_{*}\sigma_{1*}\) to (10), by Lemma 2.2 we obtain that \(D\sim_{\mathbb{Q}}K_{S}+E_{S}\) for some divisor \(E_{S}\geq 0\).
Case (2.2.2): \(X_{K(S_{1})}\) is not normal. This means that \(X_{K(S)}\) is a non-smooth conic as described in Proposition 4.2 (3). In this case, we assume that one of the conditions (b), (c) holds.
Applying Proposition 4.2, we see that \(X_{K(S_{1})}\) is not normal along the preimage of the generic point of \(T\), and \(K(S_{1})\) is not algebraically closed in \(K(X)\otimes_{K(S)}K(S_{1})\). Let \(X_{1}:=(X_{S_{1}})^{\nu}\) and denote by \(S_{1}^{\prime}\) the normalization of \(S_{1}\) in \(X_{1}\). Let \(T_{1}\) be the irreducible divisor corresponding to \(\mathrm{Supp}\,\pi^{-1}T\). Denote by \(\rho\colon T_{1}^{\nu}\to T^{\nu}\) the morphism induced by the normalizations. These varieties fit into the following commutative diagram:

[commutative diagram omitted]
The conductor divisor of \(X_{1}\to X_{S_{1}}\) is of the form \(aT_{1}+C^{\prime}\) where \(a\geq 1\), namely, \(\nu^{*}K_{X_{S_{1}}}=K_{X_{1}}+aT_{1}+C^{\prime}\). Similarly to case (2.2.1), we can find a divisor \(N_{1}\) supported on the non-flat locus of \(g\) and a \(g\)-vertical effective divisor \(E_{1}\) on \(X_{1}\) such that
\[\pi^{*}K_{X}=K_{X_{1}}+aT_{1}+E_{1}+N_{1}-\nu^{*}f_{1}^{*}K_{S_{1}/S}.\]
We may write that \(\pi^{*}T=bT_{1}\) and \(\pi^{*}M=bT_{1}+E_{1}^{\prime}\). Then
\[\pi^{*}(K_{X}+M+\Delta)=K_{X_{1}}+(a+b)T_{1}+E_{2}+E_{1}^{\prime}+N_{1}-\nu^{* }f_{1}^{*}K_{S_{1}/S},\]
where \(E_{2}=E_{1}+\pi^{*}\Delta\). Since \((K_{X}+M+\Delta)_{K(S)}\sim 0\), we conclude that \(a=b=1\) and that \(T_{1}\to S_{1}^{\prime}\) is birational. Note that \(T\not\subseteq\operatorname{Supp}\Delta\), thus Lemma 2.8 gives
\[T_{1}|_{T_{1}^{\nu}}=\rho^{*}(T|_{T^{\nu}})=\rho^{*}(K_{T^{\nu}}+B_{T^{\nu}}+M| _{T^{\nu}})-\pi^{*}(K_{X}+M+\Delta)|_{T_{1}^{\nu}}.\]
Applying the adjunction formula on \(T_{1}\) we have
\[\pi^{*}(K_{X}+M+\Delta)|_{T_{1}^{\nu}} =(K_{X_{1}}+T_{1})|_{T_{1}^{\nu}}+T_{1}|_{T_{1}^{\nu}}+(E_{2}+E_{1 }^{\prime}+N_{1}-\nu^{*}f_{1}^{*}K_{S_{1}/S})|_{T_{1}^{\nu}}\] \[\sim_{\mathbb{Q}}K_{T_{1}^{\nu}}+\Delta_{T_{1}^{\nu}}+\rho^{*}(K_ {T^{\nu}}+B_{T^{\nu}}+M|_{T^{\nu}})-\pi^{*}(K_{X}+M+\Delta)|_{T_{1}^{\nu}}\] \[\qquad+(E_{2}+E_{1}^{\prime}+N_{1}-\nu^{*}f_{1}^{*}K_{S_{1}/S})| _{T_{1}^{\nu}}\]
It follows that
\[\begin{split} 2\pi^{*}(K_{X}+M+\Delta)|_{T_{1}^{\nu}}\sim_{ \mathbb{Q}}\pi^{*}f^{*}K_{S}+K_{T_{1}^{\nu}}\\ +\Delta_{T_{1}^{\nu}}+\rho^{*}B_{T^{\nu}}+\pi^{*}M|_{T_{1}^{\nu}} +(E_{2}+E_{1}^{\prime})|_{T_{1}^{\nu}}+N_{2}\end{split} \tag{11}\]
where \(N_{2}=N_{1}|_{T_{1}^{\nu}}+\rho^{*}(K_{T^{\nu}}-\delta^{*}K_{S_{1}})\) is exceptional over \(S\).
If condition (b) holds, then \(T_{1}^{\nu}\) is an open subset of a normal projective variety \(\overline{T}_{1}\) of m.A.d. Thus by Proposition 2.9, \(K_{\overline{T}_{1}}\geq 0\). Note that \(2\pi^{*}(K_{X}+M+\Delta)|_{T_{1}^{\nu}}=2\tau^{*}D|_{T_{1}^{\nu}}\). Pushing down the equation (11) via the morphism \(T_{1}^{\nu}\to S\) yields \(D\sim_{\mathbb{Q}}\frac{1}{2}K_{S}+E_{S}\) for some divisor \(E_{S}\geq 0\).
If condition (c) holds, then \(S_{1}^{\prime}\to S\) is of height one and of degree \(\geq p^{2}=4\); since a height-one morphism from a surface has degree at most \(p^{2}\), this implies \(S_{1}^{\prime}=S^{\frac{1}{p}}\). It follows that \(K_{T_{1}^{\nu}}=g^{*}K_{S^{\frac{1}{p}}}|_{T_{1}^{\nu}}+N_{3}\) where \(N_{3}\) is a divisor on \(T_{1}^{\nu}\) exceptional over \(S\). Pushing down the equation (11) via the morphism \(T_{1}^{\nu}\to S\) yields \(D\sim_{\mathbb{Q}}\frac{3}{4}K_{S}+E_{S}\) for some divisor \(E_{S}\geq 0\).
As an application of the above theorem, if \(K_{X}+M_{0}+\Delta\) is anti-nef we obtain the following structure result, which will be used in the proof of Theorem 1.5.
**Proposition 5.2**.: _Assume that singular varieties admit smooth resolutions of singularities. Let \(X\) be a \(\mathbb{Q}\)-factorial normal projective variety, and let \(\Delta\) be an effective \(\mathbb{Q}\)-divisor on \(X\). Let \(f\colon X\to S\) be a fibration of relative dimension one, and \(\mathfrak{M}\) a movable linear system without fixed components such that \(\deg_{K(S)}\mathfrak{M}>0\). Assume that the following three conditions hold:_
1. \(S\) _is of m.A.d.;_
2. \(-(K_{X}+M_{0}+\Delta)\) _is nef, where_ \(M_{0}\in\mathfrak{M}\)_; and_
3. _either_ \((X_{K(S)},\Delta_{K(S)})\) _is klt, or, if_ \(T\) _is the (necessarily unique) horizontal irreducible component of_ \(\Delta\) _with coefficient one, then_ \(\deg_{K(S)}T=1\) _and the restriction_ \(T|_{T^{\nu}}\) _to the normalization of_ \(T\) _is pseudo-effective._
_Then_
(i) \(S\) _is an abelian variety;_

(ii) \(M_{0}\) _is semi-ample with numerical dimension_ \(\nu(M_{0})=1\)_, that is,_ \(|M_{0}|\) _defines a fibration_ \(g\colon X\to\mathbb{P}^{1}\)_;_

(iii) _for a general_ \(t\in\mathbb{P}^{1}\)_, the fiber of_ \(g\) _over_ \(t\) _(denoted by_ \(G_{t}\)_) is isomorphic to an abelian variety, and_ \(\Delta|_{G_{t}}\equiv 0\)_._
Proof.: Let \(M\in\mathfrak{M}\) be a general divisor and write \(M=T+M^{\prime}+V\) where \(T\) is a horizontal irreducible component, \(V\) is the vertical part, and \(M^{\prime}\) is the remaining part. We remark that it is allowed that \(M^{\prime}=0\), \(M^{\prime}=T\), or \(M^{\prime}\) is another horizontal component. Since \(T\) is dominant over \(S\), it is of m.A.d., and we have \(K_{T^{\nu}}\succeq_{\mathbb{Q}}0\). We claim that \((M^{\prime}+V)|_{T^{\nu}}\succeq_{\mathbb{Q}}0\). Indeed, this is trivial if \(T\) is not a component of \(M^{\prime}\); otherwise, namely, \(T\leq M^{\prime}\), then by \(\deg_{K(S)}M_{0}\leq 2\), we have \(M^{\prime}=T\), and it follows that \((M^{\prime}+V)|_{T^{\nu}}\sim_{\mathbb{Q}}\frac{1}{2}(M+V)|_{T^{\nu}}\succeq_ {\mathbb{Q}}0\).
For convenience, we first establish a lemma.
**Lemma 5.3**.: _With the notation above, we have \(V=0\), \(M^{\prime}|_{T^{\nu}}\sim_{\mathbb{Q}}\Delta|_{T^{\nu}}\sim_{\mathbb{Q}}0\), \((K_{X}+M+\Delta)|_{T^{\nu}}\equiv 0\), \(T^{\nu}\) is isomorphic to an abelian variety, and \(T\) is normal in codimension one._
Proof of the lemma.: We first remark that
(\(*\)) Suppose \(V\neq 0\); as \(M\) varies in \(\mathfrak{M}\) we get a family of numerically equivalent horizontal divisors \(\mathfrak{T}\) and a family of numerically equivalent vertical divisors \(\mathfrak{V}\), containing \(T\) and \(V\) respectively. Both families of divisors cover \(X\); therefore, \(\operatorname{Supp}T\cap\operatorname{Supp}V\neq\emptyset\).
By the adjunction formula (Lemma 2.7), we have
\[(K_{X}+M+\Delta)|_{T^{\nu}}=(K_{X}+T+V+M^{\prime}+\Delta)|_{T^{\nu}}\sim_{ \mathbb{Q}}K_{T^{\nu}}+\Delta_{T^{\nu}}+(M^{\prime}+V)|_{T^{\nu}}+\Delta|_{T^{ \nu}},\]
which is anti-nef. Since \(K_{T^{\nu}},\Delta_{T^{\nu}},\Delta|_{T^{\nu}},(M^{\prime}+V)|_{T^{\nu}}\succeq _{\mathbb{Q}}0\), we see that
\[K_{T^{\nu}}\sim_{\mathbb{Q}}M^{\prime}|_{T^{\nu}}\sim_{\mathbb{Q}}0\ \ \text{ and }\ \Delta_{T^{\nu}}=V|_{T^{\nu}}=\Delta|_{T^{\nu}}=0.\]
We therefore conclude that
* \(\operatorname{Supp}T\cap\operatorname{Supp}V=\emptyset\), hence \((*)\) implies that a general \(M\in\mathfrak{M}\) has no vertical part;
* by Lemma 2.7 and Proposition 2.9, \(T\) is normal in codimension one, and \(T^{\nu}\) is isomorphic to an abelian variety;
* \((K_{X}+M+\Delta)|_{M}\equiv 0\).
This completes the proof of the lemma.
We proceed with the proof of the proposition. For assertions (i) and (ii), we divide the proof into two cases according to whether \(\deg_{K(S)}(K_{X}+M_{0}+\Delta)\) equals zero.
Case (1): \(\deg_{K(S)}(K_{X}+M_{0}+\Delta)<0\). In this case \(\deg_{K(S)}M_{0}=1\), a general \(M\in|M_{0}|\) is reduced and irreducible, and \((X_{K(S)},\Delta_{K(S)})\) is klt. Let \(N=-(K_{X}+M+\Delta)\). Then \(N|_{M^{\nu}}\equiv 0\) by Lemma 5.3.
Take an ample divisor \(H\) on \(S\). For any rational number \(\epsilon>0\) the divisor \(N+\epsilon f^{*}H\) is nef and big, which is \(\mathbb{Q}\)-linearly equivalent to some effective \(\mathbb{Q}\)-divisor \(\Delta_{\epsilon}\) by Lemma 2.5. We have
\[K_{X}+M+\Delta+\Delta_{\epsilon}\sim_{\mathbb{Q}}\epsilon f^{*}H.\]
By Theorem 5.1, we have \(\epsilon H\succeq_{\mathbb{Q}}tK_{S}\) for some \(t>0\) (independent of \(\epsilon\)). This shows that \(-K_{S}\) is nef, but \(K_{S}\succeq_{\mathbb{Q}}0\) since \(S\) is of m.A.d., thus \(K_{S}\equiv 0\). Therefore, \(S\) is an abelian variety by Proposition 2.9.
Next, we apply Hodge Index Theorem to show that \(M_{0}\) is semi-ample. Take a surface \(W\) which is the intersection of \(\dim X-2\) very general hyperplane sections on \(X\) intersecting the base locus of \(|M_{0}|\) properly. Then both \(M|_{W}\) and \(N|_{W}\) are nef but not numerically trivial. We only need to show that \(\nu(M_{0}|_{W})=1\), which implies that \(|M_{0}|\) is base-point free and \(\nu(M_{0})=1\). Otherwise, \((M|_{W})^{2}>0\), but since \((M|_{W})\cdot(N|_{W})=0\), Hodge Index Theorem implies that \((N|_{W})^{2}<0\), which contradicts the fact that \(N|_{W}\) is nef.
Case (2): \(\deg_{K(S)}(K_{X}+M_{0}+\Delta)=0\).
Since \(-(K_{X}+M_{0}+\Delta)\) is nef, by Lemma 2.6 there is a big open subset \(S^{\circ}\subseteq S\) and a \(\mathbb{Q}\)-Cartier \(\mathbb{Q}\)-divisor \(D_{S^{\circ}}\) on \(S^{\circ}\) such that
* \(-(K_{X}+M_{0}+\Delta)|_{X_{S^{\circ}}}\sim_{\mathbb{Q}}f^{*}D_{S^{\circ}}\);
* the closure divisor \(D\) of \(D_{S^{\circ}}\) in \(S\) is pseudo-effective;
* \(X_{S^{\circ}}\to S^{\circ}\) is flat.
By Theorem 5.1, we have that \(-D_{S^{\circ}}\succeq_{\mathbb{Q}}tK_{S^{\circ}}\) for some \(t>0\). But since \(S\) is of m.A.d., Proposition 2.9 tells us that \(K_{S^{\circ}}\succeq_{\mathbb{Q}}0\). We conclude that \(D\equiv K_{S}\equiv 0\), thus \(S\) is an abelian variety by Proposition 2.9 again.
Next, we prove that \(M_{0}\) is semi-ample. Let \(M\in|M_{0}|\) be a general element. It suffices to verify that \(M|_{M}\equiv 0\), which implies that \(|M_{0}|\) is base-point free with \(\nu(M_{0})=1\). With the notation from the beginning of the proof, \(M=T+M^{\prime}\) (recall \(V=0\) by Lemma 5.3); since \(M^{\prime}|_{T^{\nu}}\equiv 0\) by Lemma 5.3, we only need to show that \(T|_{T^{\nu}}\equiv 0\). We separate into the following cases.
Case (2.I): \(M^{\prime}\neq 0\). If \(M^{\prime}=T\) then we are done since \(M^{\prime}|_{T^{\nu}}\equiv 0\). Assume \(M^{\prime}\neq T\); then \(M^{\prime}\cap T=\emptyset\). We apply the Hodge Index Theorem again. Take a surface \(W\) which is the intersection of \(\dim X-2\) very general hyperplane sections on \(X\) intersecting the base locus of \(|M_{0}|\) properly. Then both \(M^{\prime}|_{W}\) and \(T|_{W}\) are nef divisors. We only need to show that \(\nu(M|_{W})=1\). Since \((M^{\prime}|_{W})\cdot(T|_{W})=0\), it suffices to verify that \((T|_{W})^{2}=(M^{\prime}|_{W})^{2}=0\). Indeed, if, say, \((T|_{W})^{2}>0\), the Hodge Index Theorem implies that \((M^{\prime}|_{W})^{2}\leq 0\). Since \(M^{\prime}|_{W}\) is nef, \(M^{\prime}|_{W}\equiv t(T|_{W})\) for some \(t>0\), but this contradicts the condition \((M^{\prime}|_{W})\cdot(T|_{W})=0\).
In the remaining cases, we assume that \(M\) is reduced and irreducible.
Case (2.II): \(\deg_{K(S)}M_{0}=1\). In this situation, we have \(\deg_{K(S)}\Delta>0\), thus \(\operatorname{Supp}\Delta\) must contain a horizontal component \(T_{1}\). By Lemma 5.3, we know that \(T_{1}\) does not intersect a general \(M\); but the family of divisors in \(|M_{0}|\) covers \(X\), so \(T_{1}\) is a component of some \(M_{1}\in|M_{0}|\) and \(\deg_{K(S)}T_{1}=1\). We may write that
\[M_{1}=T_{1}+V_{1}\ \text{ and }\ \Delta=aT_{1}+\Delta^{\prime}\]
where \(V_{1}\) is a vertical divisor and \(a\) is the coefficient of \(T_{1}\) in \(\Delta\). Then
\[K_{X}+M+\Delta\sim_{\mathbb{Q}}(K_{X}+T_{1})+aM+(1-a)V_{1}+\Delta^{\prime}.\]
By the adjunction formula, we have
\[(K_{X}+M+\Delta)|_{T_{1}^{\nu}}\sim_{\mathbb{Q}}K_{T_{1}^{\nu}}+\Delta_{T_{1}^ {\nu}}+(aM+(1-a)V_{1}+\Delta^{\prime})|_{T_{1}^{\nu}}\equiv 0.\]
Note that \(K_{T_{1}^{\nu}}\succeq_{\mathbb{Q}}0\) as \(T_{1}^{\nu}\) is of m.A.d. Since \(M\) is movable, it follows that
\[K_{T_{1}^{\nu}}\sim_{\mathbb{Q}}\Delta_{T_{1}^{\nu}}\sim_{\mathbb{Q}}M|_{T_{1} ^{\nu}}\sim_{\mathbb{Q}}(1-a)V_{1}|_{T_{1}^{\nu}}\sim_{\mathbb{Q}}\Delta^{ \prime}|_{T_{1}^{\nu}}\sim_{\mathbb{Q}}0.\]
Since \(M_{1}=T_{1}+V_{1}\) is connected by Lemma 2.4, if \(a<1\) we obtain \(V_{1}=0\); if \(a=1\), then by \(M|_{T_{1}^{\nu}}\sim_{\mathbb{Q}}(T_{1}+V_{1})|_{T_{1}^{\nu}}\sim_{\mathbb{Q}}0\) and the assumption that \(T_{1}|_{T_{1}^{\nu}}\) is pseudo-effective, we also have \(V_{1}=0\). In turn we conclude that \(M_{1}=T_{1}\) and \(M|_{T_{1}^{\nu}}\sim_{\mathbb{Q}}0\).
Case (2.III): \(\deg_{K(S)}M_{0}=2\). We fall into one of the following two cases
* (III-1) The generic fiber \(X_{K(S)}\) is not geometrically normal (this can happen only when \(p=2\)), that is, \(X_{K(S)}\) is a non-smooth conic over \(K(S)\).
* (III-2) \(X_{K(S)}\) is smooth over \(K(S)\).
Fix a general divisor \(M_{1}\in|M_{0}|\), which is a prime divisor. Let \(S^{\prime}\to M_{1}\) be the normalization morphism. Then \(S^{\prime}\) is isomorphic to an abelian variety. Perform the base change \(S^{\prime}\to S\), which is a morphism of degree two. Note that in either of the above two cases, \(X_{K(S^{\prime})}\) is integral by Proposition 4.2. Let \(\nu\colon X_{1}:=(X_{S^{\prime}})^{\nu}\to X_{S^{\prime}}\) be the normalization morphism. Let \(f_{1}\colon X_{1}\to S_{1}\) be the fibration arising from the Stein factorization of \(X_{1}\to S^{\prime}\). We have the following commutative diagram:

[commutative diagram omitted; \(\pi^{\prime}\colon X_{S^{\prime}}\to X\) and \(\pi\colon X_{1}\to X\) denote the natural projections appearing in it]
In case (III-1), by Proposition 4.2 (3), \(X_{S^{\prime}}\) is not normal along \(\pi^{\prime-1}M_{1}\), and \(S_{1}\to S^{\prime}\) is a finite purely inseparable morphism of degree two. Denote by \(T_{1}\) the prime divisor such that \(T_{1}=\operatorname{Supp}\pi^{*}M_{1}\). Then \(T_{1}\) is a birational section over \(S_{1}\) and \(\pi^{*}M_{1}=T_{1}\). We may write that
\[\pi^{*}K_{X}\sim K_{X_{1}}+T_{1}+V_{1},\]
where \(V_{1}\geq 0\) is a vertical divisor over \(S_{1}\). Hence
\[\pi^{*}(K_{X}+M+\Delta)\sim K_{X_{1}}+\pi^{*}M+T_{1}+V_{1}+\pi^{*}\Delta.\]
Since \(\pi^{*}M\sim T_{1}\), we can apply the argument of case (2.II) and obtain that \(T_{1}|_{T_{1}^{\nu}}\equiv 0\), which implies that \(M|_{M}\equiv 0\).
In case (III-2), we have \(S_{1}=S^{\prime}\) and
\[\pi^{*}K_{X}\sim K_{X_{1}}+V_{1}\]
for some effective divisor \(V_{1}\) vertical over \(S_{1}\).
If \(M_{1}\to S\) is purely inseparable, then \(\pi^{*}M_{1}=2T_{1}\) where \(T_{1}\) is a birational section over \(S_{1}\). Applying the adjunction formula on \(T_{1}\) we have
\[(K_{X_{1}}+T_{1}+T_{1}+V_{1}+\pi^{*}\Delta)|_{T_{1}^{\nu}}\sim_{\mathbb{Q}}K_{ T_{1}^{\nu}}+\Delta_{T_{1}^{\nu}}+(T_{1}+V_{1}+\pi^{*}\Delta)|_{T_{1}^{\nu}} \equiv 0.\]
Since \(T_{1}|_{T_{1}^{\nu}}\sim_{\mathbb{Q}}\frac{1}{2}\pi^{*}M_{1}|_{T_{1}^{\nu}}\succeq_{\mathbb{Q}}0\) we conclude that \(\pi^{*}M|_{T_{1}^{\nu}}\sim_{\mathbb{Q}}0\), which shows that \(M|_{M_{1}^{\nu}}\sim_{\mathbb{Q}}0\).
Now consider the case that for general \(M\in|M_{0}|\), \(M\to S\) is a separable morphism of degree two. Since the normalization \(M^{\nu}\) of \(M\) is isomorphic to an abelian variety, the natural morphism \(M^{\nu}\to S\) is an étale morphism of abelian varieties. Note that
* up to isomorphism, there are only finitely many abelian varieties étale over \(S\) of degree two;
* if \(M\in|M_{0}|\) is birationally equivalent to \(M_{1}\) then \(K(M_{1})\otimes_{K(S)}K(M)\cong K(M)\times K(M)\), hence \(\pi^{*}M\) splits into two distinct components.
From this we conclude that the general members \(M\) are isomorphic to each other, and that \(\pi^{*}M\) splits into the sum of two divisors \(T_{1}+T_{2}\) which vary as \(M\) varies. We may then consider the fibration \(X_{S^{\prime}}\to S^{\prime}\) and apply the argument of case (2.I).
Finally, let us prove the statement (iii). The argument is a minor modification of that of Lemma 5.3 and case (2.II). By [10, Theorem 12.2.4], there is a nonempty open subset of \(\mathbb{P}^{1}\) over which the fiber \(X_{t}\) satisfies Serre's condition \(S_{2}\). Take a general fiber \(X_{t}\) which is \(S_{2}\) and integral. Note that a divisor in \(|M_{0}|\) is a sum of some fibers of \(g\colon X\to\mathbb{P}^{1}\), thus \(X_{t}|_{X_{t}^{\nu}}\equiv 0\). Considering the adjunction formula on \(X_{t}^{\nu}\)
\[(K_{X}+M+\Delta)|_{X_{t}^{\nu}}\equiv K_{X_{t}^{\nu}}+\Delta_{X_{t}^{\nu}}+ \Delta|_{X_{t}^{\nu}},\]
which is anti-nef, and by \(K_{X_{t}^{\nu}}\succeq_{\mathbb{Q}}0\) we conclude that
\[K_{X_{t}^{\nu}}\sim_{\mathbb{Q}}\Delta_{X_{t}^{\nu}}\sim_{\mathbb{Q}}\Delta|_{ X_{t}^{\nu}}\sim_{\mathbb{Q}}0.\]
Therefore, \(X_{t}\) is normal in codimension one and \(X_{t}^{\nu}\) is isomorphic to an abelian variety. Moreover, since \(X_{t}\) is normal in codimension one and satisfies \(S_{2}\), it is normal by Serre's criterion, so \(X_{t}=X_{t}^{\nu}\).
## 6. Canonical bundle formula for separable fibrations
Throughout this section, we work over an algebraically closed field \(k\) of characteristic \(p>0\). We aim to deduce a canonical bundle formula for separable fibrations. We first treat a general case and obtain the following theorem, which can be regarded as an addendum to Witaszek's result (Theorem 1.1).
**Theorem 6.1**.: _Let \(f\colon X\to S\) be a fibration of relative dimension one between \(\mathbb{Q}\)-factorial normal quasi-projective varieties. Let \(\Delta\) be an effective \(\mathbb{Q}\)-divisor on \(X\). Let \(\tau_{1}\colon S_{1}\to S\) be a finite purely inseparable morphism of height one with \(S_{1}\) normal. Assume that_
1. \((X_{K(S)},\Delta_{K(S)})\) _is lc;_
2. \(K_{X}+\Delta\sim_{\mathbb{Q}}f^{*}D\) _for some_ \(\mathbb{Q}\)_-divisor_ \(D\) _on_ \(S\)_; and_
3. \(X_{K(S_{1})}\) _is reduced but not normal (which happens only when_ \(p=2,3\)_)._
_Then there exist finite morphisms \(\tau\colon\bar{T}\to S\), \(\tau^{\prime}\colon\bar{T}^{\prime}\to\bar{T}\) and \(\tau^{\prime}_{1}\colon\bar{T}^{\prime}\to S_{1}\) fitting into a commutative diagram (so that \(\tau\circ\tau^{\prime}=\tau_{1}\circ\tau^{\prime}_{1}\))_
_and an effective \(\mathbb{Q}\)-divisor \(E_{\bar{T}^{\prime}}\) on \(\bar{T}^{\prime}\) such that_
\[(\tau_{1}\circ\tau^{\prime}_{1})^{*}D\sim_{\mathbb{Q}}aK_{\bar{T}^{\prime}}+b\tau^{\prime*}K_{\bar{T}}+c\tau^{\prime*}_{1}(\tau^{*}_{1}K_{S}-K_{S_{1}})+E_{\bar{T}^{\prime}}\]
_where \(a,b,c\geq 0\) are rational numbers depending on the coefficients of \(\Delta_{K(S)}\). Moreover,_
* _if_ \((X_{K(S)},\Delta_{K(S)})\) _is klt, then_ \(c\geq\frac{1-\theta}{p(p-1)}>0\) _where_ \(\theta\) _is the maximum of the coefficients of_ \(\Delta_{K(S)}\)_; and_
* _the finite morphisms_ \(\tau^{\prime}\) _and_ \(\tau_{1}\) _are purely inseparable, and if_ \(f\) _is separable, then_ \(\tau\) _and_ \(\tau^{\prime}_{1}\) _are also purely inseparable._
Proof.: Let \(\nu\colon X^{\prime}\to X_{S_{1}}\) be the normalization morphism. Since \(X_{K(S_{1})}\) is not normal, we may find an \(f\)-horizontal irreducible component \(T^{\prime}\) on \(X^{\prime}\) of the conductor divisor. Denote by \(\pi\colon X^{\prime}\to X\) the natural morphism, which is purely inseparable of height one. Let \(f^{\prime}\colon X^{\prime}\to S^{\prime}\) be the fibration arising from the Stein factorization of \(X^{\prime}\to S_{1}\). We fit these varieties into the following commutative diagram:

[commutative diagram omitted; it involves \(\pi\colon X^{\prime}\to X\), \(f^{\prime}\colon X^{\prime}\to S^{\prime}\), \(f_{1}\colon X^{\prime}\to S_{1}\) and \(\tau_{1}\colon S_{1}\to S\)]
Step 1: We first consider the case when \(S\) and \(S_{1}\) are both regular.
By results of Section 3.2, there exists an effective divisor \(E^{\prime}\) such that
\[\det\mathcal{F}_{X^{\prime}/X}=f^{*}_{1}\det\mathcal{F}_{S_{1}/S}(-p^{\gamma_ {1}}T^{\prime}-E^{\prime})\]
where \(\gamma_{1}=0\) or \(1\) and \(\operatorname{Supp}\,E^{\prime}\) does not contain \(T^{\prime}\). Then we can write that
\[\pi^{*}K_{X}\sim_{\mathbb{Q}}K_{X^{\prime}}+(p-1)(p^{\gamma_{1}}T^{\prime}+E^ {\prime})+f^{*}_{1}(\tau^{*}_{1}K_{S}-K_{S_{1}}). \tag{12}\]
Since \(p_{a}(X^{\prime}_{K(S^{\prime})})=0\), we have \(\deg_{K(S^{\prime})}(p-1)(p^{\gamma_{1}}T^{\prime}+E^{\prime}+\pi^{*}\Delta)=2\), thus \(\deg_{K(S^{\prime})}T^{\prime}=1\) or \(2\). Hence \(T^{\prime}\to S^{\prime}\) is either birational or finite of degree \(2\), where the latter happens only when \(p=2\).
Let \(\rho\colon T^{\prime\nu}\to T^{\nu}\) be the morphism arising from the normalization of \(T^{\prime}\to T\), where \(T\subset X\) denotes the image of \(T^{\prime}\). We may write \(\pi^{*}T=p^{\gamma_{2}}T^{\prime}\) where \(\gamma_{2}=0\) or \(1\). Write \(\Delta=\alpha T+\Delta^{\prime}\), where \(\operatorname{Supp}\Delta^{\prime}\not\supset T\). By the adjunction formula there exists an effective \(\mathbb{Q}\)-divisor \(\Delta_{T^{\prime\nu}}\) on \(T^{\prime\nu}\) such that
\((K_{X^{\prime}}+T^{\prime})|_{T^{\prime\nu}}\sim_{\mathbb{Q}}K_{T^{\prime\nu}}+ \Delta_{T^{\prime\nu}}\). By the relation (12) we have
\[\begin{split}\pi^{*}(K_{X}+\Delta)|_{T^{\prime\nu}}& \sim_{\mathbb{Q}}(K_{X^{\prime}}+T^{\prime})|_{T^{\prime\nu}}+\big{(}(p-1)p^{ \gamma_{1}}+\alpha p^{\gamma_{2}}-1\big{)}T^{\prime}|_{T^{\prime\nu}}\\ &\qquad+((p-1)E^{\prime}+\pi^{*}\Delta^{\prime})|_{T^{\prime\nu}} +f_{1}^{*}(\tau_{1}^{*}K_{S}-K_{S_{1}})|_{T^{\prime\nu}}\\ &\sim_{\mathbb{Q}}K_{T^{\prime\nu}}+\Delta_{T^{\prime\nu}}+ \big{(}(p-1)p^{\gamma_{1}}+\alpha p^{\gamma_{2}}-1\big{)}T^{\prime}|_{T^{ \prime\nu}}\\ &\qquad+((p-1)E^{\prime}+\pi^{*}\Delta^{\prime})|_{T^{\prime\nu}} +f_{1}^{*}(\tau_{1}^{*}K_{S}-K_{S_{1}})|_{T^{\prime\nu}}.\end{split} \tag{13}\]
By Lemma 2.8, there exists an effective divisor \(B_{T^{\nu}}\) on \(T^{\nu}\) such that
\[(1-\alpha)p^{\gamma_{2}}T^{\prime}|_{T^{\prime\nu}}\sim_{\mathbb{Q}}(1-\alpha)\rho^{*}(T|_{T^{\nu}})\sim_{\mathbb{Q}}\rho^{*}(K_{T^{\nu}}+B_{T^{\nu}})-\pi^{*}(K_{X}+\Delta)|_{T^{\prime\nu}}. \tag{14}\]
Multiplying the equations (13) and (14) by \((1-\alpha)p^{\gamma_{2}}\) and \(\big{(}(p-1)p^{\gamma_{1}}+\alpha p^{\gamma_{2}}-1\big{)}\) respectively and then summing up, we obtain
\[\begin{split}\big{(}(1-\alpha)p^{\gamma_{2}}+(p-1)& p^{\gamma_{1}}+\alpha p^{\gamma_{2}}-1\big{)}\pi^{*}(K_{X}+\Delta)|_{T^{ \prime\nu}}\\ &\sim_{\mathbb{Q}}(1-\alpha)p^{\gamma_{2}}K_{T^{\prime\nu}}+ \big{(}(p-1)p^{\gamma_{1}}+\alpha p^{\gamma_{2}}-1\big{)}\rho^{*}K_{T^{\nu}} \\ &\qquad+(1-\alpha)p^{\gamma_{2}}f_{1}^{*}(\tau_{1}^{*}K_{S}-K_{S_{ 1}})|_{T^{\prime\nu}}+E_{T^{\prime\nu}},\end{split} \tag{15}\]
where
\[E_{T^{\prime\nu}}:=(1-\alpha)p^{\gamma_{2}}\Delta_{T^{\prime\nu} }+(1-\alpha)p^{\gamma_{2}}((p-1)E^{\prime}+\pi^{*}\Delta^{\prime})|_{T^{\prime \nu}}\\ +\big{(}(p-1)p^{\gamma_{1}}+\alpha p^{\gamma_{2}}-1\big{)}\rho^{*} B_{T^{\nu}}\geq 0.\]
Step 2: There is a big regular open subset \(S^{\circ}\subseteq S\) over which the preimage of \(S^{\circ}\) in \(S_{1}\) is also regular. We consider the fibration \(X_{S^{\circ}}\to S^{\circ}\); by Step 1 we get the relation (15), which holds on \((T^{\prime\nu})_{S^{\circ}}\). To extend this relation to the whole variety \(T^{\prime\nu}\), we only need to add a divisor \(N_{T^{\prime\nu}}\) on \(T^{\prime\nu}\), exceptional over \(S\), on the right hand side; that is
\[\begin{split}\big{(}(1-\alpha)p^{\gamma_{2}}+(p-1)p^{\gamma_{1}}+ \alpha p^{\gamma_{2}}-1\big{)}\pi^{*}(K_{X}+\Delta)|_{T^{\prime\nu}}\\ \sim_{\mathbb{Q}}(1-\alpha)p^{\gamma_{2}}K_{T^{\prime\nu}}& +\big{(}(p-1)p^{\gamma_{1}}+\alpha p^{\gamma_{2}}-1\big{)}\rho^{*}K_{T^{\nu}} \\ &+(1-\alpha)p^{\gamma_{2}}f_{1}^{*}(\tau_{1}^{*}K_{S}-K_{S_{1}})|_ {T^{\prime\nu}}+E_{T^{\prime\nu}}+N_{T^{\prime\nu}}.\end{split} \tag{16}\]
Finally we denote by \(\bar{T},\bar{T}^{\prime}\) the normalizations of \(S\) in \(T^{\nu},T^{\prime\nu}\) respectively. By construction the varieties \(\bar{T},\bar{T}^{\prime},S_{1},S\) fit into the commutative diagram claimed in the theorem. We push down the relation (16) to \(\bar{T}^{\prime}\) via the natural morphism \(\sigma\colon T^{\prime\nu}\to\bar{T}^{\prime}\) and obtain the relation
\[\begin{split}&(\tau_{1}\circ\tau_{1}^{\prime})^{*}\big{(}(1-\alpha)p^{\gamma_{2}}+(p-1)p^{\gamma_{1}}+\alpha p^{\gamma_{2}}-1\big{)}D\\ &\sim_{\mathbb{Q}}(1-\alpha)p^{\gamma_{2}}K_{\bar{T}^{\prime}}+\big{(}(p-1)p^{\gamma_{1}}+\alpha p^{\gamma_{2}}-1\big{)}\tau^{\prime*}K_{\bar{T}}+(1-\alpha)p^{\gamma_{2}}\tau_{1}^{\prime*}(\tau_{1}^{*}K_{S}-K_{S_{1}})+\sigma_{*}E_{T^{\prime\nu}}.\end{split}\]
The rational numbers \(a,b,c\) and the divisor \(E_{\bar{T}^{\prime}}\) are determined by the above equation. In particular, if \(\theta\) denotes the maximum of the coefficients of \(\Delta_{K(S)}\), then we have
\[c=\frac{(1-\alpha)p^{\gamma_{2}}}{(1-\alpha)p^{\gamma_{2}}+(p-1)p^{\gamma_{1}}+ \alpha p^{\gamma_{2}}-1}\geq\frac{1-\theta}{p(p-1)}.\]
Moreover, if \(f\) is a separable fibration, then applying Proposition 4.5 we see that both \(\tau\) and \(\tau_{1}^{\prime}\) are purely inseparable by construction.
**Theorem 6.2**.: _Let \(f\colon X\to S\) be a separable fibration of relative dimension one between normal \(\mathbb{Q}\)-factorial quasi-projective varieties. Let \(\Delta\) be an effective \(\mathbb{Q}\)-divisor on \(X\). Assume that_
1. \((X_{K(S)},\Delta_{K(S)})\) _is lc; and_
2. \(K_{X}+\Delta\sim_{\mathbb{Q}}f^{*}D\) _for some_ \(\mathbb{Q}\)_-divisor_ \(D\) _on_ \(S\)_._
_Then there exist finite purely inseparable morphisms \(\tau_{1}\colon\bar{T}\to S\), \(\tau_{2}\colon\bar{T}^{\prime}\to\bar{T}\), an effective \(\mathbb{Q}\)-divisor \(E_{\bar{T}^{\prime}}\) on \(\bar{T}^{\prime}\) and rational numbers \(a,b,c\geq 0\) such that_
\[\tau_{2}^{*}\tau_{1}^{*}D\sim_{\mathbb{Q}}aK_{\bar{T}^{\prime}}+b\tau_{2}^{*}K_{\bar{T}}+c\tau_{2}^{*}\tau_{1}^{*}K_{S}+E_{\bar{T}^{\prime}}.\]
_Moreover, if \((X_{K(S)},\Delta_{K(S)})\) is klt, then \(c\geq c_{0}\) for some positive number \(c_{0}\) depending only on the maximal coefficient of the prime divisors in \(\Delta_{K(S)}\)._
Proof.: If the generic fiber of \(f\) is smooth then we may apply [25, Theorem 3.4], which says that there exists a finite purely inseparable morphism \(\tau\colon T\to S\) such that
\[\tau^{*}D\sim_{\mathbb{Q}}t\tau^{*}K_{S}+(1-t)(K_{T}+\Delta_{T})\]
for some rational number \(t\in[0,1]\) and some effective \(\mathbb{Q}\)-divisor \(\Delta_{T}\) on \(T\). Here we remark that when \((X_{K(S)},\Delta_{K(S)})\) is klt, the argument of [25, Theorem 3.4] actually shows that \(t\geq c_{0}>0\), where \(c_{0}\) depends on the maximal coefficient of the prime divisors in \(\Delta_{K(S)}\). We may set \(\bar{T}^{\prime}=\bar{T}=T\) to get our assertion as a special case.
Now assume that the generic fiber of \(f\) is not smooth. Since \(f\) is separable, if we set \(\tau_{1}=F_{S}\colon S_{1}=S^{\frac{1}{p}}\to S\), then \(X_{K(S_{1})}\) is integral but not smooth. We may apply Theorem 6.1 and obtain the assertion once we notice that \(\tau_{1}^{*}K_{S}-K_{S_{1}}\sim_{\mathbb{Q}}\tau_{1}^{*}\big{(}\frac{p-1}{p}K_{S}\big{)}\).
## 7. Canonical bundle formula for inseparable fibrations
In this section, we shall treat inseparable fibrations of relative dimension one. We work over an algebraically closed field \(k\) of characteristic \(p>0\). Throughout this section, we assume that singular varieties admit resolutions of singularities, in order to apply Proposition 2.9.
### A special base change
Let \(X,S\) be normal projective varieties over \(k\). Let \(f\colon X\to S\) be an inseparable fibration of relative dimension one such that the generic fiber \(X_{K(S)}\) has arithmetic genus \(\leq 1\). In the following, we shall treat the case when \(S\) is of m.A.d. Recall that \(\Omega^{1}_{S^{\frac{1}{p}}/S}\) is not necessarily globally generated. We construct a base change \(S_{1}\to S\) such that \(\Omega^{1}_{S_{1}/S}\) is generically globally generated, as follows.
Assume that \(S\) is of m.A.d. and denote by \(a_{S}\colon S\to A\) the Albanese morphism of \(S\). Let \(X_{1},S_{1}\) be the normalizations of \((X_{A^{\frac{1}{p}}})_{\mathrm{red}},(S_{A^{\frac{1}{p}}})_{\mathrm{red}}\) respectively. Note that the natural morphism \(f_{1}\colon X_{1}\to S_{1}\) is not necessarily a fibration; we denote by \(f_{1}^{\prime}\colon X_{1}\to S_{1}^{\prime}\) the fibration arising from the Stein factorization of \(f_{1}\). We have the following commutative diagram:

[commutative diagram omitted]
Note that since the reduced generic geometric fiber of \(f\colon X\to S\) is a rational curve, the composition \(X\to S\to A\) is the Albanese map of \(X\).
By the results of Section 3.2.1, we make the following important remark.
* \(X_{1}\) coincides with \((X_{S_{1}})_{\mathrm{red}}^{\nu}=(X_{A^{\frac{1}{p}}})_{\mathrm{red}}^{\nu}\), the sheaves \[\Omega_{A^{\frac{1}{p}}\to X^{\frac{1}{p}}}\subseteq\Omega_{S_{1}\to X^{ \frac{1}{p}}}\subseteq\Omega_{X_{1}\to X^{\frac{1}{p}}}\cong\pi^{\prime*} \Omega_{X_{1}/X}^{1}\] coincide with each other over a nonempty open subset of \(X^{\frac{1}{p}}\), and the morphisms \(X^{\frac{1}{p}}\to X_{1}\) and \(S^{\frac{1}{p}}\to S_{1}\) are induced by the foliations \((\Omega_{A^{\frac{1}{p}}\to X^{\frac{1}{p}}})^{\perp}\) and \((\Omega_{A^{\frac{1}{p}}\to S^{\frac{1}{p}}})^{\perp}\) respectively.
**Proposition 7.1**.: _Let the notation be as above._
1. _Over an open subset of_ \(S_{1}\)_,_ \(\Omega_{S_{1}/S}^{1}\) _is globally generated by_ \(a_{S_{1}}^{*}H^{0}(A^{\frac{1}{p}},\Omega_{A^{\frac{1}{p}}/A}^{1})\)_, and the natural map_ \(H^{0}(A^{\frac{1}{p}},\Omega_{A^{\frac{1}{p}}/A}^{1})\to H^{0}(S_{1},\Omega_{S_ {1}/S}^{1})\) _is injective._
2. _We have_ \(h^{0}(S_{1},\tau_{1}^{*}K_{S}-K_{S_{1}})\geq 1\)_, and if_ \(a_{S}\colon S\to A\) _is inseparable then the strict inequality holds._
3. _If_ \(X_{K(S_{1})}\) _is integral, then there exists an effective divisor_ \(E_{1}\) _and an_ \(f_{1}\)_-exceptional divisor_ \(N_{1}\) _on_ \(X_{1}\) _such that_ \[\pi_{1}^{*}K_{X}\sim_{\mathbb{Q}}K_{X_{1}}+E_{1}+f_{1}^{*}(\tau_{1}^{*}K_{S}-K _{S_{1}})+N_{1}.\]
4. _If_ \(X_{K(S_{1})}\) _is non-reduced, then the movable part of the linear system_ \(|\mathrm{det}(\Omega_{X_{1}/X}^{1})|\) _has nontrivial horizontal components over_ \(S_{1}\)_._
5. _If_ \(X_{K(S_{1})}\) _is normal, then_ \(f_{1}\colon X_{1}\to S_{1}\) _is a fibration. We may perform the base change_ \(S_{2}:=(S_{1}\times_{A^{\frac{1}{p}}}A^{\frac{1}{p^{2}}})_{\mathrm{red}}^{\nu}\to S_{1}\) _as above; by repeating this process we obtain a number_ \(n\) _such that_ \(X_{K(S_{n-1})}\) _is normal, but_ \(X_{K(S_{n})}\) _is not normal._
Proof.: (1) First we note that the natural homomorphism \(a_{S_{1}}^{*}\Omega_{A^{\frac{1}{p}}/A}^{1}\to\Omega_{S_{1}/S}^{1}\) is generically surjective. Thus over an open subset of \(S_{1}\), \(\Omega_{S_{1}/S}^{1}\) is globally generated by \(a_{S_{1}}^{*}H^{0}(A^{\frac{1}{p}},\Omega_{A^{\frac{1}{p}}/A}^{1})\). For the second assertion, by construction, the morphism \(X^{\frac{1}{p}}\to A^{\frac{1}{p}}\) is the Albanese morphism of \(X^{\frac{1}{p}}\), hence by [11] the natural map
\(H^{0}(A^{\frac{1}{p}},\Omega^{1}_{A^{\frac{1}{p}}/A})\to H^{0}(X^{\frac{1}{p}},\Omega^{1}_{X^{\frac{1}{p}}/X})\cong H^{0}(X^{\frac{1}{p}},\Omega^{1}_{X^{\frac{1}{p}}})\) is injective. Since this map factors through \(H^{0}(A^{\frac{1}{p}},\Omega^{1}_{A^{\frac{1}{p}}/A})\to H^{0}(S_{1},\Omega^{1}_{S_{1}/S})\), the latter map is injective too.
(2) By assertion (1), we have \(\det\Omega^{1}_{S_{1}/S}\succeq 0\), and if \(a_{S}\) is inseparable then there exists a big open subset \(U_{1}\subseteq S_{1}\) such that \(\Omega^{1}_{S_{1}/S}\) is locally free over \(U_{1}\), and
\[h^{0}(U_{1},\Omega^{1}_{S_{1}/S}|_{U_{1}})\geq h^{0}(A^{\frac{1}{p}},\Omega^{1 }_{A^{\frac{1}{p}}/A})=\dim A\geq\dim S>\text{rank }\Omega^{1}_{S_{1}/S}\]
which implies \(h^{0}(S_{1},\det\Omega^{1}_{S_{1}/S})>1\) by Lemma 2.1. We conclude the assertion from the formula \(\tau_{1}^{*}K_{S}=K_{S_{1}}+(p-1)\det\Omega^{1}_{S_{1}/S}\).
Assertion (3) is a direct consequence of Proposition 3.4.
(4) Note that \(X_{1}\) coincides with \((X_{S_{1}})_{\text{red}}^{\nu}\). By assertion (1), over an open subset \(U\) of \(S_{1}\), \(\Omega^{1}_{S_{1}/S}\) is globally generated by \(\Omega:=a_{S_{1}}^{*}H^{0}(A^{\frac{1}{p}},\Omega^{1}_{A^{\frac{1}{p}}/A}) \subseteq H^{0}(S_{1},\Omega^{1}_{S_{1}/S})\). Shrinking \(U\), we may assume \(\Omega^{1}_{U/S}\) is locally free. Then we can apply Proposition 3.3.
(5) If \(X_{K(S_{1})}\) is normal, then the inclusion \(\mathcal{O}_{S_{1}}=\tau_{1}^{*}f_{*}\mathcal{O}_{X}\subseteq f_{1*}\mathcal{ O}_{X_{1}}\) is an isomorphism over the generic point of \(S_{1}\). In turn, since \(S_{1}\) is normal, we conclude that \(f_{1}\colon X_{1}\to S_{1}\) is a fibration.
Denote by \(B\) the Albanese image \(a_{X}(X)\). If \(K(S)/K(B)\) is inseparable, then \([K(S_{1}):K(B^{\frac{1}{p}})]<[K(S):K(B)]\). Eventually we reach some \(m\) such that \(S_{m}\to B^{\frac{1}{p^{m}}}\) is separable, which implies that \(S_{m+1}\cong S_{m}^{\frac{1}{p}}\). Since \(X_{K(S)}\) is not geometrically normal, there must exist some \(n\) such that \(X_{K(S_{n})}\) is not normal.
### Canonical bundle formula for inseparable fibrations
Our first result for inseparable fibrations is the following, under the condition that the Albanese morphism \(a_{S}\colon S\to A\) is separable.
**Theorem 7.2**.: _Let \(X\) be a normal \(\mathbb{Q}\)-factorial projective variety and \(S\) be a normal projective variety. Let \(\Delta\) be an effective \(\mathbb{Q}\)-divisor on \(X\). Let \(f\colon X\to S\) be an inseparable fibration of relative dimension one. Assume that_
* (C1) \((X_{K(S)},\Delta_{K(S)})\) _is lc;_
* (C2) _there exists a big open subset_ \(S^{\circ}\) _contained in the regular locus_ \(S^{\text{reg}}\) _of_ \(S\) _and a_ \(\mathbb{Q}\)_-divisor_ \(D^{\circ}\) _on_ \(S^{\circ}\) _such that_ \((K_{X}+\Delta)|_{f^{-1}(S^{\circ})}\sim_{\mathbb{Q}}f^{*}D^{\circ}\)_;_
* (C3) \(S\) _is of m.A.d., and the Albanese morphism_ \(a_{S}\colon S\to A\) _is separable._
_Then \(D\succeq_{\mathbb{Q}}\frac{1}{2p}K_{S}\), where \(D:=\overline{D^{\circ}}\) is the closure divisor of \(D^{\circ}\) in \(S\). In particular, \(\kappa(S,D)\geq\kappa(S)\)._
Proof.: To prove the assertion, we may restrict ourselves to \(S^{\circ}\). So in the following, we assume \(S=S^{\circ}\). By the assumption (C3) we have \(S_{1}=S^{\frac{1}{p}}\). Since the fibration \(X^{\frac{1}{p}}\to S^{\frac{1}{p}}\) factors through \(f_{1}\colon X_{1}\to S_{1}\), the morphism \(f_{1}\) is also a fibration. By Proposition 4.7, the generic fiber of \(X_{1}\to S_{1}\) has arithmetic genus zero, and
\[\pi_{1}^{*}K_{X}\sim_{\mathbb{Q}}K_{X_{1}}+(p-1)\det(\Omega^{1}_{X_{1}/X}).\]
By Proposition 7.1 (4), the movable part of the linear system \(|\text{det}(\Omega^{1}_{X_{1}/X})|\) has non-trivial horizontal components. We may write that
\[f_{1}^{*}\tau_{1}^{*}D\sim_{\mathbb{Q}}\pi_{1}^{*}(K_{X}+\Delta)\sim_{\mathbb{Q}}K_{X_{1}}+(p-1)\det(\Omega^{1}_{X_{1}/X})+\pi_{1}^{*}\Delta.\]
Applying Theorem 5.1, we have \(\tau_{1}^{*}D\succeq_{\mathbb{Q}}\frac{1}{2}K_{S_{1}}\). In turn, by \(\tau_{1}^{*}K_{S}\sim_{\mathbb{Q}}pK_{S_{1}}\) we conclude that \(D\succeq_{\mathbb{Q}}\frac{1}{2p}K_{S}\).
The second theorem treats the case when the Albanese morphism \(a_{S}\colon S\to A\) is inseparable.
**Theorem 7.3**.: _Let the notation be as in Theorem 7.2. Assume \((\mathrm{C1},\mathrm{C2})\) and the following condition_
* \(S\) _is of m.A.d., and the Albanese morphism_ \(a_{S}\colon S\to A\) _is inseparable._
_Then \(\kappa(S,D)\geq 0\) where \(D:=\overline{D^{\circ}}\). More precisely,_
* _if_ \(X_{K(S_{1})}\) _is integral, then_ \(\tau_{1}^{*}D\succeq_{\mathbb{Q}}\tau_{1}^{*}K_{S}-K_{S_{1}}\)_, hence_ \(\kappa(S,D)\geq 1\)_;_
* _if_ \(X_{K(S_{1})}\) _is non-reduced, then_ \(\tau_{1}^{\prime*}\tau_{1}^{*}D\succeq_{\mathbb{Q}}\frac{1}{2}K_{S_{1}^{ \prime}}\)_, and if moreover_ \(\kappa(S,D)=0\) _then_ \(S_{1}=S_{1}^{\prime}\) _is birational to an abelian variety._
_In particular, if \(\dim S=2\) then \(\kappa(S,D)\geq 1\)._
Proof.: We may again restrict ourselves to \(S^{\circ}\) and assume \(S=S^{\circ}\). Let \(X_{1}\), \(S_{1}\), \(f_{1}\) be as in Section 7.1.
If \(X_{K(S_{1})}\) is normal, by Proposition 7.1 (3) there exist divisors \(E_{1}\geq 0\) and \(N_{1}\) (exceptional over \(S_{1}\)) fitting into the following equation
\[\pi_{1}^{*}(K_{X}+\Delta)\sim_{\mathbb{Q}}K_{X_{1}}+E_{1}+\pi_{1}^{*}\Delta+f_{ 1}^{*}(\tau_{1}^{*}K_{S}-K_{S_{1}})+N_{1}. \tag{17}\]
Let \(\Delta_{1}=E_{1}+\pi_{1}^{*}\Delta\) and \(D_{1}=\tau_{1}^{*}D+K_{S_{1}}-\tau_{1}^{*}K_{S}\), then
\[K_{X_{1}}+\Delta_{1}+N_{1}\sim_{\mathbb{Q}}f_{1}^{*}D_{1}.\]
Repeating this process, by Proposition 7.1 (5), we obtain a minimal \(n\) such that \((X_{n-1})_{K(S_{n})}\) is not normal. We have the following commutative diagram:

[commutative diagram omitted]
where \(f_{n}^{\prime}\) is a fibration.
Note that \((\Delta_{n-1})_{K(S_{n-1})}=\Delta_{K(S_{n-1})}\). We claim that \((X_{K(S_{n-1})},\Delta_{K(S_{n-1})})\) is lc. By construction, for \(1\leq i\leq n-1\), each \(X_{K(S_{i})}=X_{K(S)}\otimes_{K(S)}K(S_{i})\) is not geometrically reduced. The claim is trivial if \(X_{K(S_{n-1})}\) has arithmetic genus one, because then \(\Delta_{K(S_{n-1})}=0\). We only need to consider the case where \(p=2\) and \(X_{K(S_{n-1})}\) is a non-smooth conic over \(K(S_{n-1})\), on which each prime divisor has degree \(\geq 2\). Since \(\deg_{K(S_{n-1})}K_{X_{K(S_{n-1})}}=-2\) and \(K_{X_{K(S_{n-1})}}+\Delta_{K(S_{n-1})}\equiv 0\), we see that if \((X_{K(S_{n-1})},\Delta_{K(S_{n-1})})\) is not klt, then it is lc and \(\Delta_{K(S_{n-1})}\) is a prime divisor of degree two.
Case \((1)\): \((X_{n-1})_{K(S_{n})}\) is reduced but not normal. By Proposition 7.1 (3) we have
\[K_{X_{n}}+E_{n}+\pi_{n}^{*}\Delta_{n-1}+N_{n}\sim_{\mathbb{Q}}f_{n}^{\prime*} \tau_{n}^{\prime*}(\tau_{n}^{*}D_{n-1}+K_{S_{n}}-\tau_{n}^{*}K_{S_{n-1}}). \tag{18}\]
If necessary replacing \(S\) with a big open subset, we may assume \(N_{n}=0\). Applying Theorem 6.2, we see that there exist finite morphisms \(\tau^{\prime}\colon\bar{T}\to S_{n}^{\prime}\), \(\tau^{\prime\prime}\colon\bar{T}^{\prime}\to\bar{T}\), an
effective \(\mathbb{Q}\)-divisor \(E_{\bar{T}^{\prime}}\) and rational numbers \(a,b,c\geq 0\) such that
\[(\tau_{n}^{*}D_{n-1}+K_{S_{n}}-\tau_{n}^{*}K_{S_{n-1}})|_{\bar{T}^{\prime}}\sim_ {\mathbb{Q}}aK_{\bar{T}^{\prime}}+b\tau^{\prime\prime*}K_{\bar{T}}+c\tau^{ \prime\prime*}\tau^{\prime*}K_{S_{n}^{\prime}}+E_{\bar{T}^{\prime}}. \tag{19}\]
Since \(K_{\bar{T}^{\prime}},K_{\bar{T}},K_{S_{n}^{\prime}}\succeq_{\mathbb{Q}}0\), we see that
\[\big{(}D_{n}:=\tau_{n}^{*}D_{n-1}-(\tau_{n}^{*}K_{S_{n-1}}-K_{S_{n}})\big{)} \big{|}_{\bar{T}^{\prime}}\succeq_{\mathbb{Q}}0.\]
By Covering Theorem 2.3 we have
\[\kappa(S_{n-1},D_{n-1})=\kappa(S_{n},\tau_{n}^{*}D_{n-1})\geq\kappa(S_{n},\tau_ {n}^{*}K_{S_{n-1}}-K_{S_{n}}).\]
Remark that
* if \(S_{n-1}\to A^{\frac{1}{p^{n-1}}}\) is separable then \(S_{n}=S_{n-1}^{\frac{1}{p}}\), thus \(\kappa(S_{n},\tau_{n}^{*}K_{S_{n-1}}-K_{S_{n}})=\kappa(S_{n-1},K_{S_{n-1}})\geq 0\);
* otherwise by Proposition 7.1 (2), we have \(\kappa(S_{n},\tau_{n}^{*}K_{S_{n-1}}-K_{S_{n}})\geq 1\).
In particular each \(D_{i}\succeq_{\mathbb{Q}}0\), \(i=1,\ldots,n-1\), and inductively we obtain that
\[\tau_{1}^{*}D=D_{1}+(\tau_{1}^{*}K_{S}-K_{S_{1}})\succeq_{\mathbb{Q}}\tau_{1}^ {*}K_{S}-K_{S_{1}}.\]
Then by Covering Theorem and the assumption that \(a_{S}\) is inseparable, we have \(\kappa(S,D)\geq\kappa(S_{1},\tau_{1}^{*}K_{S}-K_{S_{1}})\geq 1\).
Case (2): \((X_{n-1})_{K(S_{n})}\) is non-reduced. In this case \(|\text{det}(\Omega^{1}_{X_{n}/X_{n-1}})|\) has nontrivial horizontal movable part by Proposition 7.1 (4). We have
\[f_{n}^{*}\tau_{n}^{*}D_{n-1}\sim_{\mathbb{Q}}\pi_{n}^{*}(K_{X_{n-1}}+\Delta_{n -1})\sim_{\mathbb{Q}}K_{X_{n}}+(p-1)\,\text{det}(\Omega^{1}_{X_{n}/X_{n-1}})+ \pi_{n}^{*}\Delta_{n-1}.\]
Applying Theorem 5.1 to the fibration \(f_{n}^{\prime}\colon X_{n}\to S_{n}^{\prime}\), we see that
\[\tau_{n}^{\prime*}\tau_{n}^{*}D_{n-1}\succeq_{\mathbb{Q}}\frac{1}{2}K_{S_{n}^ {\prime}},\]
thus \(\kappa(S_{n}^{\prime},\tau_{n}^{\prime*}\tau_{n}^{*}D_{n-1})\geq\kappa(S_{n}^ {\prime},K_{S_{n}^{\prime}})\geq 0\). Therefore, \(D_{n-1}\succeq_{\mathbb{Q}}0\).
If \(n\geq 2\), using \(D_{i}=\tau_{i}^{*}D_{i-1}-(\tau_{i}^{*}K_{S_{i-1}}-K_{S_{i}})\) and \(\tau_{i}^{*}K_{S_{i-1}}-K_{S_{i}}\succeq_{\mathbb{Q}}0\)\((1<i\leq n)\) inductively we prove that \(D_{1}\succeq_{\mathbb{Q}}0\). It follows that
\[\tau_{1}^{*}D=D_{1}+(\tau_{1}^{*}K_{S}-K_{S_{1}})\succeq_{\mathbb{Q}}\tau_{1}^ {*}K_{S}-K_{S_{1}},\]
thus \(\kappa(S,D)\geq 1\).
If \(n=1\), since \(\tau_{1}^{\prime*}\tau_{1}^{*}D\succeq_{\mathbb{Q}}\frac{1}{2}K_{S_{1}^{\prime}}\), applying Covering Theorem 2.3 we obtain
\[\kappa(S,D)=\kappa(S_{1}^{\prime},\tau_{1}^{\prime*}\tau_{1}^{*}D)\geq\kappa(S _{1}^{\prime},K_{S_{1}^{\prime}})\geq 0.\]
If moreover \(\kappa(S,D)=0\), then \(S_{1}^{\prime}\) is birational to an abelian variety by Proposition 2.9. Since \(\mathrm{Alb}(X^{\frac{1}{p}})=A^{\frac{1}{p}}\) and the Albanese morphism of \(X^{\frac{1}{p}}\) factors through \(X_{1}\to S_{1}^{\prime}\), we see that \(S_{1}^{\prime}\to A^{\frac{1}{p}}\) is a birational morphism, and in turn that \(S_{1}=S_{1}^{\prime}\).
Finally, if \(\dim S=2\), then \(\deg\tau_{1}\leq p\), thus \(X_{K(S_{1})}\) is reduced ([12, Lemma 1.3]). We fall into case (1), and it follows that \(\kappa(S,D)\geq 1\).
## 8. Albanese morphism of varieties with nef anticanonical divisor
In this section, we apply the canonical bundle formulas obtained in the previous sections to study varieties with nef anticanonical divisor. We work over an algebraically closed field \(k\) of characteristic \(p>0\) and assume that a singular variety admits a resolution of singularities.
**Theorem 8.1**.: _Let \(X\) be a normal projective \(\mathbb{Q}\)-factorial variety and \(\Delta\) an effective \(\mathbb{Q}\)-divisor on \(X\) such that \(-(K_{X}+\Delta)\) is nef. Let \(X\stackrel{{ f}}{{\to}}S\stackrel{{ a_{S}}}{{\to}}A_ {X}\) be the Stein factorization of the Albanese morphism of \(X\). Suppose that the fibration \(f\) has relative dimension one and that \((X_{K(S)},\Delta_{K(S)})\) is klt. Then \(S\) is an abelian variety._
### A preliminary lemma
**Lemma 8.2**.: _Let \(X\) be a normal projective variety equipped with two fibrations_
* \(f\colon X\to A\)_, of relative dimension one onto an abelian variety; and_
* \(g\colon X\to\mathbb{P}^{1}\)_, such that each fiber is dominant over_ \(A\)_._
_Let \(\mathcal{F}\) be a foliation on \(X\) of rank \(r\) and \(X^{\prime}:=X/\mathcal{F}\). Consider the following commutative diagram_

[commutative diagram omitted]
_where \(X^{\prime}\to S^{\prime}\to A^{p}\) is the Stein factorization of \(X^{\prime}\to A^{p}\). Assume that there is a dense open subset \(V\subseteq\mathbb{P}^{1}\) such that for each \(t\in V\), the fiber \(G_{t}\) of \(g\) is isomorphic to an abelian variety, \(\mathcal{F}|_{G_{t}}\) is locally free in codimension one and \(\det(\mathcal{F}|_{G_{t}})\equiv 0\). Then \(S^{\prime}\) is an abelian variety._
Proof.: Let \(\mathcal{G}\subseteq\mathcal{T}_{A}\) be the foliation corresponding to \(A\to S^{\prime}\), namely, \(S^{\prime}=A/\mathcal{G}\). We only need to prove that \(\mathcal{G}\) arises from a \(p\)-Lie sub-algebra of \(H^{0}(A,\mathcal{T}_{A})\).
There is a natural \(\mathcal{O}_{X}\)-linear homomorphism \(\eta\colon\mathcal{F}\subseteq\mathcal{T}_{X}\to f^{*}\mathcal{T}_{A}\), which is induced by taking the dual of \(f^{*}\Omega^{1}_{A}\to\Omega^{1}_{X}\). By construction, over some open subset of \(X\), we have \(\eta(\mathcal{F})\subseteq f^{*}\mathcal{G}\). Observe that, identifying \(\mathcal{O}_{A}\) with a subring of \(\mathcal{O}_{X}\), if \(\mathcal{G}^{\prime}\) is a foliation on \(A\) such that \(f^{*}\mathcal{G}^{\prime}\supseteq\eta(\mathcal{F})\) then \(\operatorname{Ann}(\mathcal{G}^{\prime})\subseteq\operatorname{Ann}(\mathcal{F})\), which means there is a morphism \(X^{\prime}=X/\mathcal{F}\to A/\mathcal{G}^{\prime}\). Since the intersection of foliations on a variety is still a foliation (if necessary, taking saturation), we see that \(\mathcal{G}\) is the minimal foliation on \(A\) whose pullback contains \(\eta(\mathcal{F})\). We may characterize \(\mathcal{G}\) as the minimal foliation such that \(K(A/\mathcal{G})\subseteq K(X/\mathcal{F})\) as function fields.
Claim. _For each \(t\in V\), there exists a big open subset \(G_{t}^{\circ}\) of \(G_{t}\) such that the image of \(\eta_{t}\colon\mathcal{F}|_{G_{t}^{\circ}}\to(f|_{G_{t}^{\circ}})^{*}\mathcal{T} _{A}\) is a free sheaf._
Proof of the claim.: Since \(X\) is normal and a general fiber of \(g\) is smooth, after shrinking \(V\) if necessary, for each \(t\in V\) there exists a big open subset \(G_{t}^{\circ}\subset G_{t}\) such that \(X\) is regular along \(G_{t}^{\circ}\) and \(\mathcal{F}|_{G_{t}^{\circ}}\) is locally free. Note that the normal bundle \(N_{G_{t}^{\circ}/X}\cong\mathcal{O}_{G_{t}^{\circ}}\), since \(G_{t}\) is a fiber of \(g\). By assumption \(\mathcal{T}_{G_{t}^{\circ}}\cong\bigoplus^{n}\mathcal{O}_{G_{t}^{\circ}}\), where \(n:=\dim A\). Then we have the following commutative diagram of \(\mathcal{O}_{G_{t}^{\circ}}\)-linear homomorphisms
\[\begin{array}{ccccccccc}0&\longrightarrow&\mathcal{F}|_{G_{t}^{\circ}}\cap\mathcal{T}_{G_{t}^{\circ}}&\longrightarrow&\mathcal{F}|_{G_{t}^{\circ}}&\stackrel{\theta}{\longrightarrow}&\operatorname{Im}\theta&\longrightarrow&0\\ &&\downarrow&&\downarrow&&\downarrow&&\\ 0&\longrightarrow&\mathcal{T}_{G_{t}^{\circ}}&\longrightarrow&\mathcal{T}_{X}|_{G_{t}^{\circ}}&\longrightarrow&N_{G_{t}^{\circ}/X}&\longrightarrow&0\end{array}\]
where the two horizontal sequences are exact and \(\theta\) is induced by the composition \(\mathcal{F}|_{G_{t}^{\circ}}\to\mathcal{T}_{X}|_{G_{t}^{\circ}}\to N_{G_{t}^{\circ}/X}\). Note that, as subsheaves of free sheaves, the determinants satisfy \(\det(\mathcal{F}|_{G_{t}^{\circ}}\cap\mathcal{T}_{G_{t}^{\circ}})\preceq 0\) and \(\det(\operatorname{Im}\theta)\preceq 0\). Therefore, the assumption \(\det(\mathcal{F}|_{G_{t}^{\circ}})\equiv 0\) implies that \(\det(\mathcal{F}|_{G_{t}^{\circ}}\cap\mathcal{T}_{G_{t}^{\circ}})\equiv 0\) and \(\det(\operatorname{Im}\theta)\equiv 0\). By Lemma 8.3 below, both \(\mathcal{F}|_{G_{t}^{\circ}}\cap\mathcal{T}_{G_{t}^{\circ}}\) and \(\operatorname{Im}\theta\) are free sheaves. That is to say, \(\mathcal{F}|_{G_{t}^{\circ}}\) is an extension of free sheaves.
Note that \(\mathcal{K}:=\ker\eta_{t}\) is a subsheaf of an extension of free sheaves, thus \(\det\mathcal{K}\preceq 0\). Since \((\mathcal{F}|_{G_{t}^{\circ}})/\mathcal{K}\cong\operatorname{Im}\eta_{t} \subseteq\bigoplus^{n}\mathcal{O}_{G_{t}^{\circ}}\), we have \(\det((\mathcal{F}|_{G_{t}^{\circ}})/\mathcal{K})\preceq 0\). Therefore by \(\det(\mathcal{F}|_{G_{t}^{\circ}})\equiv 0\), we conclude that \(\det\mathcal{K}\equiv\det(\operatorname{Im}\eta_{t})\equiv 0\). Applying Lemma 8.3 again, if necessary replacing \(G_{t}^{\circ}\) with a big open subset, we see that \(\operatorname{Im}\eta_{t}\) is a free sheaf.
Granted the claim, there exist \(\alpha_{t,1},\dots,\alpha_{t,k_{t}}\in H^{0}(G_{t}^{\circ},\operatorname{Im}\eta_{t})\) such that \(\operatorname{Im}\eta_{t}=\bigoplus_{i=1}^{k_{t}}\mathcal{O}_{G_{t}^{\circ}}\cdot\alpha_{t,i}\). Since \(G_{t}^{\circ}\) is a big open subset of \(G_{t}\), pullback via \(f\colon G_{t}^{\circ}\to A\) induces a natural isomorphism \(H^{0}(A,\mathcal{T}_{A})\cong H^{0}(G_{t}^{\circ},f^{*}\mathcal{T}_{A}|_{G_{t}^{\circ}})\) (recall \(\mathcal{T}_{A}\cong\mathcal{O}_{A}^{\oplus n}\)). Let \(\beta_{t,i}\in H^{0}(A,\mathcal{T}_{A})\) be the element corresponding to \(\alpha_{t,i}\). Let \(\mathcal{G}_{t}\) be the subsheaf of \(\mathcal{T}_{A}\) generated by \(\{\beta_{t,i}\}\), and let \(\mathcal{G}^{\prime}:=\sum_{t\in V}\mathcal{G}_{t}\). It follows that \(\mathcal{G}^{\prime}\) is globally generated by \(\Lambda^{\prime}=H^{0}(A,\mathcal{G}^{\prime})\subseteq H^{0}(A,\mathcal{T}_{A})\). We caution that \(\Lambda^{\prime}\) is not necessarily a \(p\)-Lie sub-algebra of \(H^{0}(A,\mathcal{T}_{A})\), because it is not necessarily \(p\)-closed; in other words, \(\mathcal{G}^{\prime}\) is not necessarily a foliation. Let \(\bar{\Lambda}\) be the \(p\)-Lie sub-algebra of \(H^{0}(A,\mathcal{T}_{A})\) generated by \(\Lambda^{\prime}\), which corresponds to a smooth foliation \(\overline{\mathcal{G}}\) on \(A\). Note that \(\overline{\mathcal{G}}\) is the minimal foliation containing \(\mathcal{G}^{\prime}\), and \(A/\overline{\mathcal{G}}\) is an abelian variety.
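Implicit here are the following standard facts about abelian varieties, which we record as a sketch: \(\mathcal{T}_{A}\cong\mathcal{O}_{A}\otimes_{k}\mathfrak{g}\) with \(\mathfrak{g}:=H^{0}(A,\mathcal{T}_{A})=\operatorname{Lie}(A)\), and a \(p\)-Lie subalgebra \(\bar{\Lambda}\subseteq\mathfrak{g}\), i.e. a \(k\)-subspace closed under the bracket and the \(p\)-th power operation \(\partial\mapsto\partial^{[p]}\), corresponds to a height-one infinitesimal subgroup scheme \(\alpha_{\bar{\Lambda}}\subseteq A\). The associated foliation is the free subsheaf
\[\overline{\mathcal{G}}=\mathcal{O}_{A}\otimes_{k}\bar{\Lambda}\subseteq\mathcal{T}_{A},\qquad A/\overline{\mathcal{G}}=A/\alpha_{\bar{\Lambda}},\]
and the quotient of an abelian variety by a finite subgroup scheme is again an abelian variety.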
To finish the proof, we are left to verify \(\overline{\mathcal{G}}=\mathcal{G}\). On the one hand, for \(t\in V\), by construction \(\mathcal{G}_{t}\) is the smallest subsheaf of \(\mathcal{T}_{A}\) such that \(f^{*}\mathcal{G}_{t}|_{G_{t}}\supseteq\operatorname{Im}\eta_{t}\); since \(f^{*}\mathcal{G}|_{G_{t}}\supseteq\operatorname{Im}\eta_{t}\) as well, we get \(\mathcal{G}_{t}\subseteq\mathcal{G}\). Hence \(\mathcal{G}^{\prime}\subseteq\mathcal{G}\), which implies \(\overline{\mathcal{G}}\subseteq\mathcal{G}\). On the other hand, over a nonempty open subset \(U\subseteq g^{-1}V\) we have \(\eta(\mathcal{F})|_{U}\subseteq f^{*}\overline{\mathcal{G}}|_{U}\), thus \(K(A/\overline{\mathcal{G}})\subseteq K(X/\mathcal{F})\); but \(\mathcal{G}\) is the minimal foliation with this property, therefore \(\mathcal{G}\subseteq\overline{\mathcal{G}}\). To summarize, \(\overline{\mathcal{G}}=\mathcal{G}\), and the proof is complete.
**Lemma 8.3**.: _Let \(X\) be a normal projective variety and \(\mathcal{F}\) a coherent subsheaf of \(\bigoplus^{n}\mathcal{O}_{X}\). Assume that \(\det\mathcal{F}\equiv 0\). Then there exists a big open subset \(X^{\circ}\subset X\) such that \(\mathcal{F}|_{X^{\circ}}\cong\bigoplus^{r}\mathcal{O}_{X^{\circ}}\)._
Proof.: We may assume \(\mathcal{F}\neq 0\). There exists a big open subset \(X^{\circ}\) such that \(\mathcal{F}|_{X^{\circ}}\) is locally free. In the following argument we only use the condition that \(H^{0}(X^{\circ},\mathcal{O}_{X^{\circ}})\cong k\), so by replacing \(X\) with \(X^{\circ}\) we may assume that \(X\) is regular and \(\mathcal{F}\) is locally free.
We prove the lemma by induction on \(n\). The case \(n=1\) is trivial: \(\mathcal{F}\) can then be regarded as an ideal sheaf of \(\mathcal{O}_{X}\), so \(\mathcal{F}\cong\mathcal{O}_{X}(-D)\) for some effective divisor \(D\), and \(\det\mathcal{F}\equiv 0\) forces \(D=0\). Assume the assertion holds for \(n\leq l\) where
\(l>0\). We prove it for \(n=l+1\). Write \(\bigoplus^{l+1}\mathcal{O}_{X}=W\oplus\mathcal{O}_{X}\), where \(W:=\bigoplus^{l}\mathcal{O}_{X}\) is the direct sum of the first \(l\) summands. We have the following commutative diagram
\[\mathcal{F}\hookrightarrow\bigoplus^{l+1}\mathcal{O}_{X}=W\oplus\mathcal{O}_{X}\stackrel{\pi}{\longrightarrow}W\stackrel{\theta}{\dashrightarrow}\mathcal{F}\cap W,\]
where \(\pi\) is the projection onto \(W\). We shall construct a homomorphism \(\theta\) in the above diagram, which will induce a splitting \(\mathcal{F}\to\mathcal{F}\cap W\).
If \(\mathcal{F}\subseteq W\), then we are done by the induction hypothesis. We assume \(\mathcal{F}\nsubseteq W\). Note that \(\det(\mathcal{F}\cap W)\preceq 0\) and \(\det((\mathcal{F}+W)/W)\preceq 0\). By the assumption \(\det\mathcal{F}\equiv 0\), we conclude \(\det(\mathcal{F}\cap W)\equiv\det((\mathcal{F}+W)/W)\equiv 0\). By the induction hypothesis we have \((\mathcal{F}+W)/W\cong\mathcal{O}_{X}\) and \(\mathcal{F}\cap W\cong\bigoplus^{r}\mathcal{O}_{X}\) for some \(r\). Regard \(H^{0}(X,\mathcal{F}\cap W)\cong k^{r}\) as a \(k\)-linear subspace of \(H^{0}(X,W)\cong k^{l}\). Take a splitting \(\theta_{k}\colon H^{0}(X,W)\to H^{0}(X,\mathcal{F}\cap W)\), which determines uniquely a splitting \(\theta\colon W\to\mathcal{F}\cap W\) of the inclusion \(\mathcal{F}\cap W\hookrightarrow W\). Thus the composition \(\mathcal{F}\hookrightarrow\bigoplus^{l+1}\mathcal{O}_{X}\stackrel{\pi}{\to}W\stackrel{\theta}{\to}\mathcal{F}\cap W\) splits the inclusion \(\mathcal{F}\cap W\hookrightarrow\mathcal{F}\). As a consequence, \(\mathcal{F}\cong(\mathcal{F}\cap W)\oplus(\mathcal{F}+W)/W\cong\bigoplus^{r}\mathcal{O}_{X}\oplus\mathcal{O}_{X}\), which completes the proof.
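To illustrate the induction step (an informal example, not used elsewhere): take \(n=2\) and \(\mathcal{F}\subseteq\mathcal{O}_{X}^{\oplus 2}\) of rank one with \(\det\mathcal{F}\equiv 0\). If \(\mathcal{F}\subseteq W=\mathcal{O}_{X}\), the base case applies directly. Otherwise \(\mathcal{F}\cap W=0\), so \(\mathcal{F}\cong(\mathcal{F}+W)/W\cong\mathcal{O}_{X}\) is generated by a single global section
\[s=(a,b)\in H^{0}(X,\mathcal{O}_{X}^{\oplus 2})\cong k^{2};\]
that is, a subsheaf of \(\mathcal{O}_{X}^{\oplus 2}\) with numerically trivial determinant is spanned by constant vectors.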
### Proof of Theorem 8.1
We use the notation from Section 7.1 and recall the following commutative diagram
\[\begin{array}{ccccc}X_{1}&\stackrel{\pi}{\longrightarrow}&X&&\\ {\scriptstyle f_{1}}\downarrow&&\downarrow{\scriptstyle f}&&\\ S_{1}&\stackrel{\tau_{1}}{\longrightarrow}&S&\stackrel{a_{S}}{\longrightarrow}&A\end{array}\]
First, in the following lemma, we deal with the cases in which a separability condition holds.
**Lemma 8.4**.: _Under the assumptions of Theorem 8.1, \(S\) is an abelian variety unless the following conditions hold simultaneously:_
1. _both the morphisms_ \(f\) _and_ \(a_{S}\) _are inseparable;_
2. \(X_{K(S_{1})}\) _is non-reduced; and_
3. \(S_{1}\) _is an abelian variety._
Proof.: We treat the following two cases separately:
1. \(-(K_{X}+\Delta)|_{X_{K(S)}}\) is ample,
2. \(-(K_{X}+\Delta)|_{X_{K(S)}}\sim_{\mathbb{Q}}0\).
Case \((1)\): \(-(K_{X}+\Delta)|_{X_{K(S)}}\) is ample. Let \(H\) be an ample divisor on \(S\) and \(0<\epsilon\ll 1\) a rational number. Then \(-(K_{X}+\Delta)+\epsilon f^{*}H\) is nef and big. By Lemma 2.5, \(-(K_{X}+\Delta)+\epsilon f^{*}H\sim_{\mathbb{Q}}\Delta_{\epsilon}\) for some effective \(\mathbb{Q}\)-divisor \(\Delta_{\epsilon}\) with sufficiently small coefficients, such that \((X_{K(S)},(\Delta+\Delta_{\epsilon})_{K(S)})\) is klt and the coefficients of \((\Delta+\Delta_{\epsilon})_{K(S)}\) are bounded above by some \(\theta<1\). By construction we have
\[K_{X}+\Delta+\Delta_{\epsilon}\sim_{\mathbb{Q}}\epsilon f^{*}H.\]
Case (1.1): The fibration \(f\) is separable. We can apply Theorem 6.2 to obtain finite purely inseparable morphisms \(\tau_{1}\colon\bar{T}\to S\) and \(\tau_{2}\colon\bar{T}^{\prime}\to\bar{T}\), and an effective \(\mathbb{Q}\)-divisor \(E_{\bar{T}^{\prime}}\) on \(\bar{T}^{\prime}\), such that
\[\epsilon\tau_{2}^{*}\tau_{1}^{*}H\sim_{\mathbb{Q}}aK_{\bar{T}^{\prime}}+b\tau_{ 2}^{*}K_{\bar{T}}+c\tau_{2}^{*}\tau_{1}^{*}K_{S}+E_{\bar{T}^{\prime}},\]
with \(a,b\geq 0\), and \(c\geq\frac{1-\theta}{p(p-1)}>0\). Letting \(\epsilon\to 0\), we obtain \(K_{S}\equiv 0\), thus \(S\) is an abelian variety by Proposition 2.9.
Case (1.2): The fibration \(f\) is inseparable and \(a_{S}\) is separable. By Theorem 7.2 we have \(\epsilon H\succeq_{\mathbb{Q}}\frac{1}{2p}K_{S}\). Letting \(\epsilon\to 0\), we obtain \(K_{S}\equiv 0\), and thus \(S\) is an abelian variety.
Case (1.3): Both \(f\) and \(a_{S}\) are inseparable.
We first show that \(X_{K(S_{1})}\) is non-reduced. Otherwise, by Theorem 7.3 we have \(\epsilon\tau_{1}^{*}H-(\tau_{1}^{*}K_{S}-K_{S_{1}})\succeq_{\mathbb{Q}}0\). Letting \(\epsilon\to 0\), we obtain that \(-(\tau_{1}^{*}K_{S}-K_{S_{1}})\) is pseudo-effective. But this contradicts \(\kappa(S_{1},\tau_{1}^{*}K_{S}-K_{S_{1}})\geq 1\) (Proposition 7.1 (2)).
Now assume that \(X_{K(S_{1})}\) is non-reduced. Let \(X_{1}\) be the normalization of \((X_{K(S_{1})})_{\text{red}}\) and let \(S_{1}^{\prime}\) denote the normalization of \(S_{1}\) in \(X_{1}\). Then by Theorem 7.3 we have \(\epsilon\tau_{1}^{\prime*}\tau_{1}^{*}H\succeq_{\mathbb{Q}}\frac{1}{2}K_{S_{1}^{\prime}}\). Letting \(\epsilon\to 0\), we see that \(K_{S_{1}^{\prime}}\equiv 0\), and thus \(S_{1}^{\prime}\) is an abelian variety by Proposition 2.9. By the universal property of the Albanese morphism, we conclude that \(S_{1}^{\prime}=S_{1}\cong A^{\frac{1}{p}}\), which means that \(f_{1}\colon X_{1}\to S_{1}\) is a fibration.
Case (2): \(-(K_{X}+\Delta)|_{X_{K(S)}}\sim_{\mathbb{Q}}0\).
By Lemma 2.6, there exists a big open subset \(S^{\circ}\) contained in the regular locus of \(S\) and a \(\mathbb{Q}\)-divisor \(D^{\circ}\) on \(S^{\circ}\), such that \(-(K_{X}+\Delta)|_{f^{-1}(S^{\circ})}\sim_{\mathbb{Q}}-f^{*}D^{\circ}\) and \(-D\) is pseudo-effective, where \(D:=\overline{D^{\circ}}\).
We first show that if at least one of \(f\) and \(a_{S}\) is separable, then \(S\) is an abelian variety. Under this condition, by applying Theorem 6.2 or Theorem 7.2 to the fibration \(f^{\circ}\colon X_{S^{\circ}}\to S^{\circ}\), we obtain that \(D^{\circ}\geq tK_{S^{\circ}}\) for some positive number \(t\). Since both \(-D\) and \(K_{S}\) are pseudo-effective, we conclude \(K_{S}\equiv 0\), which implies that \(S\) is an abelian variety.
Finally, assume both \(f\) and \(a_{S}\) are inseparable. We can apply Theorem 7.3 to exclude the case that \(X_{K(S_{1})}\) is reduced, and show that \(S_{1}\) is an abelian variety as in case (1.3).
To finish the proof of Theorem 8.1, we treat the remaining case. We argue by contradiction and assume \(\kappa(S)>0\). By Lemma 8.4, we only need to consider the case where
* \(X_{K(S_{1})}\) is non-reduced, \(S\to A\) is inseparable and \(S_{1}=A^{\frac{1}{p}}\).
By Proposition 7.1 (4), we may write that
\[\pi^{*}K_{X}\sim_{\mathbb{Q}}K_{X_{1}}+(p-1)\mathfrak{C},\]
where \(\mathfrak{C}=\mathfrak{M}+V\) is the decomposition of the linear system \(|\mathfrak{C}|\) into its movable part \(\mathfrak{M}\) and fixed part \(V\), with \(\deg_{K(S_{1})}\mathfrak{M}>0\), and then
\[\pi^{*}(K_{X}+\Delta)\sim_{\mathbb{Q}}K_{X_{1}}+\mathfrak{M}+\Delta_{1},\]
where \(\Delta_{1}=\pi^{*}\Delta+(p-2)M_{0}+(p-1)V\) for some \(M_{0}\in\mathfrak{M}\).
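For orientation, we recall (as a sketch) the canonical bundle formula for height-one quotients by foliations: writing \(\pi\colon X_{1}\to X_{1}/\mathcal{F}_{X_{1}/X}=X\) for the quotient map, one has
\[K_{X_{1}}\sim_{\mathbb{Q}}\pi^{*}K_{X}+(p-1)\det\mathcal{F}_{X_{1}/X}.\]
Comparing with the formula above yields \(\det\mathcal{F}_{X_{1}/X}\sim_{\mathbb{Q}}-\mathfrak{C}\), which is the identity invoked after the Claim below.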
Claim. \(\mathfrak{M}\) _is base point free with \(\nu(\mathfrak{M})=1\) and hence induces a fibration \(g\colon X_{1}\to\mathbb{P}^{1}\). Moreover, a general fiber \(G_{t}\) of \(g\) is isomorphic to an abelian variety, and \(\Delta_{1}|_{G_{t}^{\nu}}\equiv 0\)._
This assertion follows from applying Proposition 5.2 to the pair \((X_{1},\mathfrak{M}+\Delta_{1})\), once the following condition is verified:
* if \(((X_{1})_{K(S_{1})},(\Delta_{1})_{K(S_{1})})\) is not klt and \(T_{1}\) denotes the unique irreducible horizontal component of \(\Delta_{1}\), then \(T_{1}|_{T_{1}^{\nu}}\) is pseudo-effective, where \(T_{1}^{\nu}\) is the normalization of \(T_{1}\).
Let \(T\) be the prime divisor supported on \(\pi(T_{1})\) and \(T^{\nu}\) denote the normalization of \(T\). Denote by \(\pi_{T_{1}^{\nu}}\colon T_{1}^{\nu}\to T^{\nu}\) the natural morphism. We may write that \(\pi^{*}T=cT_{1}\) for some \(c>0\). Let \(a\) be the coefficient of \(T\) in \(\Delta\). Then \(a<1\) and by Lemma 2.8 we have
\[cT_{1}|_{T_{1}^{\nu}}\sim_{\mathbb{Q}}\pi_{T_{1}^{\nu}}^{*}(T|_{T^{\nu}})\sim_ {\mathbb{Q}}\pi_{T_{1}^{\nu}}^{*}\bigg{(}\frac{1}{1-a}\Big{(}K_{T^{\nu}}+B_{T ^{\nu}}-(K_{X}+\Delta)|_{T^{\nu}}\Big{)}\bigg{)}.\]
This divisor is pseudo-effective since \(K_{T^{\nu}}\geq 0\) and \(-(K_{X}+\Delta)\) is assumed nef.
Granted the above Claim, the condition \(\Delta_{1}|_{G_{t}^{\nu}}\equiv 0\) implies that
\[\det\mathcal{F}_{X_{1}/X}|_{G_{t}^{\nu}}=-\mathfrak{C}|_{G_{t}^{\nu}}\equiv 0.\]
Applying Lemma 8.2 to \(f_{1}\colon X_{1}\to S_{1}=A^{\frac{1}{p}}\), \(g\) and the foliation \(\mathcal{F}_{X_{1}/X}\), noting that \(X_{1}/\mathcal{F}_{X_{1}/X}=X\) and hence \(S^{\prime}=S\), we conclude that \(S\) is an abelian variety, which contradicts our assumption.
### Proof of Theorem 1.3
If the Albanese morphism \(a_{S}\colon S\to A\) is separable, we can apply Theorem 7.2. If \(a_{S}\) is inseparable, we have \(\kappa(S,D)\geq 0\) by Theorem 7.3.
Now assume that \(a_{S}\colon S\to A\) is finite and inseparable and that \(((X)_{K(S)},\Delta_{K(S)})\) is klt. To show \(\kappa(S,D)\geq 1\), we argue by contradiction and suppose \(\kappa(S,D)=0\). Then by Theorem 7.3, there is a finite morphism \(\tau\colon S_{1}\to S\) from an abelian variety. By Covering Theorem 2.3, \(\kappa(S_{1},\tau^{*}D)=\kappa(S,D)=0\). Note that any effective divisor on an abelian variety is semi-ample, thus \(\tau^{*}D\sim_{\mathbb{Q}}0\). This implies that \(D\equiv 0\) and therefore \(K_{X}+\Delta\equiv 0\). Then we can apply Theorem 8.1.